Madison-Davis, LLC

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unknown. Key skills include Python, Apache Airflow, SQL, and AWS. Financial services industry experience is preferred. The work location is listed as "remote."
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 11, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Greater Philadelphia
🧠 - Skills detailed
#Apache Airflow #Python #Data Warehouse #Data Transformations #Data Framework #Data Ingestion #Batch #BigQuery #RDS (Amazon Relational Database Service) #Redshift #Automation #Airflow #Lambda (AWS Lambda) #Data Engineering #S3 (Amazon Simple Storage Service) #React #EC2 #Observability #Scala #Security #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Kubernetes #SQL (Structured Query Language) #AWS (Amazon Web Services) #Data Pipeline #Data Modeling #dbt (data build tool) #Cloud #Snowflake
Role description
Our client is expanding its multi-asset engineering team and is seeking a Data Engineer to help design and deliver scalable data platforms that support a broad engineering and analytics community. This role sits at the intersection of cloud infrastructure, data pipelines, and platform enablement, with direct impact across multiple lines of business. You’ll work closely with engineers, analysts, and stakeholders to build secure, resilient, and performant data solutions that power reporting, analytics, and downstream applications. This is a hands-on engineering role for someone who enjoys building foundational platforms rather than one-off data assets.

What You’ll Do
• Design, develop, and maintain scalable cloud-based data infrastructure and pipelines
• Build and optimize batch and event-driven data workflows using modern orchestration tools (see the sketches after this description)
• Develop reusable data frameworks and shared services to support a global engineering community
• Implement data transformations and models using analytics engineering best practices
• Partner with cross-functional teams to support data ingestion, processing, and consumption needs
• Ensure data reliability, performance, security, and observability across platforms
• Contribute to platform standards, automation, and continuous improvement initiatives

What You Bring
• Strong hands-on experience with Python for data engineering and automation
• Experience building and managing workflows using Apache Airflow
• Solid SQL skills, including performance tuning and complex query development
• Practical experience with cloud platforms, primarily AWS (e.g., S3, EC2, Lambda, Glue, RDS, Redshift)
• Experience with modern data transformation tools such as dbt
• Familiarity with data modeling concepts and analytics engineering patterns
• Ability to collaborate effectively in a distributed, cross-functional engineering environment

Nice to Have
• Experience with Snowflake or other cloud data warehouses
• Exposure to Kubernetes or containerized data platforms
• Experience with GCP services such as BigQuery
• Front-end or UI exposure (e.g., React) to support data-driven applications
• Prior experience within financial services or regulated environments