

Madison-Davis, LLC
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in the Greater Philadelphia Area; the contract length and pay rate are unspecified. Key skills include Python, SQL, AWS, and Apache Airflow. Financial services experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Greater Philadelphia
-
🧠 - Skills detailed
#Data Pipeline #EC2 #Cloud #Scala #Data Warehouse #ETL (Extract, Transform, Load) #Data Modeling #dbt (data build tool) #GCP (Google Cloud Platform) #React #SQL (Structured Query Language) #Automation #Airflow #Batch #Apache Airflow #RDS (Amazon Relational Database Service) #Snowflake #Redshift #S3 (Amazon Simple Storage Service) #Data Framework #Data Transformations #Lambda (AWS Lambda) #BigQuery #Kubernetes #AWS (Amazon Web Services) #Security #Data Engineering #Observability #Python #Data Ingestion
Role description
Must be in the Greater Philadelphia Area or reasonably able to commute.
Description
Our client is expanding its multi-asset engineering team and is seeking a Data Engineer to help design and deliver scalable data platforms that support a broad engineering and analytics community. This role sits at the intersection of cloud infrastructure, data pipelines, and platform enablement, with direct impact across multiple lines of business.
You’ll work closely with engineers, analysts, and stakeholders to build secure, resilient, and performant data solutions that power reporting, analytics, and downstream applications. This is a hands-on engineering role for someone who enjoys building foundational platforms rather than one-off data assets.
What You’ll Do
• Design, develop, and maintain scalable cloud-based data infrastructure and pipelines
• Build and optimize batch and event-driven data workflows using modern orchestration tools
• Develop reusable data frameworks and shared services to support a global engineering community
• Implement data transformations and models using analytics engineering best practices
• Partner with cross-functional teams to support data ingestion, processing, and consumption needs
• Ensure data reliability, performance, security, and observability across platforms
• Contribute to platform standards, automation, and continuous improvement initiatives
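To make the responsibilities above concrete, here is a minimal sketch of the kind of batch pipeline this role builds: extract raw records, apply a transformation, and load the result into a queryable table. All names here (the trades data, the trades_clean table) are hypothetical, using SQLite standing in for a cloud warehouse.

```python
import csv
import io
import sqlite3

# Hypothetical raw input; in practice this would arrive from S3 or an API.
RAW_CSV = """symbol,qty,price
AAPL,10,190.5
MSFT,5,410.0
AAPL,3,191.2
"""

def extract(text):
    """Parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast types and compute a derived notional column."""
    out = []
    for r in rows:
        qty, price = int(r["qty"]), float(r["price"])
        out.append((r["symbol"], qty, price, qty * price))
    return out

def load(rows, conn):
    """Idempotently rebuild and populate the target table."""
    conn.execute("DROP TABLE IF EXISTS trades_clean")
    conn.execute(
        "CREATE TABLE trades_clean (symbol TEXT, qty INT, price REAL, notional REAL)"
    )
    conn.executemany("INSERT INTO trades_clean VALUES (?, ?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(notional) FROM trades_clean").fetchone()[0]
```

In production the extract/transform/load steps would typically become tasks in an Airflow DAG, with the transformation layer managed in dbt rather than hand-written Python.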
What You Bring
• Strong hands-on experience with Python for data engineering and automation
• Experience building and managing workflows using Apache Airflow
• Solid SQL skills, including performance tuning and complex query development
• Practical experience with cloud platforms, primarily AWS (e.g., S3, EC2, Lambda, Glue, RDS, Redshift)
• Experience with modern data transformation tools such as dbt
• Familiarity with data modeling concepts and analytics engineering patterns
• Ability to collaborate effectively in a distributed, cross-functional engineering environment
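The SQL performance-tuning requirement can be illustrated in miniature: the same lookup query goes from a full table scan to an index search once a suitable index exists. The orders table and index name are hypothetical, and SQLite stands in for the warehouse engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, "SYM%d" % (i % 100), i) for i in range(1000)],
)

query = "SELECT SUM(qty) FROM orders WHERE symbol = 'SYM7'"

# Before indexing: the planner reports a full scan of the table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

conn.execute("CREATE INDEX idx_orders_symbol ON orders(symbol)")

# After indexing: the planner reports a search using the new index.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]
```

The same scan-versus-seek reasoning carries over to Redshift or Snowflake, where distribution keys, sort keys, and clustering play the role the index plays here.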
Nice to Have
• Experience with Snowflake or other cloud data warehouses
• Exposure to Kubernetes or containerized data platforms
• Experience with GCP services such as BigQuery
• Front-end or UI exposure (e.g., React) to support data-driven applications
• Prior experience within financial services or regulated environments