Milestone Technologies, Inc.

Senior Data Engineer - W2 Candidates Only

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 7+ years of experience in data engineering and AWS services. Contract length is unspecified, with a pay rate of USD $75.00/hr - $79.00/hr. Location is remote. Key skills include SQL, Python, and modern data platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
March 7, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Burbank, CA
-
🧠 - Skills detailed
#Airflow #S3 (Amazon Simple Storage Service) #Scripting #Lambda (AWS Lambda) #Databricks #RDS (Amazon Relational Database Service) #ML (Machine Learning) #PySpark #ETL (Extract, Transform, Load) #Metadata #Monitoring #AWS (Amazon Web Services) #Data Architecture #BI (Business Intelligence) #Deployment #Spark (Apache Spark) #SQL (Structured Query Language) #Data Catalog #Data Transformations #Informatica #Data Science #Batch #Python #Data Pipeline #Redshift #Cloud #DynamoDB #Anomaly Detection #Snowflake #Data Governance #AI (Artificial Intelligence) #Data Engineering
Role description
Summary
The Senior Data Engineer plays a hands-on role within the Platform POD, ensuring data pipelines, integrations, and services are performant, reliable, and reusable. This role partners closely with Data Architects, Cloud Architects, and application PODs to deliver governed, AI/ML-ready data products.

Responsibilities
• Lead development of batch and streaming pipelines using AWS-native tools (Glue, Lambda, Step Functions, Kinesis) and modern orchestration frameworks.
• Implement best practices for monitoring, resilience, and cost optimization in high-scale pipelines.
• Collaborate with architects to translate canonical and semantic data models into physical implementations.
• Build pipelines that deliver clean, well-structured data to analysts, BI tools, and ML pipelines.
• Work with data scientists to enable feature engineering and deployment of ML models into production environments.
• Embed validation, lineage, and anomaly detection into pipelines.
• Contribute to the enterprise data catalog and enforce schema alignment across PODs.
• Partner with governance teams to implement role-based access, tagging, and metadata standards.
• Guide junior data engineers, sharing best practices in pipeline design and coding standards.
• Participate in POD ceremonies (backlog refinement, sprint reviews) and program-level architecture syncs.
• Promote reusable services and reduce fragmentation by advocating platform-first approaches.

Requirements
• 7+ years of experience in data engineering, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions).
• 7+ years of experience in SQL, Python, PySpark, and scripting for data transformations.
• 7+ years of experience working with modern data platforms (Snowflake, Databricks, Redshift, Informatica).

Soft Skills
• Strong collaboration and mentoring skills; ability to influence across PODs and domains.
• Knowledge of data governance practices, including lineage, validation, and cataloging.

Technology
• Strong skills in SQL, Python, PySpark, and scripting for data transformations.
• Hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions).
• Experience working with modern data platforms (Snowflake, Databricks, Redshift, Informatica).
• Proven ability to optimize pipelines for both batch and streaming use cases.

Education
N/A

The estimated pay range for this position is USD $75.00/hr - USD $79.00/hr. Exact compensation and offers of employment are dependent on job-related knowledge, skills, experience, licenses or certifications, and location. We also offer comprehensive benefits. The Talent Acquisition Partner can share more details about compensation or benefits for the role during the interview process.