

Milestone Technologies, Inc.
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position for a 10-month W2 contract, paying up to $53/hr. It requires 3+ years of experience with SQL, Python, and AWS services, alongside orchestration tools. Hybrid work is based in Burbank, CA.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
424
🗓️ - Date
January 27, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Burbank, CA
🧠 - Skills detailed
#Data Pipeline #Databricks #S3 (Amazon Simple Storage Service) #Airflow #Cloud #Snowflake #Debugging #IAM (Identity and Access Management) #Agile #Datasets #Monitoring #Redshift #Data Engineering #Security #Data Science #Data Architecture #Lambda (AWS Lambda) #Data Quality #AWS (Amazon Web Services) #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Compliance #Informatica #Data Catalog
Role description
10-Month W2 Contract (No Visa Sponsorship/No C2C/No Student Sponsorship)
Rate up to $53/hr. (No PTO and No Paid Holidays)
Hybrid - 3 days onsite in Burbank (Only local candidates are being considered at this time)
Seeking a Data Engineer to be an integral member of the Platform POD, focused on building, maintaining, and optimizing data pipelines that deliver trusted data to product PODs, analysts, and data scientists. This role works closely with the Senior Data Engineer, Data Architect, and Cloud Architect to implement pipelines aligned with enterprise standards and program goals.
Responsibilities
Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions); a minimal sketch of this kind of step follows this list.
Write efficient SQL and Python scripts for ingestion, transformation, and enrichment.
Monitor pipeline health, troubleshoot issues, and ensure SLAs for data freshness.
Implement data models defined by architects into physical schemas.
Contribute to pipeline designs that align with canonical and semantic standards.
Collaborate with application PODs to deliver pipelines tailored to product features.
Apply validation rules and monitoring to detect and surface data quality issues.
Tag, document, and register new datasets in the enterprise data catalog.
Follow platform security and compliance practices (e.g., Lake Formation, IAM).
Actively participate in sprint ceremonies and backlog refinement.
Work closely with application developers, analysts, and data scientists to clarify requirements and unblock dependencies.
Promote reuse of pipelines and shared services across PODs.
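
For illustration only, here is a minimal sketch of the kind of ingestion step described above: a Kinesis-triggered AWS Lambda handler that applies a basic validation rule and lands good records in S3. The bucket name, the event_id field, and the key layout are hypothetical placeholders, not details from this role.

import base64
import json

import boto3

s3 = boto3.client("s3")

DEST_BUCKET = "raw-events-bucket"  # hypothetical bucket name


def handler(event, context):
    """Decode Kinesis records, apply a simple data-quality rule, and
    land valid records in S3 for downstream jobs (e.g., Glue) to pick up."""
    good, bad = [], []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Example validation rule: surface records missing a primary key
        # rather than silently dropping them.
        if payload.get("event_id"):  # "event_id" is an assumed field name
            good.append(payload)
        else:
            bad.append(payload)
    if good:
        s3.put_object(
            Bucket=DEST_BUCKET,
            Key=f"landing/{context.aws_request_id}.json",
            Body="\n".join(json.dumps(r) for r in good),
        )
    # Returning counts keeps pipeline-health monitoring straightforward.
    return {"ingested": len(good), "rejected": len(bad)}

In practice a Step Functions state machine or Glue job would sit downstream; the sketch only shows the ingestion-plus-validation shape named in the responsibilities.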
Requirements
3+ years of experience as a Data Engineer.
3+ years of hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
3+ years of experience with orchestration tools (Airflow, Step Functions) and CI/CD workflows; see the DAG sketch after this list.
Exposure to modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
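
As a hedged illustration of the orchestration experience asked for above, here is a minimal Airflow DAG wiring an extract-transform-load sequence. It assumes Airflow 2.4+ (for the schedule argument); the dag_id and task bodies are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data")  # placeholder body


def transform():
    print("clean and enrich")  # placeholder body


def load():
    print("write to warehouse")  # placeholder body


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Ordering encodes the freshness dependency: load runs only after
    # a successful transform.
    t_extract >> t_transform >> t_load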
Soft Skills
Strong problem-solving and debugging skills for pipeline operations.
Collaborative mindset - willingness to work in agile PODs and contribute to cross-functional outcomes.
Education
BA/BS or equivalent work experience
The estimated pay range for this position is USD $48.00/hr - USD $53.00/hr. Exact compensation and offers of employment are dependent on job-related knowledge, skills, experience, licenses or certifications, and location. We also offer comprehensive benefits. The Talent Acquisition Partner can share more details about compensation or benefits for the role during the interview process.