

Data Engineer (W2)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (W2) on a remote contract requiring 7+ years of experience. Key skills include Python, SQL, Spark, AWS services, Airflow, and dbt, with a focus on scalable data pipelines and ETL/ELT workflows.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 11, 2025
Project duration: Unknown
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Programming #AWS S3 (Amazon Simple Storage Service) #Observability #Scala #Redshift #Cloud #Data Pipeline #AWS (Amazon Web Services) #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Data Engineering #Data Quality #Data Science #Lambda (AWS Lambda) #dbt (data build tool) #Data Analysis #Data Lake #Python #Athena #SQL (Structured Query Language) #Airflow #S3 (Amazon Simple Storage Service)
Role description
Role: Data Engineer
Location: Remote (USA)
Type: W2 Contract (not open to third-party, C2C, or 1099 arrangements).
Experience: 7+ Years
About the Role
We are looking for a skilled Data Engineer to design and build scalable data pipelines and infrastructure. In this role, you will work with modern data technologies including Spark, Python, SQL, and cloud-native tools on AWS (S3, Glue, Redshift, Athena, Lambda). You'll be responsible for implementing robust ETL/ELT workflows, ensuring data quality and governance, and collaborating closely with data analysts, data scientists, and business teams to drive data-driven decision-making.
Ideal candidates will have strong hands-on experience with Airflow, dbt, and Lakehouse architectures, and a passion for building clean, reliable, and observable data solutions.
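As a rough illustration of the kind of pipeline described above, the sketch below shows a minimal PySpark job that reads raw events from S3, applies basic cleansing, and writes partitioned Parquet to a curated layer that Glue, Athena, or Redshift Spectrum could query. The bucket names, paths, and columns are hypothetical and not taken from this posting.

```python
# Illustrative sketch only -- bucket names, paths, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw JSON events landed in an S3 data lake (hypothetical path).
raw = spark.read.json("s3://example-raw-bucket/orders/2025/07/11/")

# Transform: basic cleansing and typing before loading to the curated layer.
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

# Load: write partitioned Parquet to the curated bucket.
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-curated-bucket/orders/"))
```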
Responsibilities:
• Design, develop, and deploy scalable data pipelines using Spark, Python, and SQL.
• Work with AWS services such as S3, Glue, Redshift, Athena, and Lambda.
• Implement ETL/ELT workflows using orchestration tools like Airflow (see the sketch after this list).
• Ensure data quality, lineage, governance, and observability.
• Collaborate with analysts, data scientists, and business stakeholders.
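The sketch below is a minimal example of the orchestration and data-quality responsibilities above: an Airflow DAG with a hypothetical load task followed by a row-count check. DAG and task names are illustrative, each task body is a placeholder, and the `schedule` argument assumes Airflow 2.4+.

```python
# Illustrative sketch only -- DAG id, task names, and check logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for triggering a Spark or Glue job in a real pipeline.
    print("running extract/transform/load step")


def check_row_counts():
    # Placeholder data-quality gate; a real check might compare source and
    # target row counts or run dbt tests.
    print("validating row counts")


with DAG(
    dag_id="orders_daily_pipeline",
    start_date=datetime(2025, 7, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    quality = PythonOperator(task_id="check_row_counts", python_callable=check_row_counts)

    # Run the quality check only after the load completes.
    load >> quality
```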
Required Skills:
• Strong Python and SQL programming.
• Experience with Spark, Airflow, and dbt.
• Expertise in AWS and cloud-native data platforms.
• Familiarity with Data Lake/Lakehouse architectures and versioned data systems.