

Data Engineer (W2 Role) (US Citizen Only)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (W2, US Citizen Only) on a contract of unspecified duration, offering a competitive pay rate. Key skills include AWS, SQL, Python, and PySpark, and the role requires 5+ years of data engineering experience.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 30, 2025
Project duration
Unknown
Location type
Remote
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Minnesota, United States
Skills detailed
#Redshift #Python #SQL (Structured Query Language) #Scripting #Data Architecture #ETL (Extract, Transform, Load) #Data Quality #Data Transformations #Data Pipeline #Big Data #Complex Queries #AWS (Amazon Web Services) #Scala #Automation #Security #Compliance #Data Engineering #SQL Queries #Lambda (AWS Lambda) #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #PySpark #Data Processing
Role description
Data Engineer (AWS, SQL, Python, PySpark) – Remote – US citizen only
We're seeking a hands-on Data Engineer to support a strategic enterprise data initiative. The right candidate will have strong technical expertise in building data pipelines, optimizing data systems, and, in particular, writing complex SQL queries for large-scale data environments.
Key Responsibilities
• Design, develop, and maintain scalable data pipelines and infrastructure.
• Write and optimize complex SQL queries to support advanced analytics and reporting.
• Build and optimize data sets, data models, and ETL processes.
• Collaborate with cross-functional teams to understand business data needs and deliver insights.
• Ensure data quality, security, and compliance across systems.
• Support performance tuning, troubleshooting, and ongoing system improvements.
• Document processes and share knowledge across the engineering team.
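To give a flavor of the "complex SQL for analytics and reporting" work the responsibilities describe, here is a minimal, hypothetical sketch using a window function. It runs against an in-memory SQLite database purely for illustration; in the role itself the engine would be Redshift or another warehouse, and the table and column names here are invented.

```python
import sqlite3

# Illustrative only: invented orders table, loaded into in-memory SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, '2025-01-05', 120.0),
        (1, '2025-02-10',  80.0),
        (2, '2025-01-20', 200.0);
""")

# A running total per customer via a window function -- a common
# analytics/reporting pattern of the kind the role calls for.
rows = conn.execute("""
    SELECT customer_id,
           order_date,
           SUM(amount) OVER (
               PARTITION BY customer_id
               ORDER BY order_date
           ) AS running_total
    FROM orders
    ORDER BY customer_id, order_date
""").fetchall()

for row in rows:
    print(row)
```

The same query shape (partitioned window, ordered running aggregate) carries over directly to Redshift, which supports standard SQL window functions.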
Key Skills & Experience
• 5+ years of professional experience in data engineering or related roles.
• Strong SQL expertise with proven ability to write and optimize complex queries.
• Hands-on experience with AWS data tools (Redshift, Glue, Lambda, S3).
• Proficiency in Python for scripting, automation, and data transformations.
• Experience with PySpark for distributed data processing (nice to have).
• Solid understanding of data warehousing, ETL development, and big data architectures.
• Strong communication skills and ability to collaborate across technical and business teams.
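The "Python for data transformations" and "ensure data quality" items above often combine into a validation gate that splits records into loadable rows and rejects before an ETL load. The sketch below is a hypothetical, stdlib-only illustration; the field names and rules are invented, not taken from the role.

```python
def validate(record: dict) -> list:
    """Return a list of data-quality violations for a single record."""
    errors = []
    if not record.get("customer_id"):
        errors.append("missing customer_id")
    if not isinstance(record.get("amount"), (int, float)):
        errors.append("amount is not numeric")
    elif record["amount"] < 0:
        errors.append("amount is negative")
    return errors


def split_for_load(records):
    """Partition records into clean rows and (record, violations) rejects."""
    clean, rejected = [], []
    for rec in records:
        problems = validate(rec)
        if problems:
            rejected.append((rec, problems))
        else:
            clean.append(rec)
    return clean, rejected


clean, rejected = split_for_load([
    {"customer_id": 1, "amount": 99.5},
    {"customer_id": None, "amount": -3},
])
print(len(clean), "clean,", len(rejected), "rejected")
```

In a production pipeline the rejects would typically be written to a quarantine table or dead-letter location for review rather than silently dropped.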