Lead/Architect - Data Engineer (W2 Role)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead/Architect - Data Engineer (W2 role) on a contract of unspecified duration, offering a competitive pay rate. Candidates must have 5+ years of experience, strong SQL skills, and proficiency in AWS, Python, and PySpark. Remote work is available.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 29, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
Minnesota, United States
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Data Quality #Automation #Security #Lambda (AWS Lambda) #AWS (Amazon Web Services) #Complex Queries #Compliance #Data Processing #SQL Queries #Redshift #Data Transformations #Spark (Apache Spark) #Python #Scripting #Scala #Data Pipeline #PySpark #Data Architecture #Big Data #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Data Engineering
Role description
Data Engineer (AWS, SQL, Python, PySpark) - Remote - US citizens only

We're seeking a hands-on Data Engineer to support a strategic enterprise data initiative. The right candidate will have strong technical expertise in building data pipelines, optimizing data systems, and especially in writing complex SQL queries for large-scale data environments.

Key Responsibilities
• Design, develop, and maintain scalable data pipelines and infrastructure.
• Write and optimize complex SQL queries to support advanced analytics and reporting.
• Build and optimize data sets, data models, and ETL processes.
• Collaborate with cross-functional teams to understand business data needs and deliver insights.
• Ensure data quality, security, and compliance across systems.
• Support performance tuning, troubleshooting, and ongoing system improvements.
• Document processes and share knowledge across the engineering team.

Key Skills & Experience
• 5+ years of professional experience in data engineering or related roles.
• Strong SQL expertise with proven ability to write and optimize complex queries.
• Hands-on experience with AWS data tools (Redshift, Glue, Lambda, S3).
• Proficiency in Python for scripting, automation, and data transformations.
• Experience with PySpark for distributed data processing (nice to have).
• Solid understanding of data warehousing, ETL development, and big data architectures.
• Strong communication skills and ability to collaborate across technical and business teams.