Senior Data Engineer (10+) - W2 Role

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (10+) on a W2 contract basis, paying $60.00 - $70.00 per hour and requiring in-person work in Dallas, TX. Key skills include Python, PySpark, AWS (Glue, Redshift), Kafka, and 5+ years of cloud-based data engineering experience.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
560
-
πŸ—“οΈ - Date discovered
September 7, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
W2 Contractor
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Dallas, TX 75201
-
🧠 - Skills detailed
#AWS Glue #Jenkins #Schema Design #Lambda (AWS Lambda) #Kafka (Apache Kafka) #PySpark #GIT #Redshift #SQL (Structured Query Language) #Batch #Data Governance #Data Quality #Data Processing #Data Science #S3 (Amazon Simple Storage Service) #Monitoring #Scala #AWS (Amazon Web Services) #Apache Kafka #IAM (Identity and Access Management) #Version Control #ETL (Extract, Transform, Load) #Cloud #Data Pipeline #Data Warehouse #Data Lake #Data Engineering #Datasets #Spark (Apache Spark) #MySQL #Security #PostgreSQL #Python
Role description
We are seeking an experienced Data Engineer with strong expertise in Python, PySpark, AWS (Glue, Redshift), and Kafka. The ideal candidate will design and implement large-scale data pipelines, ensure data quality, and enable real-time and batch data processing in a cloud environment. This is a highly technical role requiring strong problem-solving skills and the ability to collaborate with business and technology stakeholders.

Key Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines using Python, PySpark, and AWS Glue (see the illustrative sketch at the end of this listing).
• Build and manage data warehouses and data lakes on AWS (Redshift, S3, Glue Catalog).
• Implement real-time streaming pipelines using Apache Kafka.
• Optimize the performance of data pipelines and queries for large datasets.
• Work closely with analysts, data scientists, and business teams to deliver clean, reliable, and well-structured data.
• Ensure data governance, quality, and security standards are followed.
• Automate data workflows and monitoring to improve reliability and efficiency.
• Troubleshoot and resolve complex data pipeline issues in production.

Required Skills & Qualifications
• 5+ years of experience as a Data Engineer in cloud-based environments.
• Strong hands-on expertise with Python and PySpark for data transformation.
• Proven experience with AWS services (Glue, Redshift, S3, Lambda, EMR, IAM).
• Expertise in Kafka for real-time event streaming and processing.
• Solid knowledge of data warehousing concepts, schema design, and performance tuning.
• Strong experience with SQL (Redshift, PostgreSQL, MySQL).
• Knowledge of CI/CD pipelines and version control (Git, Jenkins, etc.).
• Excellent analytical and communication skills.

Job Type: Contract
Pay: $60.00 - $70.00 per hour
Expected hours: 40 per week
Work Location: In person
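For orientation only, the sketch below illustrates the kind of pipeline the responsibilities describe: a PySpark batch job that reads raw data from S3, applies light cleansing, and writes partitioned output, plus a Structured Streaming read from Kafka. It is not part of the employer's codebase; all bucket names, topics, columns, and broker addresses are hypothetical placeholders.

```python
# Illustrative PySpark ETL sketch; all paths, topics, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Batch: read raw order data from S3, deduplicate and cleanse, write partitioned output.
raw = spark.read.parquet("s3://example-raw-bucket/orders/")          # hypothetical path
clean = (
    raw.dropDuplicates(["order_id"])                                 # hypothetical key column
       .withColumn("order_date", F.to_date("order_ts"))              # hypothetical timestamp column
       .filter(F.col("amount") > 0)
)
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"                            # hypothetical path
)

# Streaming: consume events from a Kafka topic with Structured Streaming
# (requires the spark-sql-kafka connector on the classpath).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")           # hypothetical broker
         .option("subscribe", "orders-events")                       # hypothetical topic
         .load()
         .selectExpr("CAST(value AS STRING) AS payload")
)
query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-curated-bucket/orders_stream/")
          .option("checkpointLocation", "s3://example-curated-bucket/checkpoints/orders/")
          .start()
)
```

On AWS Glue, the same batch logic would typically run inside a Glue job, with reads and writes resolved through the Glue Data Catalog rather than hard-coded S3 paths.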