Senior AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AWS Data Engineer in McKinney, TX, offering a 12+ month contract at a hybrid work location. Requires 8+ years of experience, a Master's degree, and expertise in AWS tools, SQL, Python, and ETL development.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 29, 2025
πŸ•’ - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
McKinney, TX
-
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Computer Science #SNS (Simple Notification Service) #Databases #Lambda (AWS Lambda) #AWS (Amazon Web Services) #Data Processing #SQL Queries #Redshift #Spark (Apache Spark) #Python #Scala #Data Pipeline #PySpark #Cloud #Pandas #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Data Engineering
Role description
Senior AWS Data Engineer | McKinney, TX | 12+ Months Contract | Hybrid (3 days on-site) | F2F Interview
• We're hiring a Senior AWS Data Engineer with 8+ years of experience and a Master's degree in Computer Science or a related field.
• At least 5 years building and optimizing robust data pipelines using SQL, window functions, stored procedures, and Python, incorporating multithreading for enhanced efficiency.
• 3+ years of experience with AWS tools, including Glue ETL jobs, Lambda, S3, and Redshift, managing large-scale data workflows.
• Extensive experience with AWS services, particularly Redshift (cluster management, cost optimization) and Lambda (serverless workflows).
• Must excel in ETL development, building scalable pipelines for structured and semi-structured data from diverse sources such as APIs and databases.
• Expert in writing and optimizing complex SQL queries, including window functions, for high-volume data processing.
• Proficient in Python (boto3, pandas, UDFs) and PySpark for dynamic data frames and pipeline orchestration.
• Strong expertise in AWS CDK for infrastructure-as-code to deploy secure, scalable systems.
• Experience in performance tuning and automating failure notifications (CloudWatch, SNS) is required.
• Seeking self-driven professionals to architect production-ready data pipelines. Join us to drive innovation in cloud-native data engineering!
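For candidates gauging the "Python with multithreading" requirement above, here is a minimal sketch of the kind of pattern it refers to: running I/O-bound transform steps concurrently with the standard library's thread pool. All names (`transform`, `run_pipeline`, the record shape) are illustrative assumptions, not taken from the posting.

```python
from concurrent.futures import ThreadPoolExecutor

def transform(record: dict) -> dict:
    # Placeholder per-record transform; in a real pipeline this might
    # call an API or read from S3, which is where threads pay off.
    return {**record, "amount": round(record["amount"], 2)}

def run_pipeline(records: list[dict], workers: int = 4) -> list[dict]:
    # Fan the records out across a thread pool and preserve input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(transform, records))
```

Threads suit the I/O-bound work typical of Glue/Lambda-adjacent pipelines; for CPU-bound transforms, PySpark or multiprocessing would be the usual choice instead.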