

Senior AWS Data Engineer (Python)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior AWS Data Engineer (Python) in McKinney, TX, on a 12+ month contract. It requires 7+ years of experience, a Master's degree, and expertise in ETL, SQL, Python, AWS services, and cloud-native solutions. Hybrid work model.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
July 25, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
McKinney, TX
Skills detailed
#Data Engineering #Redshift #Data Pipeline #Cloud #IAM (Identity and Access Management) #AWS (Amazon Web Services) #Databases #Spark (Apache Spark) #Computer Science #REST (Representational State Transfer) #Data Processing #Scala #PySpark #SNS (Simple Notification Service) #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #SQL Queries #Lambda (AWS Lambda) #Pandas #Data Ingestion #REST API #Python
Role description
Senior AWS Data Engineer (Python)
McKinney, TX
12+ Months Contract
Hybrid (3 days)
Face-to-face (F2F) interview
Interested? Contact santosh@ebusinesstechcorp.com
• We're hiring a Senior AWS Data Engineer (Python) with 7+ years of experience and a Master's degree in Computer Science or a related field.
• Must excel in ETL development, building scalable pipelines for structured and semi-structured data from diverse sources such as APIs and databases (see the ingestion sketch after this list).
• Expert in writing and optimizing complex SQL queries, including window functions, for high-volume data processing (see the Redshift Data API sketch below).
• Proficient in Python (boto3, pandas, UDFs) and PySpark for dynamic frames and pipeline orchestration (see the Glue DynamicFrame sketch below).
• Hands-on experience with AWS services (Lambda, Glue, Redshift, IAM, CDK, EventBridge) is essential.
• Skilled in REST APIs, data ingestion frameworks, and cloud-native patterns for robust data solutions.
• Strong expertise in CDK for infrastructure-as-code to deploy secure, scalable systems (see the CDK sketch below).
• Experience in performance tuning and automating failure notifications (CloudWatch, SNS) is required (see the SNS alerting sketch below).
• Seeking self-driven professionals to architect production-ready data pipelines. Join us to drive innovation in cloud-native data engineering!
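
To make the skill bullets concrete, a few illustrative sketches follow. First, REST-to-S3 ingestion with requests, pandas, and boto3: a minimal sketch of the "APIs and databases" pipeline pattern. The endpoint, bucket, pagination cursor, and S3 key are all hypothetical, not this employer's actual stack.

```python
"""Minimal REST-to-S3 ingestion sketch; endpoint and bucket are hypothetical."""
import io

import boto3
import pandas as pd
import requests

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint
BUCKET = "example-raw-zone"                    # hypothetical bucket

def fetch_pages(url: str) -> list[dict]:
    """Follow a simple `next` cursor until the API stops returning one."""
    records, params = [], {}
    while True:
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body["items"])
        if not body.get("next"):
            return records
        params = {"cursor": body["next"]}

def land_to_s3(records: list[dict], key: str) -> None:
    """Flatten semi-structured JSON with pandas and land it as Parquet."""
    df = pd.json_normalize(records)
    buf = io.BytesIO()
    df.to_parquet(buf, index=False)  # needs pyarrow or fastparquet installed
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue())

if __name__ == "__main__":
    land_to_s3(fetch_pages(API_URL), "orders/dt=2025-07-25/orders.parquet")
```

Landing raw extracts as Parquet in a date-partitioned key keeps the data queryable by Glue and Redshift Spectrum downstream.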
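Second, the window-function requirement: a sketch of a running-total query submitted through the boto3 Redshift Data API. The workgroup, database, and table names are hypothetical, and this assumes Redshift Serverless.

```python
"""Window-function query via the Redshift Data API; names are hypothetical."""
import boto3

SQL = """
SELECT customer_id,
       order_ts,
       amount,
       SUM(amount) OVER (PARTITION BY customer_id
                         ORDER BY order_ts
                         ROWS UNBOUNDED PRECEDING) AS running_total
FROM analytics.orders;
"""

client = boto3.client("redshift-data")
resp = client.execute_statement(
    WorkgroupName="example-serverless-wg",  # hypothetical serverless workgroup
    Database="analytics",
    Sql=SQL,
)
# Statement runs asynchronously: poll describe_statement with this id,
# then page through get_statement_result for the rows.
print(resp["Id"])
```

Against a provisioned cluster, the same call would take ClusterIdentifier plus DbUser or SecretArn instead of WorkgroupName.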
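Third, PySpark dynamic frames: a minimal Glue job sketch reading a cataloged table as a DynamicFrame, resolving a mixed-type column, and writing Parquet. The database, table, and output path are hypothetical, and the awsglue module is only available inside the Glue runtime.

```python
"""Minimal AWS Glue job sketch; catalog and path names are hypothetical."""
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a cataloged source as a DynamicFrame and resolve ambiguous types,
# which is the main reason to prefer DynamicFrames over plain DataFrames.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="orders")
dyf = dyf.resolveChoice(specs=[("amount", "cast:double")])

glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-zone/orders/"},
    format="parquet",
)
job.commit()
```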
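Fourth, CDK infrastructure-as-code with EventBridge: a sketch of a stack that deploys a Lambda function and triggers it hourly. It assumes a recent aws-cdk-lib (v2); the handler path, asset directory, and stack name are hypothetical.

```python
"""CDK v2 sketch: EventBridge-scheduled Lambda; names are hypothetical."""
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_events as events
from aws_cdk import aws_events_targets as targets
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class IngestStack(Stack):
    """Deploys an ingestion Lambda and an hourly EventBridge rule."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        fn = _lambda.Function(
            self, "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="handler.main",                  # hypothetical module.function
            code=_lambda.Code.from_asset("lambda"),  # hypothetical source dir
            timeout=Duration.minutes(5),
        )
        # Fire the ingestion function once an hour.
        rule = events.Rule(
            self, "HourlyIngest",
            schedule=events.Schedule.rate(Duration.hours(1)),
        )
        rule.add_target(targets.LambdaFunction(fn))

app = App()
IngestStack(app, "IngestStack")
app.synth()
```

CDK grants the EventBridge rule permission to invoke the function automatically, which is much of the appeal over hand-written IAM policies.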
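Finally, automated failure notifications: a sketch of a Lambda handler that publishes to an SNS topic on any unhandled error, then re-raises so the invocation is still recorded as failed in CloudWatch. The topic ARN and process() step are hypothetical placeholders.

```python
"""Failure-alerting sketch for a Lambda handler; topic ARN is hypothetical."""
import json
import os

import boto3

TOPIC_ARN = os.environ.get(
    "ALERT_TOPIC_ARN",
    "arn:aws:sns:us-east-1:123456789012:example-alerts",  # hypothetical
)

def handler(event, context):
    try:
        process(event)  # hypothetical pipeline step
        return {"status": "ok"}
    except Exception as exc:
        boto3.client("sns").publish(
            TopicArn=TOPIC_ARN,
            Subject="Pipeline failure",
            Message=json.dumps({"error": str(exc), "event": event}),
        )
        # Re-raise so the invocation is marked failed and any
        # CloudWatch error-metric alarms can also fire.
        raise

def process(event):
    """Placeholder transform; raises to demonstrate the alert path."""
    raise RuntimeError("example failure")
```

Pairing an explicit SNS publish with CloudWatch error alarms gives both an immediate human-readable alert and a durable metric trail.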