eBusiness Technologies Corp.

Sr AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr AWS Data Engineer in McKinney, TX, on a long-term contract with an unspecified pay rate. Key requirements include 5+ years of ETL development experience plus AWS Glue, SQL, Redshift, and data governance skills. Hybrid work (3 days a week on-site) is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 31, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
McKinney, TX
🧠 - Skills detailed
#AWS (Amazon Web Services) #AWS Glue #Data Transformations #Spark (Apache Spark) #Python #AWS IAM (AWS Identity and Access Management) #XML (eXtensible Markup Language) #Automation #Amazon Redshift #ETL (Extract, Transform, Load) #Data Pipeline #Data Warehouse #Lambda (AWS Lambda) #Security #Migration #Data Engineering #AI (Artificial Intelligence) #IAM (Identity and Access Management) #Cloud #Redshift #Data Governance #SQL Server #ML (Machine Learning) #SQL (Structured Query Language) #PySpark #Schema Design
Role description
Sr AWS Data Engineer
McKinney, TX | Hybrid, 3 days a week | F2F interview | Long-term contract
Contact: santosh@ebusinesstechcorp.com

• Seeking an AWS Data Engineer (5+ years) with strong ETL development expertise and experience modernizing legacy data ecosystems (Mainframe, SQL Server, flat files, XML/CSV).
• Proven ability to design and orchestrate large-scale data pipelines leveraging AWS Glue (PySpark/Python), Lambda, and event-driven ingestion patterns (a minimal Glue job skeleton follows this list).
• Expert-level SQL, with proficiency in window functions, recursive CTEs, query plan analysis, and cost-based query optimization for high-volume data transformations.
• Hands-on experience with Amazon Redshift internals: schema design, workload management (WLM), distribution/sort keys, Spectrum integration, and federated queries (see the table-design sketch after this list).
• Skilled in AWS IAM security engineering, fine-grained access control, and enterprise-grade data governance in multi-account environments.
• Proficiency in Python-based pipeline frameworks for modular, reusable ETL and automated CI/CD for data pipelines.
• Exposure to AWS Bedrock and AI/ML services for embedding predictive/generative AI capabilities into data engineering workflows.
• Ability to architect resilient, scalable, and cost-optimized cloud-native data platforms, ensuring seamless migration of legacy workloads to a Redshift data warehouse.
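
As a rough illustration of the Glue pipeline pattern named above, here is a minimal PySpark Glue job skeleton that reads a cataloged legacy extract and loads it into Redshift. All identifiers (the legacy_db database, customer_extract table, redshift-conn connection, and target table) are hypothetical placeholders, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap; JOB_NAME and TempDir are supplied by Glue at runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "TempDir"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a legacy CSV/flat-file extract registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="legacy_db",           # hypothetical catalog database
    table_name="customer_extract",  # hypothetical cataloged table
)

# Example transformation: rename a legacy column to the warehouse convention.
cleaned = source.rename_field("CUST_NM", "customer_name")

# Stage through S3 and COPY into Redshift via a cataloged JDBC connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=cleaned,
    catalog_connection="redshift-conn",  # hypothetical Glue connection
    connection_options={"dbtable": "public.customer", "database": "dw"},
    redshift_tmp_dir=args["TempDir"],
)

job.commit()
```

In practice, the event-driven variant of this pattern triggers the job from Lambda (e.g., on an S3 object-created event) rather than on a schedule.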
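And as a sketch of the distribution/sort-key design the Redshift bullet refers to, the snippet below issues table DDL through the boto3 Redshift Data API. The workgroup, database, and table names are hypothetical, and real DISTKEY/SORTKEY choices depend on actual join and filter patterns.

```python
import boto3

# Assumes a Redshift Serverless workgroup named "analytics" and a database
# named "dw"; for a provisioned cluster, pass ClusterIdentifier=... instead.
client = boto3.client("redshift-data", region_name="us-east-1")

# DISTKEY on the join column co-locates fact rows with the matching dimension
# rows, and a compound SORTKEY on event_date speeds range-restricted scans.
ddl = """
CREATE TABLE IF NOT EXISTS dw.fact_orders (
    order_id     BIGINT,
    customer_id  BIGINT,
    event_date   DATE,
    amount       DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
COMPOUND SORTKEY (event_date);
"""

resp = client.execute_statement(
    WorkgroupName="analytics",  # hypothetical serverless workgroup
    Database="dw",
    Sql=ddl,
)
print(resp["Id"])  # statement id; poll describe_statement for completion
```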