eBusiness Technologies Corp.

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in McKinney, TX, with a long-term contract offering a competitive pay rate. Requires 10+ years in Data Engineering, 4+ years in AWS, and expertise in SQL, Python, ETL, Redshift, and Glue.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 27, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
McKinney, TX
🧠 - Skills detailed
#Schema Design #Migration #Python #Data Transformations #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #SQL Server #Airflow #Security #AI (Artificial Intelligence) #Lambda (AWS Lambda) #AWS IAM (AWS Identity and Access Management) #Scala #IAM (Identity and Access Management) #XML (eXtensible Markup Language) #ML (Machine Learning) #Data Governance #Data Engineering #PySpark #SQL (Structured Query Language) #Spark (Apache Spark) #Redshift #AWS Glue #Cloud #Data Warehouse #Amazon Redshift #Automation #Data Pipeline
Role description
Sr. AWS Data Engineer | McKinney, TX | Day 1 Onsite | F2F Interview | Long-Term Contract

Job Description:
• 10+ years in Data Engineering and 4+ years in AWS, with deep expertise in SQL (CTEs, window functions), Python, ETL pipeline design, Redshift, Glue, and Airflow. Pure AWS stack only.
• Seeking an AWS Data Engineer with strong ETL development expertise and experience modernizing legacy data ecosystems (Mainframe, SQL Server, flat files, XML/CSV).
• Proven ability to design and orchestrate large-scale data pipelines using AWS Glue (PySpark/Python), Lambda, and event-driven ingestion patterns.
• Expert-level SQL, including window functions, recursive CTEs, query-plan analysis, and cost-based query optimization for high-volume data transformations.
• Hands-on experience with Amazon Redshift internals: schema design, workload management (WLM), distribution/sort keys, Spectrum integration, and federated queries.
• Skilled in AWS IAM security engineering, fine-grained access control, and enterprise-grade data governance in multi-account environments.
• Proficiency in Python-based pipeline frameworks for modular, reusable ETL and CI/CD automation for data pipelines.
• Exposure to AWS Bedrock and AI/ML services to embed predictive and generative AI capabilities into data engineering workflows.
• Ability to architect resilient, scalable, cost-optimized cloud-native data platforms and ensure seamless migration of legacy workloads to a Redshift data warehouse.
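For candidates gauging the SQL bar above, here is a minimal, runnable sketch of a window function (running total) and a recursive CTE. It uses Python's stdlib sqlite3 purely so the example executes anywhere; the table, columns, and data are hypothetical, and Redshift's dialect differs in places (e.g., Redshift writes `WITH RECURSIVE` without SQLite's pragma quirks and has its own optimizer).

```python
import sqlite3

# Hypothetical sales table used purely for illustration; the table and
# column names are assumptions, not taken from the job posting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, sale_day INTEGER, amount REAL);
INSERT INTO sales VALUES
  ('east', 1, 100.0), ('east', 2, 150.0), ('east', 3, 50.0),
  ('west', 1, 200.0), ('west', 2, 25.0);
""")

# Window function: running total of amount per region, ordered by day.
running = conn.execute("""
SELECT region, sale_day, amount,
       SUM(amount) OVER (PARTITION BY region ORDER BY sale_day) AS running_total
FROM sales
ORDER BY region, sale_day
""").fetchall()

# Recursive CTE: generate days 1..3, e.g. to left-join against sparse data
# so that days with no sales still appear in a report.
days = conn.execute("""
WITH RECURSIVE days(d) AS (
  SELECT 1
  UNION ALL
  SELECT d + 1 FROM days WHERE d < 3
)
SELECT d FROM days
""").fetchall()

print(running)
print(days)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` and `WITH RECURSIVE` constructs carry over to Redshift with minor syntax adjustments.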
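The "event-driven ingestion patterns" bullet typically means S3 object-created notifications triggering a Lambda. Below is a hedged sketch of such a handler: the bucket name, key, and downstream hand-off are hypothetical, and the real handler would pass each object to a Glue job or loader (omitted here to keep the example self-contained).

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Minimal sketch of an event-driven ingestion entry point.

    Assumes an S3 "ObjectCreated" notification event; in a real pipeline
    each object key would be handed to a Glue job or downstream loader.
    """
    ingested = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 notification keys are URL-encoded (spaces arrive as '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        ingested.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"ingested": ingested})}

# Example invocation with a hand-built event whose shape matches the
# S3 notification structure (bucket/object names are made up).
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-landing"},
                "object": {"key": "legacy/extract+2026.csv"}}}
    ]
}
result = lambda_handler(sample_event, None)
print(result)
```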
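On the Redshift internals bullet, distribution and sort keys are declared at table-creation time. The snippet below prints an illustrative DDL statement; the fact table and its columns are invented for the example and are not part of the posting.

```python
# Hedged sketch: a Redshift CREATE TABLE illustrating a KEY distribution
# style plus a compound sort key. All names here are hypothetical.
ddl = """
CREATE TABLE fact_sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)                       -- co-locates rows joined on customer_id
COMPOUND SORTKEY (sale_date, customer_id);  -- speeds range scans filtered by date
""".strip()
print(ddl)
```

Choosing the distribution key to match the most common join column minimizes data shuffling across slices, while the leading sort-key column should match the most common range filter.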