Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in McKinney, TX, with a contract of unspecified duration and an undisclosed pay rate. It requires 8+ years of experience, proficiency in SQL, AWS (Redshift, S3), and Python, and a Bachelor's degree in a related field.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 22, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
McKinney, TX
🧠 - Skills detailed
#Data Manipulation #Compliance #Airflow #S3 (Amazon Simple Storage Service) #Security #Computer Science #Storage #Data Governance #Version Control #Scripting #Data Modeling #Cloud #SQL (Structured Query Language) #Data Processing #Docker #SQL Queries #Redshift #AWS (Amazon Web Services) #Lambda (AWS Lambda) #EC2 #Data Architecture #Data Science #Data Analysis #Git #Amazon Redshift #ETL (Extract, Transform, Load) #Python #Data Pipeline #Scala #Data Quality #Automation #Data Engineering
Role description

Job Title: Data Engineer

Location: McKinney, TX

Job Summary:

We are looking for a skilled and motivated Data Engineer with expertise in SQL, AWS, Redshift, and Python to join our growing data team. The ideal candidate will build and optimize data pipelines, support data infrastructure, and ensure the availability and integrity of data for analytics and business operations.

Key Responsibilities:

   • Design, build, and maintain scalable data pipelines and ETL processes.

   • Develop and optimize data models and warehouse structures in Amazon Redshift.

   • Write advanced SQL queries for data transformation, analysis, and reporting.

   • Leverage Python for scripting, automation, and custom data workflows.

   • Collaborate with data analysts, data scientists, and business stakeholders to deliver reliable data solutions.

   • Manage and monitor data infrastructure using AWS services (S3, Lambda, Glue, Step Functions, EC2).

   • Ensure data quality, integrity, and governance across all pipelines and storage systems.

   • Troubleshoot and resolve data-related issues and performance bottlenecks.

Required Skills & Qualifications:

   • Bachelor's degree in Computer Science, Engineering, or a related field.

   • 8+ years of experience as a Data Engineer or in a similar role.

   • Strong proficiency in SQL for data manipulation and analysis.

   • Hands-on experience with AWS services, especially Redshift, S3, Lambda, Glue, and CloudWatch.

   • Solid experience in Python for scripting, data processing, and automation.

   • Understanding of data architecture, data modeling, and best practices in ETL/ELT.

   • Strong problem-solving skills and attention to detail.

Preferred Qualifications:

   • Experience with orchestration tools like Airflow or AWS Step Functions.

   • Familiarity with CI/CD pipelines and version control (Git).

   • Exposure to data governance, security, and compliance standards.

   • Experience with containerization tools such as Docker.