

Job Title: Data Engineer
Location: McKinney, TX
Job Summary:
We are looking for a skilled and motivated Data Engineer with expertise in SQL, AWS, Redshift, and Python to join our growing data team. The ideal candidate will be responsible for building and optimizing data pipelines, supporting data infrastructure, and ensuring the availability and integrity of data for analytics and business operations.
Key Responsibilities:
• Design, build, and maintain scalable data pipelines and ETL processes.
• Develop and optimize data models and warehouse structures in Amazon Redshift.
• Write advanced SQL queries for data transformation, analysis, and reporting.
• Leverage Python for scripting, automation, and custom data workflows.
• Collaborate with data analysts, data scientists, and business stakeholders to deliver reliable data solutions.
• Manage and monitor data infrastructure using AWS services (S3, Lambda, Glue, Step Functions, EC2).
• Ensure data quality, integrity, and governance across all pipelines and storage systems.
• Troubleshoot and resolve data-related issues and performance bottlenecks.
Required Skills & Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 8+ years of experience as a Data Engineer or in a similar role.
• Strong proficiency in SQL for data manipulation and analysis.
• Hands-on experience with AWS services, especially Redshift, S3, Lambda, Glue, and CloudWatch.
• Solid experience in Python for scripting, data processing, and automation.
• Understanding of data architecture, data modeling, and best practices in ETL/ELT.
• Strong problem-solving skills and attention to detail.
Preferred Qualifications:
• Experience with orchestration tools such as Apache Airflow or AWS Step Functions.
• Familiarity with CI/CD pipelines and version control (Git).
• Exposure to data governance, security, and compliance standards.
• Experience with containerization tools such as Docker.