

Cullerton Group
Data Engineer 3
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer 3, offering a 12-month contract (potentially extendable to 2 years), with a pay rate of up to $68/hr. Remote or hybrid work is available. Key skills include AWS, Snowflake, SQL, and Python; 5–7 years of data engineering experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
544
-
🗓️ - Date
April 8, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Data Lifecycle #SQS (Simple Queue Service) #Computer Science #DynamoDB #AWS (Amazon Web Services) #SageMaker #Data Quality #RDS (Amazon Relational Database Service) #Big Data #SNS (Simple Notification Service) #Snowflake #Python #Automation #Data Architecture #AWS SageMaker #Cloud #Data Pipeline #Data Management #Scala #Data Engineering #SQL (Structured Query Language) #S3 (Amazon Simple Storage Service) #Monitoring #Data Governance #Lambda (AWS Lambda)
Role description
Cullerton Group has a new opportunity for a Data Engineer 3. The work can be performed remotely (U.S. time zones) or onsite/hybrid, depending on the customer’s preference, at locations including Chicago, Dallas, Peoria, Champaign, or Westminster. This is a long-term position (12 months, with a potential extension to 2 years) that can lead to permanent employment with our client. Compensation is up to $68/hr plus full benefits (vision, dental, and health insurance, 401(k), and holiday pay).
Job Summary
The Data Engineer 3 will support the design, development, and maintenance of scalable data solutions, focusing on data quality, pipeline reliability, and cloud-based data architecture. This role involves working with modern technologies such as AWS and Snowflake to troubleshoot data issues, improve data pipelines, and build automation tools for monitoring and alerting. The engineer will collaborate with cross-functional teams to ensure high-quality, reliable data is delivered to support business operations. This position offers a mix of hands-on development, data operations, and problem-solving in a dynamic environment.
Key Responsibilities
• Design, develop, and maintain data pipelines, ensuring high data quality and reliability
• Troubleshoot and resolve data issues, including root cause analysis and break/fix solutions
• Develop scripts and automation tools using Python to detect and correct data issues
• Build monitoring and alerting systems to proactively identify data pipeline failures
• Collaborate with cross-functional teams to improve data architecture, governance, and performance
Required Qualifications
• Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent experience)
• 5–7 years of experience in data engineering, data management, or data operations
• Strong experience with AWS services (S3, RDS, Lambda, DynamoDB, SNS, SQS, etc.)
• Experience working with cloud data platforms such as Snowflake
• Strong proficiency in SQL and Python
• Experience with data pipelines, data quality, and data governance processes
Preferred Qualifications
• Experience with AWS SageMaker, CloudWatch, and advanced cloud architecture
• Strong Snowflake experience and data warehousing knowledge
• Experience supporting data operations or Tier 2 data support environments
• Understanding of end-to-end data lifecycle and big data technologies
• Ability to work in a fast-paced, collaborative, and cross-functional environment
Why This Role?
This position offers an opportunity to contribute to meaningful engineering and design work that supports a global leader in heavy machinery and manufacturing. Cullerton Group provides a professional environment with growth potential and a strong partnership with industry-leading organizations.
