

Signature IT World Inc
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Omaha, NE, or Chicago, IL, on a long-term contract with an unspecified pay rate. Candidates should have 5–8 years of experience, advanced SQL skills, and proficiency in Python, cloud platforms, and CI/CD practices.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 24, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Omaha, NE
-
🧠 - Skills detailed
#GIT #Leadership #Snowflake #Data Accuracy #Data Science #Code Reviews #Automation #Informatica #SQL Queries #Data Modeling #DevOps #Python #Datasets #Scala #Cloud #Database Design #Agile #Data Pipeline #ETL (Extract, Transform, Load) #Documentation #Observability #Logging #Data Processing #Deployment #Automated Testing #Data Quality #Computer Science #Version Control #Data Engineering #SQL (Structured Query Language) #Data Architecture #Infrastructure as Code (IaC) #Terraform #dbt (data build tool) #Airflow #Talend #PySpark #Databricks #Spark (Apache Spark)
Role description
Job Title: Senior Data Engineer
Location: Omaha, NE 68102 / Chicago, IL
Duration: Long-Term
Job Description:
As a Senior Data Engineer, you will play a key role in leading the development, maintenance, and optimization of data pipelines and workflows within our Enterprise Data Platform. You’ll apply strong data engineering fundamentals along with software engineering and DevOps practices so that pipelines are built, deployed, and monitored as code. Your work will help ensure data accuracy, reliability, and accessibility, enabling teams across the organization to make informed decisions.
This position offers an opportunity to lead technical solutions, mentor engineers, and collaborate with cross-functional teams to solve complex data challenges and create impactful solutions.
Key Responsibilities:
• Lead the design, development, and maintenance of scalable data pipelines that process and integrate data from multiple sources into the Enterprise Data Platform.
• Build pipelines and workflows as code using modern engineering practices (version control, code reviews, automated testing, reusable components); a minimal sketch of this pattern follows this list.
• Define and implement patterns for CI/CD for data pipelines (automated builds, tests, deployments, and environment promotion).
• Partner with data scientists, analysts, and business teams to gather requirements and translate them into robust data solutions.
• Build and optimize SQL queries and transformations to support complex business use cases and analytics needs.
• Design and manage data models; validate them with business stakeholders, data architects, and governance partners.
• Establish data quality checks, validation, and troubleshooting practices to ensure accuracy, consistency, and trust in data products.
• Monitor and optimize pipeline performance and reliability; implement observability (logging/metrics/alerts) and contribute to operational runbooks.
• Drive automation to improve efficiency, reduce manual effort, and increase repeatability of platform operations.
• Provide technical leadership through mentoring, reviews, and guidance on best practices and standards.
• Participate in Agile ceremonies to plan, estimate, and deliver work efficiently.
• Create and maintain documentation for data workflows, transformations, standards, and operational procedures.
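As a rough illustration of the pipeline-as-code and data quality responsibilities above, the following minimal Python sketch pairs a transformation step with a quality gate and basic logging. It is only a sketch: the field names (order_id, amount) are hypothetical examples, not details of this role.

    # Minimal sketch only: a pipeline step defined as code, with a data
    # quality gate and logging. All names (order_id, amount) are hypothetical.
    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("orders_pipeline")


    @dataclass
    class QualityResult:
        check: str
        passed: bool
        detail: str


    def check_no_null_keys(rows: list[dict]) -> QualityResult:
        """Fail the run if any record is missing its primary key."""
        nulls = sum(1 for r in rows if r.get("order_id") is None)
        return QualityResult("no_null_keys", nulls == 0, f"{nulls} null key(s)")


    def run_step(rows: list[dict]) -> list[dict]:
        """Transform the batch, then gate on quality before publishing it."""
        transformed = [{**r, "amount": round(r["amount"], 2)} for r in rows]
        result = check_no_null_keys(transformed)
        log.info("check %s: %s (%s)", result.check,
                 "PASS" if result.passed else "FAIL", result.detail)
        if not result.passed:
            # Failing loudly keeps bad data out of downstream tables.
            raise ValueError(f"data quality gate failed: {result.detail}")
        return transformed


    if __name__ == "__main__":
        sample = [{"order_id": 1, "amount": 19.991}, {"order_id": 2, "amount": 5.0}]
        print(run_step(sample))

Failing the run when a check does not pass keeps bad records out of downstream tables, which is the intent behind the quality-gate responsibility above.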
Technical Skills:
• Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience).
• 5–8 years of experience in data engineering or a related role.
• Advanced proficiency in SQL for complex data transformation and analysis.
• Hands-on experience with cloud-based data platforms such as Databricks, Snowflake, or similar tools.
• Experience with ETL/ELT tools and frameworks (e.g., Informatica, Talend, dbt, or equivalent).
• Strong proficiency in Python and/or PySpark for data processing and pipeline development.
• Strong understanding of data modeling, database design principles, and building curated datasets for analytics and operational use cases.
• Experience with DevOps practices and Git-based development (branching strategies, pull requests, code reviews).
• Experience implementing CI/CD for data pipelines/workflows and managing deployments across environments (see the test sketch after this list).
• CPG (consumer packaged goods) domain knowledge is a plus.
• Familiarity with orchestration and workflow tools (e.g., Databricks Workflows, Airflow, or similar) is preferred.
• Familiarity with Infrastructure as Code (e.g., Terraform, CloudFormation) and/or containerization concepts is a plus.
• Strong problem-solving skills, attention to detail, and ability to troubleshoot complex issues end-to-end.
• Excellent communication skills and ability to collaborate across technical and non-technical teams.
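To make the CI/CD and automated-testing expectations above concrete, here is a minimal pytest-style sketch of the kind of unit test a build pipeline could run on every pull request before promoting a workflow to a new environment. The normalize_currency function and its fixture rows are hypothetical, not part of any actual codebase for this role.

    # Minimal sketch only: a unit test a CI pipeline could run before
    # deploying a data workflow. Transform and fixture rows are hypothetical.
    def normalize_currency(rows):
        """Round amounts to 2 decimal places and drop negative values."""
        return [{**r, "amount": round(r["amount"], 2)}
                for r in rows if r["amount"] >= 0]


    def test_normalize_currency_rounds_and_filters():
        rows = [{"amount": 19.991}, {"amount": -1.0}]
        out = normalize_currency(rows)
        assert out == [{"amount": 19.99}]  # rounded; negative row dropped

Run under pytest in CI, a check like this can gate merges so that a broken transformation never reaches a shared environment.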