

Jobs via Dice
Senior DevOps / Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Senior DevOps / Data Engineer position, onsite in Coopersburg, PA, for a contract of unspecified length. Requires 12+ years of experience, strong skills in Terraform, CI/CD, ETL pipelines, and Python. Certifications in AWS preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 11, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Coopersburg, PA
-
🧠 - Skills detailed
#Python #Data Extraction #Libraries #Athena #Batch #RDS (Amazon Relational Database Service) #Computer Science #Redshift #DevOps #Migration #Data Engineering #Infrastructure as Code (IaC) #Terraform #Security #ETL (Extract, Transform, Load) #Consulting #Docker #Monitoring #ML (Machine Learning) #Kubernetes #AWS (Amazon Web Services) #BI (Business Intelligence) #Data Pipeline #Compliance #Cloud #SAP #NoSQL #Databases #Deployment
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, ITECS, is seeking the following. Apply via Dice today!
Position: Senior DevOps / Data Engineer
Work Location: Coopersburg, PA (Onsite)
Type of Employment: Contract
Experience Required: 12+ Years
Start Date: Immediate
Role Overview:
We are seeking a hybrid DevOps + Data Engineering profile, with a stronger emphasis on DevOps capabilities. The role owner will modify and manage ETL pipelines as part of platform operations, and should have hands-on DevOps experience along with a good understanding of data pipelines.
Key Result Areas and Activities
• Modify and manage data pipelines in DevOps setup
• Good understanding of CI/CD pipelines
• Development responsibilities
• Software Asset Maintenance & Upgrades
• Upgrade and maintain third-party components such as ingress controllers, service meshes, monitoring agents, infrastructure libraries, and cloud-native tools.
• Apply the latest versions and security patches to ensure compliance, stability, and performance.
• Infrastructure as Code (IaC) Enhancements
• Update and enhance IaC scripts to support version upgrades across development, QA, and production environments.
• Validate changes through sandbox testing before deployment to production.
• Compatibility & Dependency Management
• Ensure upgraded components remain compatible with dependent services and applications.
• Identify and mitigate potential breaking changes or dependency conflicts.
• Application Code Adjustments
• Implement necessary code changes in supported languages (e.g., Python) to accommodate new versions or configuration requirements.
• Address minor and moderate changes required for compatibility with upgraded components.
• Update existing unit tests in response to the application code changes.
• Security & Compliance
• Apply immediate fixes for vulnerabilities.
• Maintain adherence to organizational security and governance guidelines.
• Testing & Validation
• Create and execute test strategies for validating the upgrades.
• Execute existing unit tests and manual test cases post-upgrade.
• Conduct functional testing of impacted applications to ensure end-to-end stability.
• Validate application behaviour after code changes and infrastructure updates.
• Reporting & Governance
• Provide weekly status reports detailing software versions, security posture, upgrade activities, and testing outcomes.
• Participate in regular reviews and acceptance processes.
Work And Technical Experience
Must-Have Skill Set
• Terraform
• CI/CD Pipelines
• Platform upgrades and maintenance
• QA exposure: integration & platform testing
• Deep understanding of cloud data services (AWS) and migration strategies.
• Strong proficiency in ETL/ELT pipelines and framework development using Python (hands-on development may not be required, but understanding is).
• Modifying & monitoring data pipelines for batch and real-time processing.
• Exceptional communication skills for engaging executives and non-technical stakeholders.
• Knowledge of containerization (Docker, Kubernetes) and orchestration for data workloads.
Good-to-Have Skill Set
• Certifications in cloud platforms (AWS) and data engineering.
• Experience with advanced analytics and machine learning pipelines.
• Prior consulting experience or leading large-scale data transformation programs.
• Knowledge of data extraction from SAP OData Services.
• Experience with multiple relational and NoSQL databases (must have: Redshift, RDS, and Athena).
• Experience with BI tools integration with enterprise data platforms.
Qualification:
• Bachelor's degree in computer science, engineering, or related field (master's degree is a plus)
• Demonstrated continued learning through one or more technical certifications or related methods
• 12+ years of relevant experience; a master's degree may substitute for two years of experience
Key expectations:
• Please submit a maximum of 3 quality submissions per week.
• Key Skills: strong hands-on experience with DevOps (Platform Upgrades, CI/CD Pipeline, IAC - Terraform) and Data Engineering (Managing ETL Pipelines, Python knowledge).






