

LanceSoft, Inc.
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown" and a pay rate of "unknown." It requires 5+ years of experience, proficiency in ETL/ELT pipelines, Python, SQL, and cloud platforms, specifically in a manufacturing context.
Country
United States
Currency
$ USD
Day rate
560
Date
March 20, 2026
Duration
Unknown
Location
Hybrid
Contract
Unknown
Security
Unknown
Location detailed
Pittsburgh, PA
Skills detailed
#Pandas #Computer Science #Kafka (Apache Kafka) #GIT #Code Reviews #Data Lake #Data Profiling #DevOps #Process Automation #Oracle #Data Governance #Automation #Flask #Shell Scripting #Hadoop #Scala #KQL (Kusto Query Language) #PySpark #MySQL #Spark (Apache Spark) #IoT (Internet of Things) #Cloud #Data Quality #SQL (Structured Query Language) #Data Pipeline #Linux #Security #Compliance #ETL (Extract, Transform, Load) #Deployment #SaaS (Software as a Service) #Azure #Data Science #Data Engineering #Data Integration #Databases #API (Application Programming Interface) #GCP (Google Cloud Platform) #Python #Data Modeling #Apache Airflow #Big Data #Docker #Databricks #Time Series #Airflow #Scripting
Role description
Job Title
• Senior Data Engineer
• Hybrid/remote depending on project requirements
Overview
We are seeking an experienced Data Engineer contractor to support our manufacturing operations. This individual will design, build, and optimize data pipelines and infrastructure, enabling advanced analytics, process automation, and data-driven decision-making. The Data Engineer will work closely with data science, process engineering, and IT teams to ensure data reliability and actionable insights across the manufacturing lifecycle.
Key Responsibilities
• Develop and maintain scalable, reliable data pipelines for industrial data (e.g., real-time streaming, time-series, IoT, MES, and ERP system data).
• Integrate data from diverse sources (databases, cloud, on-premises) and engineer sensor-data workflows for efficient ETL/ELT processing and data validation.
• Collaborate with architects, data engineers, data scientists, analysts, and business stakeholders to define and deliver solutions.
• Collaborate with IT admins, network/security engineers, and cross-functional teams to support stable production operations and troubleshoot infrastructure issues (including managing and integrating IaC, PaaS, and SaaS solutions).
• Manage the backlog, support QA/testing, and communicate requirements with business stakeholders in the manufacturing domain.
• Mentor team members: provide guidance, facilitate skill growth, offer technical coaching, and encourage best practices across teams via code reviews.
• Build and maintain data infrastructure in compliance with data governance and security best practices.
Requirements
• Bachelor's degree in computer science or a related field with 5+ years' experience as a Data Engineer.
• Strong experience building, maintaining, and optimizing ETL/ELT data pipelines using Python, Pandas, and PySpark, and orchestrating workflows with tools such as Apache Airflow and the Kedro framework.
• Advanced SQL/KQL query development and optimization across Oracle, MSSQL, and MySQL databases (hosted on-premises or via PaaS offerings).
• Experience developing and consuming Flask-based and FastAPI RESTful APIs for data services and integration.
• Proficiency in Linux shell scripting for automation and data workflow management.
• Experience with DevOps practices, including CI/CD for data pipelines and use of tools such as Git, Docker, and IaC frameworks for provisioning and deployment.
• Hands-on experience deploying solutions across multiple clouds (OCI, Azure, GCP), including setting up cross-cloud data integration and transfer.
• Experience with cloud platforms (OCI, Azure, GCP) and big data tools (Spark, Hadoop, Kafka, Databricks).
• Understanding of data modeling, data profiling, data quality, data lake/warehouse architectures, and data ingestion from operational technologies.
• Familiarity with industrial protocols, time-series databases (e.g., OSIsoft PI), and manufacturing data (MES, PLC).
• Strong troubleshooting, process automation, and root-cause analysis skills.
