

Compunnel Inc.
Lead Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer on a contract basis for 6 months, paying $40.00 - $42.00 per hour. Located in Lisle, IL, it requires expertise in SAP Hana, data migration, Python, and data governance, with a focus on data extraction and cleansing.
Country
United States
Currency
$ USD
-
Day rate
336
-
Date
March 14, 2026
Duration
Unknown
-
Location
Hybrid
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Lisle, IL 60532
-
Skills detailed
#SAP Hana #Data Mapping #Documentation #SAP #PeopleSoft #Data Governance #Visualization #Migration #ADLS (Azure Data Lake Storage) #Spark (Apache Spark) #AWS Glue #Data Pipeline #Cloud #ETL (Extract, Transform, Load) #Data Analysis #Data Cleansing #PySpark #SQL (Structured Query Language) #Synapse #Informatica IDQ (Informatica Data Quality) #Data Quality #Data Conversion #Azure Data Factory #Informatica #Azure #Data Management #ADF (Azure Data Factory) #Data Extraction #Talend #AWS (Amazon Web Services) #Data Migration #Data Engineering #Python #MDM (Master Data Management) #Pandas
Role description
Job Role Title: Data Engineer / Lead
Location: Lisle, IL (local to Chicago; 4 days in office)
Job Description: As a Data Engineer Lead, you will play a critical role in ensuring the accuracy and reliability of our data. You will be responsible for selecting appropriate tools, performing technical data extraction, writing programs, and managing data from various sources, including PeopleSoft, BAAN, and Infor LN. You will leverage cutting-edge analytics to unlock the potential within our data, guide critical business decisions, and propel sustainable growth. Your expertise will help us keep our data clean and usable, free from duplication and errors.
Key Responsibilities
Lead the data conversion team through scoping, extraction, mapping, and loading of both master and transactional data sets from multiple sources into the SAP system across the project phases.
Design, develop, and implement components to migrate data from legacy platforms to SAP ERP platforms.
Define development and test templates and methods for extracting data from various sources, with a strong emphasis on PeopleSoft, BAAN, and Infor LN systems.
Evaluate and select appropriate tools and technologies for data extraction and cleansing.
Write programs and scripts to automate data extraction, cleansing, transformation, and loading processes.
Identify and resolve data duplication issues. Perform data cleansing to ensure the accuracy and quality of data. Participate in governance to establish preventive measures against duplication at the source.
Create and maintain data mapping documents that map source data attributes to business partner (customer/supplier) attributes.
Monitor data quality issues and refine data quality rules and checks as needed.
Collaborate with cross-functional teams, technology, and architecture groups to understand data requirements and implement data-centric solutions.
Document processes, methodologies, and best practices for data extraction and cleansing.
Monitor data quality and continuously improve data management processes.
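To illustrate, the cleansing and deduplication responsibilities above might look like the following in practice. This is a minimal pandas sketch with invented column names and sample records, not part of the posting or the employer's actual process:

```python
import pandas as pd

# Hypothetical extract of business-partner records from a legacy source
raw = pd.DataFrame({
    "partner_id": [101, 102, 102, 103],
    "name": ["  Acme Corp ", "Globex", "Globex", None],
    "city": ["Lisle", "Chicago", "Chicago", "Naperville"],
})

# Cleansing: trim stray whitespace, then drop records missing a mandatory attribute
clean = raw.assign(name=raw["name"].str.strip()).dropna(subset=["name"])

# Deduplication: keep the first occurrence of each partner_id
deduped = clean.drop_duplicates(subset=["partner_id"], keep="first")

print(deduped)
```

In a real migration, the same trim/validate/deduplicate steps would typically run at scale in PySpark or a tool such as Syniti, with the survivorship rule ("keep first") replaced by governance-approved matching rules.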
Minimum Requirements
6+ years of overall technology experience, including at least 4 years of hands-on software development, data engineering, and systems architecture.
Experience deploying modern data solutions leveraging components like Azure Functions, Azure Data Factory, or AWS Glue.
Experience with data migration platforms like Syniti or DataStage.
Experience with at least one data quality product, such as Informatica Data Quality, Talend DQ, or Ataccama.
Experience implementing SAP ERP solutions (at least one or two implementations), with hands-on experience with SAP HANA.
Preferred Qualifications (What experience you'd like to see candidates have)
Proven experience in data extraction, data cleansing, and data management.
Strong proficiency in Python, Spark, PySpark, PL/SQL, Pandas, or similar.
Hands-on experience with Syniti, Azure Data Factory, Synapse, ADLS, and/or AWS Glue and other AWS data services.
Ability to design, develop, optimize, and maintain data pipelines on Azure/AWS cloud platforms.
A thorough understanding of data quality, master data management concepts, and ETL/ELT operations.
Solid experience developing and maintaining data quality dashboards.
Solid experience maintaining a data quality issue tracker and following up with data stewards and/or data owners to fix critical issues.
Strong data analysis background, including the planning and hands-on execution of data cleansing and complex data conversions.
Ability to develop reports and to analyze, parse, and manipulate data using SQL and tools such as Excel, Access, and SQL*Loader.
Strong problem-solving skills and attention to detail.
Ability to work independently and as part of a team.
Excellent communication and documentation skills.
Experience with data visualization tools and techniques. Familiarity with data warehousing processes.
Knowledge of data governance and data quality best practices. TA01
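The SQL-driven analysis skills in the list above can be sketched with a small self-contained query. This uses Python's built-in sqlite3 purely for demonstration; the table, columns, and sample rows are invented, and a real engagement would run equivalent SQL against the source system or staging database:

```python
import sqlite3

# Build a throwaway in-memory table of hypothetical partner records
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE partners (partner_id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO partners VALUES (?, ?)",
    [(101, "Acme Corp"), (102, "Globex"), (102, "Globex"), (103, "Initech")],
)

# Flag partner_ids that appear more than once: candidate duplicates for cleansing
dupes = conn.execute(
    "SELECT partner_id, COUNT(*) AS n FROM partners "
    "GROUP BY partner_id HAVING n > 1"
).fetchall()
print(dupes)
```

Queries of this shape (GROUP BY ... HAVING COUNT(*) > 1) are a common first pass when profiling a legacy source for the duplication issues the role is expected to resolve.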
Job Type: Contract
Pay: $40.00 - $42.00 per hour
Expected hours: 40 per week
Benefits:
Health insurance
Work Location: Hybrid remote in Lisle, IL 60532