

Signature IT World Inc
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer contract position in Columbus, OH, focusing on building scalable ETL/ELT data pipelines. Required skills include SQL, Python, data warehousing, and experience with EPIC EMR and Informatica. A Bachelor's degree in a related field is mandatory.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 14, 2025
Duration: Unknown
Location: On-site
Contract: W2 Contractor
Security: Unknown
Location detailed: Columbus, OH
Skills detailed: #Data Analysis #Data Modeling #SQL (Structured Query Language) #Data Ingestion #Data Pipeline #Data Governance #Scala #ML (Machine Learning) #Security #API (Application Programming Interface) #Informatica #Azure #Cloud #GIT #Microsoft Azure #Deployment #Azure Data Factory #Compliance #AI (Artificial Intelligence) #Synapse #Data Integrity #Kafka (Apache Kafka) #Scripting #Data Engineering #Datasets #Python #Version Control #Automation #IICS (Informatica Intelligent Cloud Services) #Data Warehouse #Azure Event Hubs #Data Lake #Workday #Data Processing #ADF (Azure Data Factory) #Databases #Programming #Computer Science #ETL (Extract, Transform, Load) #Data Manipulation #Continuous Deployment
Role description
Job Role : Data Engineer
Location : Columbus, OH
Job Type: Contract W2
Job Description:
The Data Engineer will be a key builder on our AI journey, responsible for designing, constructing, and maintaining the data infrastructure required to support our AI initiatives. This role will focus on building robust and scalable data pipelines that extract data from a variety of sources, integrate it with our data lake/warehouse, and prepare it for analysis by our Data Analysts and for training custom AI models. This position is critical to enabling our current focus on vendor-provided capabilities and, eventually, to building custom solutions.
Key Responsibilities:
• Design, build, and maintain scalable and efficient ETL/ELT data pipelines that ingest data from internal and external sources (e.g., APIs from EPIC and Workday, relational databases, flat files) into the data lake/warehouse, ensuring data is clean, accessible, and ready for analysis and model training (a minimal illustrative sketch follows this list).
• Collaborate with the Data Analyst(s) and other stakeholders to understand their data requirements and provide them with clean, well-structured datasets.
• Implement data governance, security, and quality controls to ensure data integrity and compliance.
• Automate data ingestion, transformation, and validation processes.
• Work with our broader IT team to ensure seamless integration of data infrastructure with existing systems.
• Contribute to the evaluation and implementation of new data technologies and tools.
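For illustration only, the sketch below shows one minimal way such a pipeline might be structured in Python with pandas and SQLAlchemy; the API endpoint, warehouse connection string, and table/column names are hypothetical placeholders, not the actual source systems named above.

```python
import pandas as pd
import requests
from sqlalchemy import create_engine

# Hypothetical endpoint and warehouse DSN -- placeholders, not the employer's real systems.
SOURCE_API = "https://example.internal/api/v1/encounters"
WAREHOUSE_DSN = "postgresql+psycopg2://etl_user:***@warehouse-host/analytics"


def extract(session: requests.Session) -> pd.DataFrame:
    """Pull raw records from a source API into a DataFrame."""
    resp = session.get(SOURCE_API, timeout=30)
    resp.raise_for_status()
    return pd.DataFrame(resp.json())


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Basic cleaning: normalize column names, drop duplicates, parse dates."""
    df = raw.rename(columns=str.lower).drop_duplicates()
    if "encounter_date" in df.columns:  # hypothetical column name
        df["encounter_date"] = pd.to_datetime(df["encounter_date"], errors="coerce")
    return df


def load(df: pd.DataFrame) -> None:
    """Append the cleaned batch to a warehouse staging table."""
    engine = create_engine(WAREHOUSE_DSN)
    df.to_sql("stg_encounters", engine, if_exists="append", index=False)


if __name__ == "__main__":
    with requests.Session() as session:
        load(transform(extract(session)))
```

In practice the same extract/transform/load steps would typically be orchestrated and scheduled by a tool such as Informatica IICS or Azure Data Factory rather than run as a standalone script.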
Required Skills & Qualifications:
• ETL/ELT Development: Strong experience in designing and building data pipelines using ETL/ELT tools and frameworks.
• SQL: Advanced proficiency in SQL for data manipulation, transformation, and optimization.
• Programming: Strong programming skills in Python (or a similar language) for scripting, automation, and data processing.
• Data Warehousing: Experience with data warehousing concepts and technologies.
• EPIC EMR: Experience with EPIC Health System's databases (Chronicles, Clarity, or Caboodle) and the ability to extract data and build ETL pipelines against them.
• Informatica: Hands-on experience with the Informatica ETL toolkit (IICS preferred).
• Problem-Solving: Proven ability to troubleshoot and resolve data pipeline issues.
• Data Modeling: Experience with various data modeling techniques (e.g., dimensional modeling).
• Real-time Processing: Familiarity with real-time data streaming technologies (e.g., Kafka, Azure Event Hubs); see the illustrative consumer sketch after this list.
• Education: Bachelor's degree in Computer Science, Engineering, or a related field.
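As a rough illustration of the real-time processing skill above, the sketch below consumes JSON events with the open-source confluent-kafka client (Azure Event Hubs also exposes a Kafka-compatible endpoint); the broker address, consumer group, topic, and event field names are assumptions for the example, not details of any actual feed.

```python
import json

from confluent_kafka import Consumer, KafkaError

# Placeholder connection details -- swap in real brokers/topic for the target environment.
conf = {
    "bootstrap.servers": "broker1:9092",
    "group.id": "event-feed-loader",
    "auto.offset.reset": "earliest",
}

consumer = Consumer(conf)
consumer.subscribe(["clinical-events"])  # hypothetical topic name

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            # End-of-partition notifications are harmless; anything else is surfaced.
            if msg.error().code() == KafkaError._PARTITION_EOF:
                continue
            raise RuntimeError(msg.error())
        event = json.loads(msg.value())
        # In a real pipeline the event would be validated and written to the data lake;
        # "event_type" and "record_id" are placeholder field names.
        print(event.get("event_type"), event.get("record_id"))
finally:
    consumer.close()
```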
Nice-to-Have Skills:
• API Integration: Experience building data connectors and integrating with APIs from major enterprise systems (e.g., EPIC, Workday); an illustrative connector sketch follows this list.
• Cloud Computing: Hands-on experience with at least one major cloud platform's data services (e.g., Microsoft Azure Data Factory, Microsoft Fabric, IICS).
• Version Control: Proficiency with Git for code management and collaboration.
• CI/CD: Knowledge of Continuous Integration/Continuous Deployment practices for data pipelines.
• AI/ML & MLOps: A basic understanding of the machine learning lifecycle and how to build data pipelines to support model training and deployment.
• Microsoft Fabric: Direct experience with Microsoft Fabric's integrated data platform (OneLake, Data Factory, Synapse Data Engineering).
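As a loose illustration of the API-integration item above, the sketch below pages through a generic REST endpoint with Python's requests library; the URL, bearer token, and pagination scheme are invented placeholders and do not reflect the real EPIC or Workday APIs, which have their own authentication flows and schemas.

```python
from typing import Iterator

import requests

# Hypothetical REST endpoint and token -- real enterprise APIs typically use OAuth2
# and their own response shapes; this only illustrates the paging pattern.
BASE_URL = "https://example.internal/api/v1/workers"
TOKEN = "***"


def fetch_pages(page_size: int = 200) -> Iterator[dict]:
    """Yield records from a paginated REST endpoint until no pages remain."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    page = 1
    with requests.Session() as session:
        while True:
            resp = session.get(
                BASE_URL,
                headers=headers,
                params={"page": page, "per_page": page_size},
                timeout=30,
            )
            resp.raise_for_status()
            records = resp.json().get("results", [])  # "results" is a placeholder key
            if not records:
                break
            yield from records
            page += 1


if __name__ == "__main__":
    for record in fetch_pages():
        print(record.get("worker_id"))  # placeholder field name
```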