

TeamSoft
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 1-year contract, hybrid in Lansing, MI, paying $60-65/hr. Requires 12+ years in database systems; 8+ years with Databricks, Elastic Search, and Python/Scala; and 5+ years with AWS. Agile experience is essential.
Country
United States
Currency
$ USD
Day rate
520
Date
March 10, 2026
Duration
More than 6 months
Location
Hybrid
Contract
Unknown
Security
Unknown
Location detailed
Lansing, MI
Skills detailed
#S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Azure DevOps #Requirements Gathering #Python #Data Integration #Oracle #Agile #Data Pipeline #Scala #Visualization #DevOps #Azure #GIT #Data Engineering #Database Systems #Databricks #ETL (Extract, Transform, Load) #Documentation #Data Integrity
Role description
Hybrid - Onsite 2 days/week (Lansing, MI)
Duration: 1 year + possible extension
Interview: Onsite required
Rate: $60-65/hr.
Summary
Seeking a senior Data Engineer to support and enhance a large-scale public health surveillance system. Role includes maintaining automated processes, ensuring data integrity, supporting modernization efforts, and serving as a technical lead for development teams. The engineer will design and develop scalable solutions, guide developers, and ensure technical standards and best practices are met.
Responsibilities
β’ Lead design and development of scalable, high-performance solutions using AWS services.
β’ Build and maintain solutions using Databricks, Elastic Search, Kibana, S3, and modern data engineering tools.
β’ Develop and optimize ETL processes and data pipelines.
β’ Write clean, efficient code in Python and Scala.
β’ Create and maintain logical and physical database models.
β’ Develop stored procedures, functions, and other database objects.
β’ Implement and manage Elastic Search for analytics and data retrieval.
β’ Support EHR/HL7 data integrations.
β’ Participate in full SDLC, including requirements gathering and technical documentation.
β’ Use GIT for source control and contribute to design documentation.
β’ Work with flowcharts, screen layouts, and system documentation to ensure logical system behavior.
β’ Contribute to large Agile projects with test-driven development practices.
β’ (Nice to Have) Build CI/CD pipelines using Azure DevOps.
Required Skills
β’ 12+ years developing complex database systems.
β’ 8+ years with Databricks.
β’ 8+ years with Elastic Search and Kibana.
β’ 8+ years coding in Python/Scala.
β’ 8+ years working with Oracle.
β’ 5+ years developing ETL processes and data pipelines.
β’ 5+ years with AWS.
β’ 5+ years in data warehousing, data visualization tools, and data integrity.
β’ 5+ years using CMM/CMMI Level 3 methods.
β’ 5+ years in Agile development with TDD.
β’ 3+ years (Nice to Have) creating CI/CD pipelines with Azure DevOps.
#INDTS
Peoplelink LLC, a leader in the staffing industry for the past 33 years, continues our vision of "linking" communities through employment. TeamSoft's dedication to the safety, health & well-being of our associates, clients and communities remains our #1 priority. TeamSoft is proud to be an EEOE, M/F/D/V, and we are committed to diversity both in practice and spirit at all levels of the organization.






