

Big Data ETL Specialist - ON SITE
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data ETL Specialist in Columbus, Ohio, on a contract of unspecified duration. Pay is up to $58/hr. Required skills include 8+ years in Big Data, Hadoop, ETL/ELT, and strong SQL experience.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 13, 2025
Project duration: Unknown
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Columbus, Ohio Metropolitan Area
Skills detailed:
#Cloud #SAS #HDFS (Hadoop Distributed File System) #Documentation #Consulting #Project Management #Hadoop #Data Integration #ETL (Extract, Transform, Load) #Scripting #StreamSets #Data Analysis #BI (Business Intelligence) #Security #Data Security #Unit Testing #YARN (Yet Another Resource Negotiator) #Data Warehouse #Unix #Python #Big Data #PySpark #Migration #UAT (User Acceptance Testing) #Quality Assurance #Oracle #Kudu #Data Profiling #EDW (Enterprise Data Warehouse) #Data Quality #Datasets #Zookeeper #Spark (Apache Spark) #Cloudera #SQL (Structured Query Language) #Kafka (Apache Kafka) #Data Ingestion #Linux #Sqoop (Apache Sqoop) #SQL Queries #Leadership #Data Governance #Agile #Shell Scripting #Deployment #Metadata #Impala
Role description
Company and Role:
Dedicated Tech Services, Inc. (DTS) is an award-winning IT consulting firm based in Columbus, OH. We now have an opening for a Big Data ETL Specialist.
Highlights and Benefits:
• Onsite – Columbus, Ohio
• W2 hourly pay rate up to $58/hr or salaried equivalent
• Government environment
• Direct W2 hourly or salaried applicants only (no corp-to-corp subcontractors, third parties, or agencies)
• Paid time off and holidays for salaried employees
• 401(k), billable bonus, and health, life, vision, dental, and short-term disability insurance options for all
• DTS is a proud Women's Business Enterprise (WBE) and Woman-Owned Small Business (WOSB)!
• Check out our benefits and company information at www.dtsdelivers.com!
Job Description:
We are hiring an experienced Big Data ETL Specialist to join our team as our direct W2 salaried or hourly employee. You will be responsible for Medicaid Enterprise Data Warehouse design, development, implementation, migration, maintenance, and operations activities, working closely with the Data Governance and Analytics team. You will be one of the key technical resources for Enterprise Data Warehouse projects, including building critical data marts and ingesting data into the Big Data platform for analytics and exchange with State and Medicaid partners. This position is a member of Medicaid ITS and works closely with the Business Intelligence & Data Analytics team.
Responsibilities:
• Participate in team activities, design discussions, stand-up meetings, and planning reviews with the team.
• Perform data analysis, data profiling, data quality checks, and data ingestion across various layers using Hadoop/Hive/Impala queries, PySpark programs, and UNIX shell scripts.
• Follow the organization's coding standards document; create mappings, sessions, and workflows per the mapping specification document.
• Perform gap and impact analysis of ETL and IOP jobs for new requirements and enhancements.
• Create jobs in Hadoop using Sqoop, PySpark, and StreamSets to meet business user needs.
• Create mock-up data, perform unit testing, and capture result sets for jobs developed in lower environments.
• Update the production support runbook and Control-M schedule document with each production release.
• Create and update design documents, providing detailed descriptions of workflows after every production release.
• Continuously monitor production data loads, fix issues, update the tracker document with those issues, and identify performance problems.
• Performance-tune long-running ETL/ELT jobs by creating partitions, enabling full loads, and applying other standard approaches.
• Perform quality assurance checks and reconciliation after data loads, and communicate with vendors to receive corrected data.
• Participate in ETL/ELT code reviews and design reusable frameworks.
• Create Remedy/ServiceNow tickets to fix production issues, and create support requests to deploy database, Hadoop, Hive, Impala, UNIX, ETL/ELT, and SAS code to the UAT environment.
• Create Remedy/ServiceNow tickets and/or incidents to trigger Control-M jobs for FTP and ETL/ELT jobs on an ad hoc, daily, weekly, monthly, and quarterly basis as needed.
• Model and create STAGE/ODS/data warehouse Hive and Impala tables as needed.
• Create change requests, work plans, test results, and BCAB checklist documents for code deployments to the production environment, and perform code validation post-deployment.
• Work with the Hadoop, ETL, and SAS admin teams on code deployments and health checks.
• Create reusable UNIX shell scripts for file archival, file validation, and Hadoop workflow looping.
• Create a reusable Audit Balance Control framework that captures reconciliation results, mapping parameters, and variables, serving as a single point of reference for workflows.
• Create PySpark programs to ingest historical and incremental data (an illustrative sketch follows this list).
• Create Sqoop scripts to ingest historical data from the EDW Oracle database into Hadoop IOP, and create Hive table and Impala view creation scripts for dimension tables (a second sketch follows this list).
• Participate in meetings to continuously upgrade functional and technical expertise.
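
The responsibilities above name specific tools but the posting contains no code; purely as an illustration, here is a minimal PySpark ingestion sketch of the kind those duties describe. Every name in it (the landing path, the load_date partition column, and the stage.claims_incremental Hive table) is a hypothetical assumption, not a detail from the role.

```python
# Minimal PySpark incremental-ingestion sketch -- all names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("incremental_ingest_sketch")
    .enableHiveSupport()                 # so saveAsTable writes to the Hive metastore
    .getOrCreate()
)

# Hypothetical HDFS landing path for today's incremental batch.
batch_path = "hdfs:///data/landing/claims/2025-08-13/"

df = (
    spark.read
    .option("header", "true")
    .csv(batch_path)
    .withColumn("load_date", F.lit("2025-08-13"))   # partition column (assumed)
)

# Basic data-quality guard before loading: reject an empty batch.
if df.rdd.isEmpty():
    raise ValueError(f"No records found in {batch_path}")

# Append the batch to a partitioned, Parquet-backed Hive table in the STAGE layer.
(
    df.write
    .mode("append")
    .partitionBy("load_date")
    .format("parquet")
    .saveAsTable("stage.claims_incremental")
)
```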
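The posting calls for Sqoop for the historical loads; the second sketch is not a Sqoop script but a rough PySpark (JDBC) equivalent of the same Oracle-to-Hive pull, shown only to keep the examples in one language. The host, service name, schema, source table, credentials, and target table are all hypothetical.

```python
# Hedged illustration only: the role uses Sqoop for historical loads; this shows
# the analogous Oracle-to-Hive pull expressed with Spark's JDBC reader.
# Host, service, schema, table, credentials, and targets are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("historical_load_sketch")
    .enableHiveSupport()
    .getOrCreate()
)

historical = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//edw-host.example.com:1521/EDWSVC")
    .option("dbtable", "EDW.MEMBER_DIM")            # hypothetical dimension table
    .option("user", "etl_user")
    .option("password", "********")                 # pull from a secret store in practice
    .option("driver", "oracle.jdbc.OracleDriver")
    .option("fetchsize", "10000")                   # larger fetch size for bulk reads
    .load()
)

# Land the full historical snapshot as a Hive-managed Parquet table.
historical.write.mode("overwrite").format("parquet").saveAsTable("ods.member_dim")
```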
REQUIRED Skill Sets:
• 8+ years of experience with Big Data and Hadoop on data warehousing or data integration projects.
• Analysis, design, development, support, and enhancement of ETL/ELT in a data warehouse environment with Cloudera Big Data technologies (minimum of 7 years' experience in Hadoop, MapReduce, Sqoop, PySpark, Spark, HDFS, Hive, Impala, StreamSets, Kudu, Oozie, Hue, Kafka, YARN, Python, Flume, ZooKeeper, Sentry, and Cloudera Navigator), along with Oracle SQL/PL-SQL, Unix commands, and shell scripting.
• Strong development experience (minimum of 7 years) creating Sqoop scripts, PySpark programs, HDFS commands, HDFS file formats (Parquet, Avro, ORC, etc.), StreamSets pipelines, job schedules, Hive/Impala queries, and Unix/shell scripts.
• Writing Hadoop/Hive/Impala scripts (minimum of 7 years' experience) to gather table statistics after data loads (see the sketch following this list).
• Strong SQL experience (Oracle and Hadoop Hive/Impala, etc.).
• Writing complex SQL queries and tuning them based on Hadoop/Hive/Impala explain plan results.
• Proven ability to write high-quality code.
• Experience building datasets and familiarity with PHI and PII data.
• Expertise implementing complex ETL/ELT logic.
• Ability to develop and enforce a strong reconciliation process.
• Accountability for ETL/ELT design documentation.
• Good knowledge of Big Data, Hadoop, Hive, and Impala databases, data security, and dimensional model design.
• Basic knowledge of UNIX/Linux shell scripting.
• Ability to utilize ETL/ELT standards and practices to establish and follow a centralized metadata repository.
• Good experience working with Visio, Excel, PowerPoint, Word, etc.
• Effective communication, presentation, and organizational skills.
• Familiarity with project management methodologies such as Waterfall and Agile.
• Ability to establish priorities and follow through on projects, paying close attention to detail with minimal supervision.
• Required education: BS/BA degree or a combination of education and experience.
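
As a small, hedged illustration of the statistics-gathering and explain-plan tuning mentioned above: in an Impala shell these would be COMPUTE STATS and EXPLAIN statements; the sketch below expresses the same post-load checks in Spark SQL against the hypothetical table used in the earlier sketch.

```python
# Post-load checks sketch: refresh optimizer statistics and inspect a query plan
# via Spark SQL. Table and query are hypothetical; in impala-shell the analogous
# statements are COMPUTE STATS <table> and EXPLAIN <query>.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("post_load_checks_sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Gather table- and column-level statistics after the load completes.
spark.sql("ANALYZE TABLE stage.claims_incremental COMPUTE STATISTICS")
spark.sql("ANALYZE TABLE stage.claims_incremental COMPUTE STATISTICS FOR COLUMNS load_date")

# Review the plan of a downstream query before promoting it, as a tuning check.
spark.sql("""
    EXPLAIN EXTENDED
    SELECT load_date, COUNT(*) AS row_cnt
    FROM stage.claims_incremental
    GROUP BY load_date
""").show(truncate=False)
```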
DESIRED Skill Sets:
• Demonstrated leadership, analytical, and problem-solving skills.
• Excellent written and oral communication skills with technical and business teams.
• Ability to work independently as well as part of a team.
• Stays abreast of current technologies in the assigned IT area.
• Ability to establish facts and draw valid conclusions.
• Recognizes patterns and opportunities for improvement throughout the organization.
• Ability to discern critical from minor problems and innovate new solutions.
US Citizens and those authorized to work in the US are encouraged to apply. We are unable to sponsor at this time.
Dedicated Tech Services, Inc. is an Equal Opportunity Employer