

HorizonIT INC
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This is a hybrid contract Data Engineer position in Charlotte, NC; the contract length and pay rate are not specified. Key skills include AWS services, SQL, Python, and experience with data pipelines and ETL processes. A degree in Computer Science or a related field is required.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
March 4, 2026
Duration
Unknown
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Charlotte, NC
-
Skills detailed
#Data Warehouse #Indexing #Apache Airflow #PySpark #IAM (Identity and Access Management) #ETL (Extract, Transform, Load) #Redshift #NoSQL #Compliance #Hadoop #Data Integrity #Athena #Big Data #Databases #Airflow #DevOps #S3 (Amazon Simple Storage Service) #Security #Apache Spark #Computer Science #Spark (Apache Spark) #Data Security #SQL (Structured Query Language) #Python #Lambda (AWS Lambda) #Data Pipeline #Docker #Data Science #Kubernetes #AWS (Amazon Web Services) #DynamoDB #AWS Glue #Data Governance #Data Modeling #ML (Machine Learning) #MongoDB #Data Lake #Data Engineering #Datasets #Consulting #Data Framework
Role description
Company Description
HorizonIT is a leading technology & staffing solutions company committed to delivering end-to-end excellence in IT talent acquisition, consulting, and project execution. With a strong foundation built upon over a decade of industry experience, HorizonIT partners closely with organizations to provide the right blend of expertise, innovation, and reliability that drives business growth and operational efficiency.
Role Description
This is a contract hybrid role for a Data Engineer located in Charlotte, NC. The Data Engineer will be responsible for designing, developing, and maintaining data pipelines and systems. Daily tasks will include data modeling, performing Extract Transform Load (ETL) processes, and managing data warehousing solutions. The Data Engineer will also conduct data analytics to support business decision-making and ensure data integrity and quality.
Responsibilities
• Design and implement data pipelines for ingesting, processing, and transforming large-scale datasets using AWS services.
• Develop and manage data lakes and data warehouses (e.g., Amazon S3, Redshift, Glue, Athena).
• Build and optimize ETL/ELT workflows using AWS Glue, Lambda, Step Functions, or Apache Airflow.
• Collaborate with analysts, data scientists, and business stakeholders to provide clean, reliable datasets.
• Ensure data security, governance, and compliance across AWS environments.
• Implement performance tuning, partitioning, and indexing strategies for large datasets.
• Automate data workflows and support CI/CD pipelines for data solutions.
• Monitor, troubleshoot, and optimize data pipelines for reliability and cost efficiency.
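As one concrete illustration of the partitioning strategies mentioned above, the sketch below builds Hive-style S3 key prefixes (`year=/month=/day=`), the date-partitioned layout that services such as Athena and Glue can prune when queries filter on date. The table name and bucket layout here are hypothetical examples, not details from this posting.

```python
from datetime import date

def partition_key(table: str, event_date: date) -> str:
    """Build a Hive-style S3 partition prefix (year=/month=/day=).

    Athena and Glue recognize this layout and can skip ("prune")
    partitions that fall outside a query's date filter.
    """
    return (
        f"{table}/year={event_date.year}"
        f"/month={event_date.month:02d}"
        f"/day={event_date.day:02d}/"
    )

# Hypothetical example: orders arriving on 2026-03-04 land under
# a date-partitioned prefix inside the data lake bucket.
print(partition_key("orders", date(2026, 3, 4)))
# orders/year=2026/month=03/day=04/
```

Zero-padding the month and day keeps prefixes lexicographically sortable, which matters when listing or range-scanning partitions.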
Qualifications
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• 3–7 years of professional experience as a Data Engineer or in a similar role.
• Strong hands-on expertise with AWS services such as:
  • S3, Redshift, Glue, Athena, EMR, Kinesis, Lambda, Step Functions
• Proficiency in SQL and Python (PySpark a plus).
• Experience with big data frameworks (Apache Spark, Hadoop).
• Knowledge of data modeling, warehousing, and OLAP concepts.
• Familiarity with DevOps and CI/CD pipelines on AWS (CodePipeline, CodeBuild, CodeDeploy).
• Strong understanding of data governance, security, and IAM policies.
Preferred Skills (Nice-to-Have)
• AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect.
• Experience with NoSQL databases (DynamoDB, MongoDB).
• Exposure to MLOps and machine learning pipeline integration.
• Experience with containerization & orchestration (Docker, Kubernetes, EKS).






