

Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a hybrid Data Engineer contract position in Charlotte, NC; the contract duration and pay rate are unspecified. It requires 3–7 years of experience, strong AWS skills, proficiency in SQL and Python, and knowledge of data governance and ETL processes.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Datasets #MongoDB #AWS Glue #"ETL (Extract, Transform, Load)" #Data Framework #Data Integrity #Athena #Data Lake #Docker #Databases #SQL (Structured Query Language) #Data Security #S3 (Amazon Simple Storage Service) #Hadoop #Spark (Apache Spark) #Data Modeling #Python #DevOps #Data Warehouse #PySpark #AWS (Amazon Web Services) #ML (Machine Learning) #Compliance #Indexing #NoSQL #Data Engineering #Apache Spark #DynamoDB #Kubernetes #Apache Airflow #Consulting #IAM (Identity and Access Management) #Computer Science #Redshift #Security #Big Data #Data Science #Data Governance #Airflow #Data Pipeline #Lambda (AWS Lambda)
Role description
Company Description
HorizonIT is a leading technology & staffing solutions company committed to delivering end-to-end excellence in IT talent acquisition, consulting, and project execution. With a strong foundation built upon over a decade of industry experience, HorizonIT partners closely with organizations to provide the right blend of expertise, innovation, and reliability that drives business growth and operational efficiency.
Role Description
This is a contract hybrid role for a Data Engineer located in Charlotte, NC. The Data Engineer will be responsible for designing, developing, and maintaining data pipelines and systems. Daily tasks will include data modeling, performing Extract, Transform, Load (ETL) processes, and managing data warehousing solutions. The Data Engineer will also conduct data analytics to support business decision-making and ensure data integrity and quality.
Responsibilities
• Design and implement data pipelines for ingesting, processing, and transforming large-scale datasets using AWS services.
• Develop and manage data lakes and data warehouses (e.g., Amazon S3, Redshift, Glue, Athena).
• Build and optimize ETL/ELT workflows using AWS Glue, Lambda, Step Functions, or Apache Airflow (see the sketch after this list).
• Collaborate with analysts, data scientists, and business stakeholders to provide clean, reliable datasets.
• Ensure data security, governance, and compliance across AWS environments.
• Implement performance tuning, partitioning, and indexing strategies for large datasets.
• Automate data workflows and support CI/CD pipelines for data solutions.
• Monitor, troubleshoot, and optimize data pipelines for reliability and cost efficiency.
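To give a concrete flavor of the orchestration work described above, here is a minimal Apache Airflow DAG sketch in Python, assuming Airflow 2.4+. The DAG name, schedule, and task logic are hypothetical placeholders for illustration, not details taken from the role.

```python
# Minimal, illustrative ETL DAG (assumes Airflow 2.4+). All names and
# logic below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: in practice this might pull raw files from S3 or an API.
    return ["record-1", "record-2"]


def transform(**context):
    # Pull the extract task's output via XCom and apply a trivial transform.
    records = context["ti"].xcom_pull(task_ids="extract")
    return [r.upper() for r in records]


def load(**context):
    # Placeholder: in practice this might COPY into Redshift or write to S3.
    transformed = context["ti"].xcom_pull(task_ids="transform")
    print(f"Loading {len(transformed)} records")


with DAG(
    dag_id="example_etl",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency order: extract, then transform, then load.
    extract_task >> transform_task >> load_task
```

The `>>` chaining at the end declares the extract → transform → load dependency order; in a real pipeline the placeholder callables would be replaced with S3, Glue, or Redshift operations.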
Qualifications
• Bachelor’s or Master’s degree in Computer Science, Information Systems, or related field.
• 3–7 years of professional experience as a Data Engineer or similar role.
• Strong hands-on expertise with AWS services such as:
• S3, Redshift, Glue, Athena, EMR, Kinesis, Lambda, Step Functions
• Proficiency in SQL and Python (PySpark a plus; see the sketch after this list).
• Experience with big data frameworks (Apache Spark, Hadoop).
• Knowledge of data modeling, warehousing, and OLAP concepts.
• Familiarity with DevOps and CI/CD pipelines on AWS (CodePipeline, CodeBuild, CodeDeploy).
• Strong understanding of data governance, security, and IAM policies.
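As an illustration of the PySpark and partitioning skills listed above, here is a minimal sketch, assuming hypothetical S3 paths and column names (event_id, event_ts), of the partitioned Parquet write pattern commonly used with S3 and Athena.

```python
# Minimal PySpark sketch of a clean-and-partition job. The S3 paths and
# column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioned-etl-sketch").getOrCreate()

# Hypothetical input location; on EMR or Glue the s3:// scheme resolves natively.
events = spark.read.json("s3://example-raw-bucket/events/")

cleaned = (
    events
    .dropDuplicates(["event_id"])                     # assumes an event_id column
    .withColumn("event_date", F.to_date("event_ts"))  # assumes an event_ts timestamp
    .filter(F.col("event_date").isNotNull())
)

# Write Parquet partitioned by date so downstream engines can prune partitions.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")
)
```

Partitioning the output by event_date lets Athena or Redshift Spectrum prune partitions at query time, so queries scan only the dates they need.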
Preferred Skills (Nice-to-Have)
• AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect.
• Experience with NoSQL databases (DynamoDB, MongoDB).
• Exposure to MLOps and machine learning pipeline integration.
• Experience with containerization & orchestration (Docker, Kubernetes, EKS).