

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a Senior Data Engineer position based in Charlotte, NC, for a 6-month contract with a pay rate of "TBD." Key skills include Amazon Redshift, S3, Apache Parquet, and data lake architectures. Requires 4+ years of data engineering experience.
Country
United States
Currency
$ USD
Day rate
TBD
Date discovered
July 30, 2025
Project duration
6 months
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Charlotte Metro
Skills detailed
#SQL (Structured Query Language) #Redshift #Apache Airflow #Cloud #Security #Terraform #S3 (Amazon Simple Storage Service) #Amazon Redshift #Data Science #Apache Iceberg #Scripting #Data Storage #Data Governance #Bash #Storage #ETL (Extract, Transform, Load) #ML (Machine Learning) #Version Control #Data Engineering #Data Modeling #Data Lake #Python #Data Warehouse #Scala #Airflow #Data Pipeline #AWS (Amazon Web Services) #GIT #AI (Artificial Intelligence)
Role description
Job Title: Data Engineer III
Location: Charlotte, NC
Job Type: 6-month contract (potential for extension)
Position Summary
We are seeking a skilled Data Engineer to join our growing Data Applications & Integration team. This role is focused on building and optimizing data pipelines, managing large-scale data storage solutions, and supporting the development of a modern data warehouse architecture. The ideal candidate will have experience working with Amazon Redshift, Apache Parquet, S3-based data lakes, and Apache Iceberg. Familiarity with AI/ML workflows is a plus.
Key Responsibilities
• Design, build, and maintain scalable data pipelines and ETL processes.
• Develop and manage data lake architectures using Amazon S3 and Apache Iceberg.
• Work with Apache Parquet file formats to optimize data storage and retrieval (see the Parquet-to-S3 sketch after this list).
• Build and maintain data warehouse solutions using Amazon Redshift.
• Collaborate with data scientists, analysts, and software engineers to ensure data availability and quality.
• Implement data governance, security, and quality best practices.
• Monitor and troubleshoot data workflows and performance issues.
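To make the Parquet and S3 responsibilities concrete, here is a minimal sketch of a landing step: build a small table in memory, write it as snappy-compressed Parquet, and upload it to a data lake bucket. The bucket name, key prefix, and meter-reading columns are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch of a Parquet-to-S3 landing step.
# Bucket, key prefix, and column names are hypothetical placeholders.
import boto3
import pyarrow as pa
import pyarrow.parquet as pq

# In a real pipeline this table would come from an extract step
# (an API, a source database, or upstream files).
table = pa.table({
    "meter_id": [101, 102, 103],
    "reading_kwh": [12.4, 9.8, 15.1],
    "read_date": ["2025-07-29"] * 3,
})

# Columnar Parquet with snappy compression keeps storage small and scans fast.
local_path = "meter_readings.parquet"
pq.write_table(table, local_path, compression="snappy")

# Land the file in the raw zone of the (hypothetical) data lake bucket.
s3 = boto3.client("s3")
s3.upload_file(local_path, "example-data-lake", "raw/meter_readings/2025-07-29.parquet")
```

In an Iceberg-backed lake, the same data would instead be committed through an Iceberg catalog (for example with the PyIceberg library), so the table gains snapshots, schema evolution, and safe concurrent writes.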
Required Qualifications
• 4+ years of experience in data engineering or a related field.
• Strong experience with Amazon Redshift, S3, and Apache Parquet.
• Hands-on experience with data lake architectures and Apache Iceberg.
• Proficiency in SQL and scripting languages such as Python or Bash (a combined SQL-and-Python example follows this list).
• Familiarity with data modeling, warehousing concepts, and performance tuning.
• Experience with version control systems (e.g., Git) and CI/CD pipelines.
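A short example of how the SQL and Python requirements typically combine on this stack: issuing a Redshift COPY from Python to bulk-load the Parquet files landed above. Redshift speaks the PostgreSQL wire protocol, so psycopg2 works as a client; the endpoint, credentials, table name, and IAM role ARN below are hypothetical placeholders.

```python
# Minimal sketch: bulk-load S3 Parquet files into Redshift with COPY.
# Endpoint, credentials, table name, and IAM role are hypothetical.
import psycopg2

COPY_SQL = """
    COPY analytics.meter_readings
    FROM 's3://example-data-lake/raw/meter_readings/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,            # Redshift's default port
    dbname="analytics",
    user="etl_user",
    password="...",       # in practice, fetch from a secrets manager
)
try:
    with conn, conn.cursor() as cur:
        # COPY reads the Parquet files directly from S3 in one set-based
        # load, which is far faster than row-by-row INSERTs.
        cur.execute(COPY_SQL)
finally:
    conn.close()
```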
Preferred Qualifications
• Exposure to AI/ML workflows or experience supporting data science teams.
• Experience with orchestration tools like Apache Airflow or AWS Step Functions (see the Airflow sketch after this list).
• Knowledge of infrastructure-as-code tools (e.g., Terraform, CloudFormation).
• Utilities or energy industry experience is a plus.
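For the orchestration item, a minimal Airflow 2.x sketch that chains the two steps above into a daily pipeline; the dag_id, task ids, and callable bodies are hypothetical placeholders.

```python
# Minimal sketch of an Airflow 2.x DAG chaining the landing and load steps.
# dag_id, task ids, and the callables' bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def land_parquet_to_s3():
    """Placeholder for the Parquet-to-S3 landing step sketched earlier."""


def copy_into_redshift():
    """Placeholder for the Redshift COPY step sketched earlier."""


with DAG(
    dag_id="meter_readings_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",  # cron strings also work, e.g. "0 6 * * *"
    catchup=False,               # skip backfilling historical runs on first deploy
) as dag:
    land = PythonOperator(task_id="land_parquet_to_s3", python_callable=land_parquet_to_s3)
    load = PythonOperator(task_id="copy_into_redshift", python_callable=copy_into_redshift)

    land >> load  # the load runs only after the landing step succeeds
```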