

Sr. Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer in Charlotte, NC (Hybrid) with a contract of unspecified duration. Pay rate is competitive. Key skills include SQL, Python, PySpark, and experience with legacy systems and data lake pipelines. A face-to-face interview is required.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
July 10, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Charlotte, NC
Skills detailed
#Airflow #Data Engineering #Data Migration #SSIS (SQL Server Integration Services) #Spark (Apache Spark) #Documentation #Migration #Apache Kafka #Data Lake #Java #Liquibase #Python #SQL (Structured Query Language) #PySpark #Data Governance #Dremio #C# #Kafka (Apache Kafka) #Security
Role description
Job Title: Sr. Data Engineer
Location: Charlotte, NC (Hybrid)
Candidates must be local to Charlotte, NC.
Important note: a face-to-face interview in Charlotte, NC is required.
Job Description
We are seeking a skilled and experienced Data Engineer to join our dynamic team in Charlotte, NC.
The ideal candidate will have a strong background in data engineering, with hands-on experience working with legacy systems and building robust data lake pipelines.
This hybrid role offers an exciting opportunity to work on modern data infrastructure while maintaining and integrating legacy data systems.
Key Responsibilities
Legacy Pipeline (Immediate ramp-up):
SQL, Liquibase, SSIS, TFS, AutoSys
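For candidates less familiar with Liquibase, the schema-versioning work on the legacy pipeline might involve changelogs along these lines; this is a minimal sketch, and the table, column, and author names are hypothetical:

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.0.xsd">

    <!-- Hypothetical changeset: add an audit column to an existing legacy table -->
    <changeSet id="20250710-01" author="data-eng">
        <addColumn tableName="customer_orders">
            <column name="updated_at" type="timestamp"/>
        </addColumn>
    </changeSet>
</databaseChangeLog>
```

Each changeSet is tracked by id and author, which is what makes migrations against a long-lived legacy database repeatable and auditable.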
New Data Lake Pipeline (Upcoming Project):
Hive, Dremio, Python, PySpark, Airflow, Data Lake Architecture
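As a sketch of the data-lake side, work in this stack often starts with Hive external tables defined over files in the lake, which engines such as Dremio and PySpark can then query; the schema, database, and path below are hypothetical:

```sql
-- Hypothetical external table over Parquet files in the data lake
CREATE EXTERNAL TABLE IF NOT EXISTS sales.orders_raw (
    order_id     BIGINT,
    customer_id  BIGINT,
    order_ts     TIMESTAMP,
    amount       DECIMAL(12,2)
)
PARTITIONED BY (order_date STRING)
STORED AS PARQUET
LOCATION '/data/lake/raw/orders';
```

Because the table is EXTERNAL, dropping it removes only the metadata, not the underlying Parquet files.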
Ideal Candidate Should Have
Strong hands-on experience in some or most of the above technologies
Excellent Documentation Skills
Ability to read and understand compiled codebases (Java, C#); ability to write in these languages is a plus
A programmer's mindset to solve complex data and integration challenges
Preferred Skills
• Experience with data migration and refactoring of legacy workflows
• Familiarity with Apache Kafka, Airflow, or other orchestration tools
• Knowledge of data governance and security best practices