
Lead Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer with 12+ years of experience, local to Charlotte, NC. The contract is full-time, lasting more than 6 months, with a pay rate of $125,186.34 - $140,000.00 per year. It requires strong SQL, PySpark, and ETL framework skills.
Country
United States
Currency
$ USD
Day rate
$636.36
Date discovered
September 29, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Charlotte, NC 28202
Skills detailed
#Grafana #DataGovernance #SQL (Structured Query Language) #DataExtraction #CodeReviews #DataPipeline #Kubernetes #Monitoring #ETL (Extract, Transform, Load) #Migration #Python #Java #Kafka (Apache Kafka) #Kanban #S3 (Amazon Simple Storage Service) #Compliance #Prometheus #Spark (Apache Spark) #Leadership #Scala #Jira #Docker #Agile #Scrum #AbInitio #Batch #DataEngineering #Airflow #PySpark #BigData #Git
Role description
We are seeking a Lead Data Engineer with 12+ years of experience and proven mentorship and leadership skills to join our global partner's Counterparty Credit Risk organization. This role is local to Charlotte, NC (W-2 only) and offers the opportunity to modernize legacy systems, build high-volume data pipelines, and contribute to the development of a next-generation data platform.
Location: Charlotte, NC (Local candidates only)
Type: W-2 | Full-time
Client: Banking/Financial
Visa: USC, GC, and H-1B transfer only (no OPT/CPT)
Required Skills:
10+ years of overall engineering experience in data engineering
5+ years of SQL engineering
3+ years with PySpark, Python, S3, Airflow, Iceberg, Parquet
Strong knowledge of ETL frameworks and columnar data formats (Parquet, ORC, AVRO)
Hands-on experience with CI/CD, Git, Kafka, Docker, Kubernetes
Expertise working in Agile (Scrum & Kanban) environments with Jira
Key Responsibilities:
Lead and mentor Agile teams in data extraction, ingestion, and transformation.
Drive migration from Ab Initio & legacy systems to modern big data platforms (PySpark, S3, Iceberg, Airflow).
Architect and manage pipelines supporting 300+ data feeds for nightly batch processing.
Develop scalable backend services using Python & PySpark.
Collaborate with stakeholders to ensure data governance, compliance, and platform reliability.
Balance BAU tasks while leading strategic modernization efforts.
Provide technical mentorship, conduct code reviews, and ensure best practices.
Preferred Qualifications:
Strong background with Ab Initio or other large-scale ETL frameworks
Familiarity with monitoring tools (Grafana, Prometheus)
Java development exposure is a plus
Excellent communication and leadership skills
Job Type: Contract
Pay: $125,186.34 - $140,000.00 per year
Benefits:
401(k)
Health insurance
Experience:
Data Engineer: 10 years (Required)
License/Certification:
Data Engineer certification (Required)
Ability to Commute:
Charlotte, NC 28202 (Required)
Ability to Relocate:
Charlotte, NC 28202: Relocate before starting work (Required)
Work Location: Hybrid remote in Charlotte, NC 28202