

Sr Data Engineer - Charlotte, NC (Hybrid and Locals) W2 Contract
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr Data Engineer in Charlotte, NC (Hybrid, local candidates only) with a 12-month contract. Required skills include PySpark, Google Dataproc, and Google BigQuery. Proficiency in Python, Scala, and SQL is essential.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 10, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Charlotte, NC
Skills detailed: #PostgreSQL #Airflow #Data Engineering #Infrastructure as Code (IaC) #Data Ingestion #Data Aggregation #ETL (Extract, Transform, Load) #Deployment #Batch #Scala #Trino #Spark (Apache Spark) #MongoDB #Monitoring #Dataflow #Cloud #Python #SQL Queries #Data Pipeline #PySpark #SQL (Structured Query Language) #Hadoop #BigQuery #Kafka (Apache Kafka) #GCP (Google Cloud Platform)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Collaborate Solutions, Inc., is seeking the following. Apply via Dice today!
Title of Position: Sr Data Engineer
Location: Charlotte, NC (Hybrid, 2-3 days a week, local candidates only)
Duration: 12 months
Top Skills Required/Notes:
Required Skills (top 3 non-negotiables):
• PySpark
• Google Dataproc
• Google BigQuery
Nice to have:
• Airflow
• Scala
• Hadoop/Hive
Description:
• Convert product requirements into clear and actionable technical designs.
• Design, build, test, and optimize data pipelines (batch or real-time), while handling data ingestion, processing, and transformation tasks (see the sketch after this list).
• Implement built-in and customized data monitoring and governance mechanisms.
• Identify and remediate problems related to data ingestion, transformation, quality, or performance.
• Write Infrastructure-as-Code (IaC) for deployments and maintain CI/CD pipelines for data workflows.
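As a rough illustration of the pipeline work described above, here is a minimal sketch of a daily batch PySpark job of the kind that would run on Dataproc and land results in BigQuery. The bucket, dataset, table, and column names are hypothetical placeholders, and the BigQuery write assumes the spark-bigquery connector is available on the cluster; a real pipeline would add the monitoring, governance, and data-quality checks called out above.

```python
# Minimal sketch only: a daily batch job in the PySpark / Dataproc / BigQuery
# stack named above. All paths, dataset names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F


def run(spark: SparkSession) -> None:
    # Ingest: raw events previously landed in Cloud Storage (hypothetical path).
    raw = spark.read.json("gs://example-bucket/raw/events/dt=2025-07-10/")

    # Transform: basic cleansing plus a daily per-customer aggregation.
    daily = (
        raw.filter(F.col("event_type").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date", "customer_id")
           .agg(
               F.count("*").alias("event_count"),
               F.sum("amount").alias("total_amount"),
           )
    )

    # Load: write the aggregate to BigQuery (assumes the spark-bigquery
    # connector jar is present on the Dataproc cluster).
    (
        daily.write.format("bigquery")
             .option("table", "example_dataset.daily_customer_events")
             .option("temporaryGcsBucket", "example-temp-bucket")
             .mode("overwrite")
             .save()
    )


if __name__ == "__main__":
    run(SparkSession.builder.appName("daily_events_batch").getOrCreate())
```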
Required Testing: Strong grasp of data structures and algorithms (arrays, hash maps, etc.), with proven ability in parsing and transforming data. Skilled in developing ETL-like scripts using Python or Scala in a Spark environment, and proficient in writing optimized, high-performance SQL queries that leverage data aggregation, window functions, and subqueries.
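For context on that testing focus, the following is an illustrative sketch of the kind of SQL it describes (aggregation, a window function, and a subquery/CTE) expressed through Spark SQL. The orders table and its columns are hypothetical and assumed to already be registered in the metastore.

```python
# Illustrative only: aggregation, a window function, and a CTE combined in one
# query, run through Spark SQL. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_skills_sketch").getOrCreate()

spark.sql("""
    WITH order_totals AS (                    -- CTE/subquery: per-day aggregation
        SELECT customer_id,
               CAST(order_ts AS DATE) AS order_date,
               SUM(amount)            AS daily_total
        FROM   orders
        GROUP  BY customer_id, CAST(order_ts AS DATE)
    )
    SELECT customer_id,
           order_date,
           daily_total,
           SUM(daily_total) OVER (            -- window function: 7-day running total
               PARTITION BY customer_id
               ORDER BY order_date
               ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
           ) AS rolling_7d_total
    FROM   order_totals
""").show()
```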
Software Skills Required:
Language - Python, Scala
Database - BigQuery, Hive, Trino, PostgreSQL, MongoDB, etc.
Data Platforms - Spark, Airflow, Kafka, Google Cloud Platform (BigQuery, Dataflow) (see the illustrative scheduling sketch below)
Modelling and Transformation - ELT/ETL framework
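Since Airflow appears both as a nice-to-have and in the platform list, here is a minimal, hypothetical sketch of how the batch job above might be scheduled. The DAG id, cluster name, region, and script path are placeholders, and the task assumes the Cloud SDK (gcloud) is available on the Airflow worker.

```python
# Hypothetical sketch: scheduling the batch job above with Airflow 2.x.
# DAG id, cluster, region, and script path are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_events_batch",
    start_date=datetime(2025, 7, 10),
    schedule="@daily",   # Airflow 2.4+ argument; older versions use schedule_interval
    catchup=False,
) as dag:
    submit_dataproc_job = BashOperator(
        task_id="submit_dataproc_job",
        bash_command=(
            "gcloud dataproc jobs submit pyspark "
            "gs://example-bucket/jobs/daily_events_batch.py "
            "--cluster=example-cluster --region=us-east1"
        ),
    )
```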