

Big Data Engineer - W2 Only
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer on a 12-month contract, hybrid in Charlotte, NC; Columbus, OH; or Dallas, TX. Key skills include Hadoop, Spark, Python, and cloud platforms. Experience in data modeling and big data technologies is essential.
Country
United States
Currency
$ USD
-
Day rate
-
Date discovered
June 5, 2025
Project duration
More than 6 months
-
Location type
Hybrid
-
Contract type
W2 Contractor
-
Security clearance
Unknown
-
Location detailed
Charlotte, NC
-
Skills detailed
#Spark (Apache Spark) #SQL (Structured Query Language) #Unix #Data Modeling #NoSQL #Python #BigQuery #Cloud #Hadoop #Big Data #Kafka (Apache Kafka) #SQL Server #Scala #Azure #GitHub #Programming #Shell Scripting #Data Warehouse #HDFS (Hadoop Distributed File System) #PySpark #Data Engineering #GCP (Google Cloud Platform) #Scripting #Cloudera
Role description
Title: Big Data Engineer
Client Location: Charlotte, NC; Columbus, OH; or Dallas, TX (hybrid role, any of the locations)
Duration: 12-month contract
Spark processing engine.
Big data tools/technologies and streaming (Hive, Kafka).
Data modeling.
Experience analyzing data to discover opportunities and address gaps.
Experience working with a cloud or on-prem big data platform (e.g., Google BigQuery, Azure Data Warehouse, or similar).
Programming experience in Python.
Skills:
Hadoop, Hive, Spark, Cloudera, SQL, NoSQL, Python, Python frameworks, CI/CD, GCP.
Experience with Hadoop components including HDFS, Spark, Hive, Scala, Python, PySpark, and MongoDB.
Experience with SQL, SQL Server, UNIX shell scripting, and GitHub.