

Cyber Space Technologies LLC
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Columbus, OH, focusing on Python, Spark, and AWS. Contract length is unspecified; pay is on a W2 basis only. Key skills include ETL, SQL, and AI/ML data exposure; industry experience is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Columbus, OH
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #SQL (Structured Query Language) #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Data Science #Data Pipeline #Python #Apache Spark #AWS (Amazon Web Services) #Data Engineering #ML (Machine Learning) #Datasets #Scala #PySpark
Role description
Job Title: Data Engineer (Python, Spark, AWS)
Location: Columbus, OH
Candidates on our W2 only. This is not a C2C role.
Job Description:
We are seeking a Data Engineer to build and maintain scalable data pipelines using Python, Apache Spark, and AWS. The role involves working with large datasets, optimizing ETL processes, and supporting AI/ML initiatives by enabling reliable data and feature pipelines. You will collaborate closely with data scientists and analytics teams to deliver high-quality, production-ready data solutions.
Key Skills: Python, Spark (PySpark), AWS, SQL, ETL, AI/ML data exposure
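The kind of ETL work this role describes can be sketched as a simple extract-transform-load step. The example below is a minimal illustration in plain Python with only the standard library; in practice a role like this would use PySpark DataFrames on AWS (e.g. EMR or Glue), and the sample data, field names, and cleaning rule here are hypothetical.

```python
import csv
import io

# Hypothetical raw input: one malformed record that the transform step drops.
RAW = """user_id,amount
1,10.50
2,not_a_number
3,4.25
"""

def extract(text):
    """Extract: parse raw CSV text into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: coerce types and drop records that fail validation."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # skip malformed records rather than failing the pipeline
        out.append({"user_id": int(r["user_id"]), "amount": amount})
    return out

def load(rows):
    """Load: serialize the cleaned records back to CSV (stand-in for a real sink)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["user_id", "amount"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

clean = transform(extract(RAW))
print(len(clean))  # → 2 (the malformed record was dropped)
print(load(clean))
```

The same three-stage shape carries over to PySpark, where extract becomes `spark.read`, transform becomes DataFrame operations, and load becomes a write to S3 or a warehouse.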






