DataFactZ

Data Engineering Specialist (Confluent Kafka Exp)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineering Specialist contract position in Fort Mill, SC, requiring 10+ years in infrastructure engineering and 3–5 years in AWS data engineering. Key skills include Confluent Kafka, PySpark, Terraform, and Kubernetes. Preferred certifications: AWS Solutions Architect, CKAD, CKA.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 30, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Fort Mill, SC
-
🧠 - Skills detailed
#Spark (Apache Spark) #Azure #PySpark #Kafka (Apache Kafka) #Kubernetes #SQL Server #Automation #Redshift #Ansible #Artifactory #SQL (Structured Query Language) #Jenkins #Python #Airflow #Terraform #Data Engineering #Splunk #AI (Artificial Intelligence) #Deployment #Spark SQL #GitHub #Data Pipeline #AWS (Amazon Web Services) #Cloud #Bash #PostgreSQL #Dynatrace #Monitoring #Scala
Role description
Job Role: Data Engineering Specialist
Location: Fort Mill, SC
Hire-Type: Contract

We’re looking for a seasoned engineer with deep expertise in Confluent Kafka, cloud automation, and AWS-based data pipelines. This role blends infrastructure engineering, event streaming, and hands-on data development to support large-scale, real-time systems across hybrid cloud environments.

What You’ll Do
• Design and manage scalable Confluent Kafka clusters and event streaming solutions (a producer/consumer sketch follows below).
• Build and maintain data pipelines using PySpark, Spark SQL, Python, and AWS services (see the pipeline sketch below).
• Automate infrastructure using Terraform, Ansible, Bash, and cloud-native tooling.
• Implement CI/CD pipelines (GitHub, Jenkins, Artifactory, Octopus, Harness).
• Support Kubernetes/Rancher environments and cloud provisioning (AWS & Azure).
• Collaborate with cross-functional teams to ensure smooth deployments and reliable data flows.

What You Bring
• 10+ years in infrastructure/middleware engineering; 3–5 years in AWS data engineering.
• Deep expertise in Confluent Kafka administration and performance tuning.
• Strong skills in PySpark, Spark SQL, Python, Terraform, Ansible, Bash.
• Experience with PostgreSQL, Redshift, SQL Server, Glue, and Airflow (a DAG sketch appears after this list).
• Knowledge of Kubernetes, Rancher, and monitoring tools (Splunk, ELK, Dynatrace).
• Excellent communication, problem-solving, and collaboration skills.
• Bachelor’s degree in CS/Engineering or equivalent experience.
• Preferred: AWS Solutions Architect, CKAD, CKA certifications.
• Familiarity with AI tools and modern engineering workflows.
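To make the event-streaming duties concrete, here is a minimal producer/consumer sketch using the confluent-kafka Python client. The broker address, topic name, and consumer group id are illustrative placeholders, not details from this posting.

```python
# Minimal sketch, assuming the confluent-kafka Python client (pip install confluent-kafka).
# Broker address, topic, and group id are illustrative placeholders.
from confluent_kafka import Consumer, Producer

BOOTSTRAP = "localhost:9092"  # placeholder; a real cluster would use the Confluent bootstrap URL
TOPIC = "orders"              # hypothetical topic name

def delivery_report(err, msg):
    # Invoked once per message to confirm delivery or surface a broker error.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}] at offset {msg.offset()}")

# Produce a single event and block until the broker acknowledges it.
producer = Producer({"bootstrap.servers": BOOTSTRAP})
producer.produce(TOPIC, key="order-1", value='{"amount": 42}', callback=delivery_report)
producer.flush()

# Read it back with a consumer group, starting from the earliest offset.
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "orders-readers",   # hypothetical group id
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    print(f"Consumed: key={msg.key()}, value={msg.value()}")
consumer.close()
```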
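Likewise, a minimal sketch of the PySpark and Spark SQL pipeline work described above, assuming a Spark 3.x environment; the S3 paths, schema, and column names are hypothetical.

```python
# Minimal sketch, assuming PySpark 3.x; S3 paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Read raw JSON events (for example, landed by a Kafka sink connector) from S3.
raw = spark.read.json("s3a://example-bucket/raw/orders/")

# Basic cleanup: de-duplicate on a business key and normalize the timestamp.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Use Spark SQL for the aggregation step.
clean.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT CAST(order_ts AS DATE) AS order_date,
           COUNT(*)               AS order_count,
           SUM(amount)            AS total_amount
    FROM orders
    GROUP BY CAST(order_ts AS DATE)
""")

# Write curated output partitioned by date for downstream consumers (e.g., Glue/Redshift).
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_orders/"
)
spark.stop()
```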
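Finally, a minimal orchestration sketch assuming Apache Airflow 2.x; the DAG id, schedule, and spark-submit command are placeholders tying together the pipeline above.

```python
# Minimal sketch, assuming Apache Airflow 2.x; DAG id, schedule, and command are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_rollup",      # hypothetical DAG id
    start_date=datetime(2026, 1, 1),
    schedule="@daily",                 # 'schedule' is the Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    # Submit the PySpark job sketched above; master and script path are placeholders.
    run_rollup = BashOperator(
        task_id="spark_submit_rollup",
        bash_command="spark-submit --master yarn orders_daily_rollup.py",
    )
```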