Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Dallas, TX or Charlotte, NC. The contract length and pay rate are unspecified. Key skills include Python, Java, Apache Spark, AWS, and experience with machine learning pipelines.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
Unknown
πŸ—“οΈ - Date discovered
August 14, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
On-site
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Data Quality #Terraform #Lambda (AWS Lambda) #Kubernetes #MongoDB #PostgreSQL #Scala #AWS (Amazon Web Services) #Deployment #Security #TensorFlow #Data Engineering #Data Pipeline #Java #Python #Spark (Apache Spark) #Tableau #GCP (Google Cloud Platform) #Snowflake #AI (Artificial Intelligence) #PySpark #Kafka (Apache Kafka) #Docker #Model Deployment #Databricks #R #ML (Machine Learning) #Compliance #Cloud #Spring Boot #AWS EC2 (Amazon Elastic Compute Cloud) #PyTorch #Apache Spark #Leadership #GIT #Programming #SQL (Structured Query Language) #EC2 #Databases #Data Science #Aurora #Jenkins
Role description
Role: Sr. Data Engineer
Location: Dallas, TX / Charlotte, NC (Onsite)

Required Skills:
• Strong programming skills in Python, Java, and Scala.
• Expertise in Apache Spark, Kafka, PySpark, and Databricks.
• Hands-on experience with AWS (EC2, EMR, Lambda, Fargate, Aurora) and/or GCP.
• Proficiency in SQL and with databases such as Snowflake, PostgreSQL, and MongoDB.
• Experience with CI/CD tools (Jenkins, Git), Docker, and Kubernetes.
• Familiarity with data science tools (TensorFlow, PyTorch, Scikit-learn) is a plus.
• Strong understanding of cloud security, compliance, and cost optimization.
• Experience with machine learning pipelines and AI model deployment.
• Prior experience building internal frameworks or tools for data quality and validation.
• Contributions to open-source projects or personal ML/data engineering projects.

Responsibilities:
• Design and develop scalable data pipelines using PySpark, Scala, and Apache Spark.
• Build and maintain real-time streaming applications using Kafka.
• Develop and deploy APIs using Spring Boot for data validation and integration.
• Engineer reusable modules for data quality, schema validation, and deduplication.
• Migrate and modernize legacy systems to AWS Fargate, Lambda, and Aurora.
• Automate infrastructure provisioning using Terraform and manage CI/CD pipelines with Jenkins.
• Implement secure data handling practices, including tokenization and secret management.
• Monitor and troubleshoot production systems, ensuring high availability and performance.
• Collaborate with cross-functional teams to support data onboarding, application enhancements, and operational support.
• Create dashboards and reports in Tableau for operational visibility and leadership reporting.

Regards,
Praveen Kumar
Talent Acquisition Group – Strategic Recruitment Manager
praveen.r@themesoft.com | Themesoft Inc