

Spark Tek Management Consulting LLC
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Fort Mill, SC, with a contract length of "unknown," offering a pay rate of "unknown." Candidates should have 13+ years of experience, expertise in Snowflake, Python, and ML, and familiarity with LLM frameworks.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 8, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Fort Mill, SC
-
🧠 - Skills detailed
#Data Management #GIT #SQL (Structured Query Language) #Observability #Data Governance #Docker #MLflow #Deployment #Model Deployment #Data Quality #Python #Metadata #Data Science #Scala #LangChain #AI (Artificial Intelligence) #SageMaker #Hugging Face #Automation #ML (Machine Learning) #AWS (Amazon Web Services) #Business Analysis #Version Control #Snowflake #Snowpark #TensorFlow #Data Pipeline #Data Engineering #ETL (Extract, Transform, Load) #Kubernetes
Role description
Job Title: Senior Data Engineer (Snowflake & ML)
Location: Fort Mill, SC
Experience Required: 13+ years
Role Overview
We are seeking a highly skilled Senior Data Engineer with extensive experience in Snowflake and Machine Learning (ML) to design, implement, and optimize data pipelines and intelligent automation solutions. The ideal candidate will have strong expertise in Snowflake, Python, and ML pipeline deployment, as well as hands-on exposure to emerging LLM frameworks and AI-driven automation.
Key Responsibilities
• Lead the design and development of scalable data pipelines and ETL/ELT workflows on Snowflake.
• Implement and optimize Snowflake architectures with Snowpark and advanced SQL techniques.
• Build, deploy, and manage end-to-end ML pipelines using frameworks such as scikit-learn, TensorFlow, MLflow, or SageMaker.
• Integrate and operationalize Large Language Models (LLMs) using frameworks like LangChain, Hugging Face, and OpenAI.
• Drive LLM-based automation leveraging AWS Bedrock + Claude Sonnet for intelligent workflows.
• Ensure data quality, governance, and observability across platforms.
• Collaborate with cross-functional teams (data scientists, ML engineers, business analysts) to deliver enterprise-grade solutions.
• Implement CI/CD pipelines, containerization (Docker, Kubernetes), and Git-based workflows for deployment and version control.
Required Skills & Experience
• 13+ years of experience in data engineering, analytics, or data science roles.
• 10+ years in data engineering & pipeline development.
• Experience with 2–3 large-scale Snowflake implementations (including ML pipeline & model deployment).
• Strong expertise in Snowflake, Snowpark, Python, and SQL.
• Hands-on experience with ML tools (scikit-learn, TensorFlow, MLflow, SageMaker).
• Familiarity with LLM frameworks (LangChain, OpenAI, Hugging Face) and prompt engineering.
• Knowledge of data governance, metadata management, and observability tools.
• Proficiency in CI/CD, Git, Docker, Kubernetes.
Note: Only candidates with GC, USC, or H4 EAD status who are willing to work on W2 or 1099 will be considered.





