

PATHSDATA
Freelance Data Engineer (AI & ML Focus) – US / Canada Only
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Freelance Data Engineer (AI & ML Focus) based in the US or Canada, with no minimum hours initially. Key skills include Data Engineering, MCP, Python, SQL, and cloud platforms. Experience with AI/ML data pipelines is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 28, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Scala #Kafka (Apache Kafka) #Redshift #BigQuery #Cloud #AI (Artificial Intelligence) #Big Data #Forecasting #Data Lake #Databricks #PySpark #Deployment #ML (Machine Learning) #GCP (Google Cloud Platform) #NLP (Natural Language Processing) #Batch #Airflow #ETL (Extract, Transform, Load) #Data Quality #Azure #Monitoring #SQL (Structured Query Language) #Python #Snowflake #Spark (Apache Spark) #Data Pipeline #Model Deployment #AWS (Amazon Web Services) #Data Engineering #Data Warehouse
Role description
We are looking for an experienced Data Engineer based in the United States or Canada to support ongoing and upcoming projects. The ideal candidate has deep expertise in Data Engineering, AI, and Machine Learning, along with strong hands-on experience with MCP.
This engagement will start with no minimum hours, allowing flexibility on both sides, with the potential to scale hours over time based on project needs and performance.
Responsibilities
• Design and maintain scalable ETL/ELT data pipelines (a minimal PySpark sketch follows this list)
• Build and optimize data lakes and data warehouses
• Develop and manage MCP-based data and AI workflows
• Prepare, validate, and manage data for AI/ML model training and deployment
• Implement feature engineering and data quality checks
• Support MLOps pipelines, model deployment, and monitoring
• Work with large-scale batch and real-time data systems
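To give a concrete flavour of the pipeline and data-quality work above, here is a minimal, illustrative sketch of a batch ETL step with a simple data-quality gate, assuming PySpark. The bucket paths, column names, and the 95% threshold are hypothetical placeholders, not project specifics.

```python
# Minimal sketch: a batch ETL step with a simple data-quality gate.
# All paths, column names, and thresholds here are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Extract: read raw events from a (hypothetical) data-lake path
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Transform: deduplicate, drop rows missing the event timestamp,
# and derive a partition column
clean = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Data-quality gate: refuse to load if cleaning dropped too many rows
total = raw.count()
kept = clean.count()
if total > 0 and kept / total < 0.95:  # threshold is illustrative
    raise ValueError(f"Data-quality gate failed: kept {kept}/{total} rows")

# Load: write to a curated zone, partitioned for downstream consumers
clean.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3://example-bucket/curated/events/")
```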
Required Skills
• Strong experience in Data Engineering (Python, SQL, PySpark)
• Proven hands-on experience with MCP
• Solid understanding of AI/ML data pipelines and workflows
• Cloud platforms: AWS, GCP, or Azure
• Big data tools: Spark, Databricks, Kafka, Airflow (see the Airflow sketch after this list)
• Data warehouses: Snowflake, BigQuery, Redshift
• Experience building production-grade, scalable data systems
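As an illustration of the orchestration side of the stack listed above, below is a minimal Airflow DAG sketch wiring an extract → validate → load sequence. The DAG id, schedule, and task bodies are hypothetical placeholders; the syntax assumes Airflow 2.4+, where the `schedule` argument replaced `schedule_interval`.

```python
# Minimal sketch: an Airflow DAG wiring extract -> validate -> load.
# DAG id, schedule, and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull raw data from a source system (placeholder)."""

def validate():
    """Run data-quality checks on the extracted batch (placeholder)."""

def load():
    """Write the validated batch to the warehouse (placeholder)."""

with DAG(
    dag_id="example_daily_etl",          # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_validate >> t_load    # linear dependency chain
```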
Nice to Have
• Experience with NLP, forecasting, recommendation systems
• Familiarity with CI/CD, MLOps, and model monitoring
• Prior experience working with US or Canadian clients






