

Charter Global
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience in Python, PySpark, Databricks, and Snowflake. It is a hybrid position based in Manhattan, NY, lasting 6-12+ months, requiring expertise in scalable data pipelines and ML workflows.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 27, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Manhattan, NY
-
🧠 - Skills detailed
#Data Pipeline #MLflow #Monitoring #Data Quality #Scala #Azure #Cloud #PySpark #Spark (Apache Spark) #ML (Machine Learning) #Azure Databricks #Snowflake #AI (Artificial Intelligence) #Automation #Data Modeling #Databricks #Data Engineering #Data Science #Python #ETL (Extract, Transform, Load)
Role description
Title: Data Engineer (Need NY or NJ Locals only)
Location: Manhattan, NY (Candidates should be NY/NJ locals)
Duration: 6-12+ months
Notes:
Number of openings: 1
Location/Travel – Onsite requirements: Candidates should be NY/NJ locals. The role is hybrid with 2-3 days per week in Manhattan; the first week is fully onsite in Manhattan.
Contract description:
• Design, build, and maintain scalable production data pipelines for ingestion, transformation, and modeling across cloud data platforms (Azure, Databricks, Snowflake).
• Develop solutions using Python/PySpark, Databricks, and Snowflake, ensuring performance, reliability, and cost‑efficient Spark execution.
• Implement orchestration workflows, data quality frameworks, monitoring/alerting, and CI/CD pipelines to support automated, high‑quality data delivery.
• Collaborate with data scientists to operationalize machine learning models using MLflow and MLOps best practices.
• Partner closely with cross‑functional stakeholders to understand requirements, communicate progress, and deliver high‑impact data solutions.
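The data-quality and pipeline bullets above can be sketched in plain Python. In a Databricks/Snowflake stack this gate would typically be expressed with Delta Live Tables expectations or a dedicated data-quality framework; the rule names and record shape below are illustrative, not from the posting:

```python
# Minimal, framework-agnostic sketch of a data-quality gate in a pipeline.
# In production this logic usually lives in Delta Live Tables expectations
# or a tool like Great Expectations; everything here is an assumption.

def validate_records(records, rules):
    """Split records into (valid, rejected) according to per-field rules.

    rules maps a rule name to a predicate over one record; a record is
    rejected if any predicate fails, and the failed rule names are kept
    for monitoring/alerting.
    """
    valid, rejected = [], []
    for rec in records:
        failures = [name for name, check in rules.items() if not check(rec)]
        (rejected if failures else valid).append((rec, failures))
    return [r for r, _ in valid], rejected

# Hypothetical rules for an ingestion feed.
rules = {
    "id_not_null": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

records = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},
    {"id": 2, "amount": -3.0},
]

good, bad = validate_records(records, rules)
print(len(good), len(bad))  # → 1 2
```

Rejected rows would normally be routed to a quarantine table and surfaced through the monitoring/alerting layer rather than silently dropped.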
Qualifications
• 5+ years of professional experience with Python, PySpark, Databricks, and Snowflake in production data environments.
• Proven expertise in building scalable data pipelines, including ingestion frameworks, ETL/ELT processes, and data modeling.
• Deep experience with Spark performance tuning and cloud cost‑optimization strategies.
• Hands‑on experience with workflow orchestration, data quality frameworks, monitoring/alerting systems, and CI/CD automation.
• Exposure to AI/LLM technologies and experience supporting ML workloads using MLflow or related MLOps tools.






