

ISITE TECHNOLOGIES
Senior Machine Learning Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Machine Learning Engineer in New York City; the contract length and pay rate are unspecified. Candidates should have 10 years of experience, strong Python skills, and expertise in PyTorch, Spark, and cloud environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 28, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#C++ #Programming #Model Validation #Data Processing #Azure #PyTorch #Migration #R #Data Architecture #Data Warehouse #DevOps #Deployment #Spark (Apache Spark) #"ETL (Extract, Transform, Load)" #Documentation #Distributed Computing #Version Control #Cloud #Scala #AWS (Amazon Web Services) #Deep Learning #Airflow #SQL (Structured Query Language) #ML (Machine Learning) #Snowflake #Data Pipeline #GCP (Google Cloud Platform) #Compliance #Python #Data Science #Databricks
Role description
Job Role: Senior Machine Learning Engineer / Data Scientist
Job Location: NYC
Experience: 10 years
Job Description:
🔹 Machine Learning Engineering
• Design, develop, and deploy scalable machine learning models using modern frameworks (e.g., PyTorch)
• Re-engineer and optimize legacy models into efficient, production-grade implementations
• Improve model performance, scalability, and reproducibility
• Support model validation, benchmarking, and certification processes
• Ensure full traceability and documentation of model logic and outputs
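As a hedged illustration of the modeling work described above (model architecture, names, and shapes here are hypothetical, not part of the role), a minimal pattern for a reproducible PyTorch model with an exportable deployment artifact might look like:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # fixed seed: one ingredient of reproducibility

class ScoringModel(nn.Module):
    """Hypothetical small feed-forward model standing in for a
    re-engineered legacy model."""
    def __init__(self, in_features: int = 8, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = ScoringModel()
model.eval()
batch = torch.randn(4, 8)
with torch.no_grad():
    scores = model(batch)
print(scores.shape)  # torch.Size([4, 1])

# TorchScript export produces a self-contained, versionable artifact,
# which supports the traceability and certification goals above.
scripted = torch.jit.script(model)
```

The scripted artifact can be saved with `scripted.save(...)` and loaded in a serving environment without the original Python class definition.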
____________________________________________
🔹 Data Platform & Pipeline Engineering
• Design and optimize distributed data pipelines using Spark-based platforms (e.g., Databricks)
• Build and refactor ETL/ELT workflows for performance and scalability
• Implement data models within modern cloud data warehouses (e.g., Snowflake)
• Apply best practices for cloud-native data architecture
• Standardize reusable utilities and frameworks for analytics workflows
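To illustrate the transform-and-load step described above, here is a minimal sketch using the standard-library `sqlite3` module as a stand-in for a cloud warehouse such as Snowflake (table and column names are illustrative only):

```python
import sqlite3

# sqlite3 stands in for a cloud data warehouse; in practice the same
# SQL pattern would run against Snowflake or a Databricks SQL endpoint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_trades (id INTEGER, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO raw_trades VALUES (?, ?, ?)",
    [(1, "AAPL", 100), (2, "AAPL", -40), (3, "MSFT", 25)],
)

# Transform + load: aggregate raw rows into a curated table
# (the "T" and "L" of an ELT workflow).
conn.execute(
    """CREATE TABLE positions AS
       SELECT symbol, SUM(qty) AS net_qty
       FROM raw_trades
       GROUP BY symbol"""
)
rows = conn.execute(
    "SELECT symbol, net_qty FROM positions ORDER BY symbol"
).fetchall()
print(rows)  # [('AAPL', 60), ('MSFT', 25)]
```

Pushing the aggregation into SQL that runs inside the warehouse, rather than pulling raw rows into application code, is the usual scalability lever in these pipelines.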
____________________________________________
🔹 Cloud Migration & Modernization
• Participate in migration of on-prem or legacy analytics platforms to cloud ecosystems
• Refactor existing codebases to align with modern engineering and DevOps standards
• Leverage cloud compute capabilities (including GPU acceleration where applicable)
• Support scheduling and orchestration of data and ML workflows
____________________________________________
🔹 Testing, Validation & Governance
• Conduct rigorous testing and validation to ensure data and model accuracy
• Perform parallel runs and benchmarking when modernizing systems
• Collaborate with governance, risk, and compliance stakeholders
• Maintain high standards of documentation and reproducibility
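The "parallel runs" bullet above can be sketched in a few lines: score the same inputs through the legacy and modernized implementations and assert agreement within a documented tolerance. The two functions here are hypothetical stand-ins, not the role's actual models:

```python
import math

def legacy_score(x: float) -> float:
    # Stand-in for a legacy implementation (e.g., ported from C++ or R).
    return math.exp(-x * x / 2.0)

def modern_score(x: float) -> float:
    # Stand-in for the re-engineered implementation.
    return math.exp(-0.5 * x ** 2)

# Parallel run: evaluate both implementations on the same input grid
# and record the worst-case disagreement.
inputs = [i / 10.0 for i in range(-30, 31)]
max_abs_diff = max(abs(legacy_score(x) - modern_score(x)) for x in inputs)
assert max_abs_diff <= 1e-12  # tolerance would be agreed with model risk
```

In regulated settings the tolerance, the input grid, and the run results would all be captured in the validation documentation rather than chosen ad hoc.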
____________________________________________
Required Qualifications
Technical Skills
• Strong programming skills in Python
• Hands-on experience with PyTorch (or similar deep learning frameworks)
• Expertise in Spark-based data processing (Databricks preferred)
• Strong SQL skills
• Experience working with cloud data warehouses such as Snowflake
• Experience building and optimizing ETL/ELT pipelines
• Familiarity with distributed computing and performance tuning
____________________________________________
Cloud & DevOps
• Experience working in cloud environments (AWS, Azure, or GCP)
• Understanding of workflow orchestration tools (e.g., Airflow, native platform schedulers)
• Version control and CI/CD practices for ML pipelines
• Exposure to containerization and scalable deployment patterns
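The orchestration idea behind tools like Airflow, declaring task dependencies and executing them in a valid order, can be shown with the standard-library `graphlib` module. This is a toy stand-in, not Airflow's API, and the task names are illustrative:

```python
from graphlib import TopologicalSorter

# Each task maps to the tasks it depends on (its predecessors),
# mirroring how an orchestrator's DAG is declared.
tasks = {
    "extract": [],
    "transform": ["extract"],
    "train_model": ["transform"],
    "publish_report": ["transform"],
}

ran = []
def run(name: str) -> None:
    ran.append(name)  # a real task would invoke pipeline code here

# static_order() yields tasks so every dependency runs first.
for name in TopologicalSorter(tasks).static_order():
    run(name)

print(ran[0])  # extract
```

An orchestrator adds scheduling, retries, and monitoring on top of this ordering, but the dependency-graph model is the same.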
____________________________________________
Preferred Qualifications
• Experience modernizing legacy codebases (C++, R, or similar)
• Experience in regulated industries (Financial Services, Banking, Insurance, etc.)
• GPU optimization experience
• Knowledge of model risk management or model validation frameworks
• Experience supporting large-scale data transformation initiatives






