

Data and MLOps Engineer
Featured Role | Apply directly with Data Freelance Hub
This is a remote contract role for a Data and MLOps Engineer; the pay rate is not specified. Candidates need 10+ years of data engineering experience, deep Azure expertise, and knowledge of healthcare data compliance.
Country: United States
Currency: Unknown
Day rate: -
Date discovered: July 29, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Remote
Skills detailed: #Data Science #Azure cloud #MLflow #Data Pipeline #AI (Artificial Intelligence) #Data Quality #Data Management #Datasets #Automation #Data Manipulation #Indexing #ML (Machine Learning) #AWS (Amazon Web Services) #Azure #Python #Storage #Kubernetes #Deployment #Scala #Security #Databases #Monitoring #Data Engineering #Compliance #Cloud #Docker #Model Deployment #FHIR (Fast Healthcare Interoperability Resources) #Data Storage #Data Integrity #Snowflake
Role description
Job Summary:
We are seeking a highly skilled Data and MLOps Engineer with deep technical expertise across cloud ecosystems, particularly within Azure. The ideal candidate has a strong foundation in MLOps, data pipeline architecture, and healthcare data standards. This role will be crucial in designing, implementing, and scaling machine learning infrastructure and data workflows across various environments.
Key Responsibilities:
Data Engineering & Pipeline Development
Design, develop, and maintain scalable data pipelines for processing large volumes of structured and unstructured data.
Implement best practices for data storage, retrieval, and access control to ensure data integrity, security, and compliance.
Proactively identify and resolve data quality issues and pipeline disruptions.
Data Management & Preprocessing
Preprocess and clean large datasets using Python, Azure tools, and other data manipulation technologies.
Work within Snowflake and Azure Storage (Blob, Postgres) for data analytics and management.
Search & Retrieval Systems
Apply in-depth knowledge of search algorithms, indexing techniques, and retrieval models.
Leverage tools like Azure AI Search and vector databases (e.g., Pinecone) to build effective information retrieval solutions.
Use chunking techniques and integrate vector-based search models.
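To illustrate the retrieval work this covers, here is a minimal sketch of chunking a document and ranking chunks by cosine similarity. The function names and toy two-dimensional vectors are illustrative only; in practice the embeddings would come from a model and the index would live in Azure AI Search or Pinecone rather than a Python list.

```python
import math

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-based chunks for vector indexing.
    Overlap keeps context that straddles a chunk boundary retrievable."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], index: list[dict], k: int = 2) -> list[dict]:
    """Return the k index entries most similar to the query vector,
    i.e. a brute-force stand-in for a vector database's k-NN query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item["vector"]),
                    reverse=True)
    return ranked[:k]
```

A vector store replaces the brute-force sort with an approximate nearest-neighbor index, but the chunk-embed-rank flow is the same.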
MLOps & Model Deployment
Automate ML workflows via CI/CD pipelines for streamlined training, testing, validation, and deployment.
Deploy, monitor, and scale machine learning models using platforms such as Azure, Snowflake, and AWS.
Track model versions and performance using tools like MLflow, DVC, and Azure ML.
Healthcare Data Compliance
Ensure HIPAA and FHIR standards are met when working with PII/PHI data.
Apply data masking and governance practices for auditability and regulatory compliance.
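One common masking pattern for PHI is salted-hash pseudonymization: identifiers are replaced with deterministic tokens, so records stay joinable for analytics without exposing the original values. The field names below are assumptions for illustration, and a production pipeline would manage the salt through a secrets vault (e.g. Azure Key Vault) rather than a constant.

```python
import hashlib

# Hypothetical set of identifier columns treated as PHI in this sketch.
PHI_FIELDS = {"patient_name", "ssn", "email"}

def mask_record(record: dict, salt: str = "audit-salt") -> dict:
    """Replace PHI fields with a truncated salted SHA-256 digest.
    Deterministic, so the same identifier always maps to the same
    token and joins across tables still work."""
    masked = {}
    for key, value in record.items():
        if key in PHI_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            masked[key] = digest[:16]
        else:
            masked[key] = value
    return masked
```

Note that hashing alone is pseudonymization, not full HIPAA de-identification; quasi-identifiers like dates and ZIP codes need separate handling under the Safe Harbor or Expert Determination rules.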
Post-Deployment Monitoring
Monitor and log AI/ML systems post-deployment to ensure reliability and model integrity.
Track model drift and implement performance optimizations to maintain accuracy.
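A standard way to quantify the drift mentioned above is the Population Stability Index (PSI), which compares a feature's live distribution against its training baseline. This is a minimal sketch; the bin count and the common 0.2 alert threshold are conventions, not values taken from this posting.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 5) -> float:
    """Population Stability Index between a baseline sample and a live
    sample. Rule of thumb: PSI > 0.2 signals significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def bin_fractions(data: list[float]) -> list[float]:
        counts = [0] * bins
        for x in data:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor at a small value so empty bins don't produce log(0).
        return [max(c / len(data), 1e-4) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job would compute this per feature on a schedule and page or trigger retraining when the index crosses the threshold.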
Collaboration & Communication
Collaborate with cross-functional teams including data scientists and product teams.
Translate complex technical concepts to non-technical stakeholders effectively.
Required Skills & Experience:
Minimum of 10 years' experience as a data engineer.
Deep hands-on experience with the Azure Cloud ecosystem (AI Search, Postgres, Blob Storage, Function App).
Proficiency in Python for data manipulation and automation.
Strong experience handling unstructured data and healthcare-related datasets.
Deep understanding of vectors and vector databases like Pinecone.
Experience scaling POCs to production environments.
Familiarity with Document Intelligence, Snowflake, and model governance best practices.
Hands-on expertise with containerization tools like Docker and Kubernetes.
Proven knowledge of model monitoring, versioning, and explainability tools.
Job Type: Contract
Schedule:
8 hour shift
Day shift
Monday to Friday
Application Question(s):
Do you have a minimum of 10 years' experience as a data engineer?
Do you have deep hands-on experience with the Azure Cloud ecosystem (AI Search, Postgres, Blob Storage, Function App)?
Are you proficient in Python for data manipulation and automation?
Do you possess strong experience handling unstructured data and healthcare-related datasets?
Work Location: Remote