

Ampstek
Sr. DataOps Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. DataOps Engineer with 8+ years of experience in CloudOps or DataOps, focusing on AWS and Databricks automation. Contract length is unspecified, with a hybrid work location in Duluth, GA. AWS certifications preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 11, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Georgia, United States
-
🧠 - Skills detailed
#GCP (Google Cloud Platform) #Data Modeling #Observability #Terraform #Lambda (AWS Lambda) #AWS (Amazon Web Services) #OpenSearch #Disaster Recovery #Cloud #Hugging Face #S3 (Amazon Simple Storage Service) #IAM (Identity and Access Management) #Data Engineering #RDS (Amazon Relational Database Service) #DevOps #GitHub #Azure #Athena #Logging #Redshift #Kubernetes #AI (Artificial Intelligence) #AWS CloudWatch #Storage #DataOps #Bash #GitLab #AutoScaling #Automation #Databricks #Python #EC2 #Scripting #SageMaker #Security #Scala #Agile #Compliance #GDPR (General Data Protection Regulation) #Data Pipeline #ETL (Extract, Transform, Load) #Deployment #VPC (Virtual Private Cloud) #DevSecOps #Langchain #Data Science #ML (Machine Learning) #Monitoring #Data Governance
Role description
Sr. DataOps Engineer
Hybrid (onsite in Duluth, GA)
Contractor
Experience: 8+ years in CloudOps or DataOps
Primary Focus: Automation in AWS and Databricks
Job Description:
We are seeking an experienced Senior DataOps Engineer to join our team. The ideal candidate has a strong background in DevOps, DataOps, or cloud engineering practices, with extensive experience automating CI/CD pipelines and operating modern data stack technologies.
Key Responsibilities:
• Develop and maintain robust, scalable data pipelines and infrastructure automation workflows using GitHub, AWS, and Databricks.
• Implement and manage CI/CD pipelines using GitHub Actions and GitLab CI/CD for automated infrastructure deployment, testing, and validation.
• Deploy and manage Databricks LLM Runtime or custom Hugging Face models within Databricks notebooks and model serving endpoints.
• Manage and optimize cloud infrastructure costs, usage, and performance through tagging policies, right-sizing EC2 instances, storage tiering strategies, and auto-scaling.
• Set up infrastructure observability and performance dashboards using AWS CloudWatch for real-time insights into cloud resources and data pipelines.
• Develop and manage Terraform or CloudFormation modules to automate infrastructure provisioning across AWS accounts and environments.
• Implement and enforce cloud security policies, IAM roles, encryption mechanisms (KMS), and compliance configurations.
• Administer Databricks workspaces, clusters, access controls, and integrations with cloud storage and identity providers.
• Enforce DevSecOps practices for infrastructure-as-code, ensuring all changes are peer-reviewed, tested, and compliant with internal security policies.
• Coordinate cloud software releases, patching schedules, and vulnerability remediation using Systems Manager Patch Manager.
• Automate AWS housekeeping and operational tasks (a minimal sketch of one such cleanup script follows this list), such as:
o Cleanup of unused EBS volumes, snapshots, and old AMIs
o Rotation of secrets and credentials using Secrets Manager
o Log retention enforcement using S3 Lifecycle policies and CloudWatch log groups
• Perform incident response, disaster recovery planning, and post-mortem analysis for operational outages.
• Collaborate with cross-functional teams including Data Scientists, Data Engineers, and other stakeholders to gather and implement infrastructure and data requirements.
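As a flavor of the housekeeping automation above, here is a minimal boto3 sketch that deletes long-unattached EBS volumes. The 30-day retention window, the default region, and the dry-run default are illustrative assumptions, not project standards.

```python
import boto3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30   # illustrative retention window, not a project standard
DRY_RUN = True        # flip to False only after reviewing the printed plan

def cleanup_unattached_volumes(region: str = "us-east-1") -> None:
    """Delete EBS volumes that are unattached and older than the retention window."""
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    paginator = ec2.get_paginator("describe_volumes")
    # Status 'available' means the volume is not attached to any instance.
    pages = paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}])
    for page in pages:
        for vol in page["Volumes"]:
            if vol["CreateTime"] < cutoff:
                print(f"delete {vol['VolumeId']} (created {vol['CreateTime']:%Y-%m-%d})")
                if not DRY_RUN:
                    ec2.delete_volume(VolumeId=vol["VolumeId"])

if __name__ == "__main__":
    cleanup_unattached_volumes()
```

A production version would typically honor a retain tag, snapshot before deletion, and run on a schedule via Lambda or Systems Manager; the same pattern extends to stale snapshots and old AMIs.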
Required Skills and Experience:
• 8+ years of experience in DataOps / CloudOps / DevOps roles, with strong focus on infrastructure automation, data pipeline operations, observability, and cloud administration.
• Strong proficiency in at least one scripting language (e.g., Python, Bash) and one infrastructure-as-code tool (e.g., Terraform, CloudFormation) for building automation scripts for AWS resource cleanup, tagging enforcement, monitoring, and backups.
• Hands-on experience integrating and operationalizing LLMs in production pipelines, including prompt management, caching, token tracking, and post-processing (a generic sketch follows this list).
• Deep hands-on experience with AWS services, including:
o Core: EC2, S3, RDS, CloudWatch, IAM, Lambda, VPC
o Data Services: Athena, Glue, MSK, Redshift
o Security: KMS, IAM, Config, CloudTrail, Secrets Manager
o Operational: Auto Scaling, Systems Manager, CloudFormation/Terraform
o Machine Learning/AI: Bedrock, SageMaker, OpenSearch Serverless
• Working knowledge of Databricks, including:
o Cluster and workspace management, job orchestration
o Integration with AWS Storage and identity (IAM passthrough)
• Experience deploying and managing CI/CD workflows using GitHub Actions, GitLab CI, or AWS CodePipeline.
• Strong understanding of cloud networking, including VPC peering, Transit Gateway, security groups, and PrivateLink setup.
• Familiarity with container orchestration platforms (e.g., Kubernetes, ECS) for deploying platform tools and services.
• Strong understanding of data modeling, data warehousing concepts, and AI/ML Lifecycle management.
• Knowledge of cost optimization strategies across compute, storage, and network layers.
• Experience with data governance, logging, and compliance practices in cloud environments (e.g., SOC 2, HIPAA, GDPR).
• Bonus: Exposure to LangChain, Prompt Engineering frameworks, Retrieval Augmented Generation (RAG), and vector database integration (AWS OpenSearch, Pinecone, Milvus, etc.)
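The LLM operationalization requirement above is broad, so a concrete shape may help: the sketch below wraps any text-in/text-out model call with response caching and token accounting. The `complete_fn` callable and the whitespace token counter are hypothetical stand-ins; a real deployment would invoke a Databricks model-serving or Bedrock endpoint and use the model's own tokenizer.

```python
import hashlib
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TrackedLLMClient:
    """Wraps a text-in/text-out model call with caching and token accounting."""
    complete_fn: Callable[[str], str]                              # hypothetical endpoint call
    count_tokens: Callable[[str], int] = lambda s: len(s.split())  # crude fallback counter
    cache: dict = field(default_factory=dict)
    tokens_in: int = 0
    tokens_out: int = 0

    def complete(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:                 # cache hit: no tokens spent, no latency
            return self.cache[key]
        self.tokens_in += self.count_tokens(prompt)
        start = time.monotonic()
        answer = self.complete_fn(prompt)     # the actual (here, stubbed) model call
        latency = time.monotonic() - start
        self.tokens_out += self.count_tokens(answer)
        print(f"llm call {latency:.2f}s, tokens in/out {self.tokens_in}/{self.tokens_out}")
        self.cache[key] = answer
        return answer

# Usage with a stubbed endpoint; the second call is served from cache.
client = TrackedLLMClient(complete_fn=lambda p: f"echo: {p}")
client.complete("summarize yesterday's pipeline failures")
client.complete("summarize yesterday's pipeline failures")
```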
Preferred Qualifications:
• AWS Certified Solutions Architect, DevOps Engineer, or SysOps Administrator certifications.
• Hands-on experience with multi-cloud environments, particularly Azure or GCP, in addition to AWS.
• Experience with infrastructure cost management tools like AWS Cost Explorer or FinOps dashboards (a minimal query sketch follows this list).
• Ability to write clean, production-grade Python code for automation scripts, operational tooling, and custom CloudOps utilities.
• Prior experience in supporting high-availability production environments with disaster recovery and failover architectures.
• Understanding of Zero Trust architecture and security best practices in cloud-native environments.
• Experience with automated cloud resource cleanup, tagging enforcement, and compliance-as-code using tools like Terraform Sentinel.
• Familiarity with Databricks Unity Catalog, access control frameworks, and workspace governance.
• Strong communication skills and experience working in agile cross-functional teams, ideally with Data Product or Platform Engineering teams.
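On the FinOps point above, a month-to-date spend report is a few lines with boto3's Cost Explorer client; the grouping by service and the sub-cent noise threshold are illustrative choices.

```python
import boto3
from datetime import date

def month_to_date_spend_by_service() -> None:
    """Print month-to-date unblended cost per AWS service via Cost Explorer."""
    ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer lives in us-east-1
    today = date.today()  # note: Cost Explorer rejects Start == End on the 1st of the month
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": today.replace(day=1).isoformat(), "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0.01:  # skip sub-cent noise
            print(f"{group['Keys'][0]:<50} ${amount:,.2f}")

if __name__ == "__main__":
    month_to_date_spend_by_service()
```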
About Ampstek
Ampstek is a global IT solutions partner serving clients across North America, Europe, APAC, LATAM, and MEA. We specialize in delivering talent and technology solutions for enterprise-level digital transformation, trading systems, data services, and regulatory compliance.
Contact:
Snehil Mishra
📧 snehil@ampstek.com
📞 Desk: 609-360-2673 Ext. 125
🔗 LinkedIn
🌐 www.ampstek.com