

Intellectt Inc
Database Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Database Engineer with a contract length of "unknown," offering a day rate of $520. It requires strong experience in data operations, CI/CD, containerization, and programming, with a focus on cloud and hybrid environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
March 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Observability #Scala #Terraform #Snowflake #Infrastructure as Code (IaC) #Airflow #DevOps #Cloud #Data Science #ML (Machine Learning) #Apache Spark #Storage #Batch #Trino #Databases #Apache Airflow #Data Catalog #Java #Security #Splunk #Kubernetes #Compliance #Data Storage #Programming #Data Pipeline #Data Governance #Luigi #Automation #Agile #Jupyter #Monitoring #Grafana #Spark (Apache Spark) #Deployment #Datadog #MLflow #Python #Docker #Databricks
Role description
Position Overview:
• We are seeking a highly motivated Data Operations Engineer to support the design, deployment, and operational management of data, analytics, and machine learning infrastructure. In this role, you will collaborate closely with infrastructure, platform, engineering, and data science teams to ensure reliable and scalable data systems. The ideal candidate will have strong problem-solving abilities, excellent communication skills, and hands-on experience managing modern data platforms in cloud and hybrid environments.
Key Responsibilities:
• Lead day-to-day data platform operations to ensure consistent delivery of high-quality analytics and infrastructure services.
• Provision, scale, and maintain data, analytics, and ML infrastructure supporting both batch and real-time systems.
• Support data pipelines, frameworks, tools, and services operating in hybrid cloud environments.
• Implement and manage zero-downtime deployments using CI/CD practices to rapidly deliver features and improvements.
• Collaborate with platform teams to build and maintain monitoring, observability, alerting, and self-healing tools.
• Troubleshoot and resolve complex issues in distributed systems and manage production incidents effectively.
• Conduct post-incident reviews and implement improvements to prevent recurrence.
• Develop automation and self-service tools to improve engineering productivity and service quality.
• Ensure adherence to security controls, governance policies, and compliance standards.
• Monitor and optimize infrastructure performance and operational costs.
• Work within Agile environments to support rapidly evolving data platforms and analytics initiatives.
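The monitoring, self-healing, and automation responsibilities above can be sketched in plain Python. This is an illustrative, hypothetical example only: the probe and restart hooks are stand-ins, not tools named in the role description.

```python
import time

def self_heal(probe, restart, max_failures=3, checks=10, interval=0.0):
    """Restart a service after `max_failures` consecutive failed probes.

    `probe` returns True when the service is healthy; `restart` is the
    recovery action. Both are caller-supplied stand-ins in this sketch.
    Returns the number of restarts triggered.
    """
    failures = 0
    restarts = 0
    for _ in range(checks):
        if probe():
            failures = 0
        else:
            failures += 1
            if failures >= max_failures:
                restart()
                restarts += 1
                failures = 0
        time.sleep(interval)
    return restarts

# Simulated service: unhealthy until "restarted".
state = {"healthy": False}
restarts = self_heal(lambda: state["healthy"],
                     lambda: state.update(healthy=True),
                     max_failures=3, checks=6)
print(restarts)  # → 1
```

Real self-healing tooling would hook `probe` to a health endpoint and `restart` to an orchestrator (e.g. a Kubernetes rollout), but the consecutive-failure threshold shown here is the core idea.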
Required Qualifications:
• Strong experience managing data operations, analytics infrastructure, or database engineering environments.
• Hands-on experience with CI/CD pipelines and DevOps practices.
• Experience with containerization and orchestration technologies such as Docker and Kubernetes.
• Proficiency in at least one programming language such as Python, Go, Rust, Java, or Scala.
• Experience with monitoring and observability tools such as Splunk, Grafana, or Datadog.
• Strong troubleshooting skills in distributed systems and production environments.
• Experience implementing Infrastructure as Code (IaC) using tools like Terraform.
• Excellent communication and collaboration skills.
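Troubleshooting distributed systems in production often reduces to disciplined retry policies. As a hedged illustration (the flaky dependency below is simulated, not a real service from this role), exponential backoff with jitter looks like:

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.01):
    """Retry `call` with exponential backoff plus full jitter.

    Returns (result, attempts_used); re-raises the last error if all
    attempts fail. Delays are kept tiny so the demo runs instantly.
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return call(), attempt
        except Exception:
            if attempt == max_attempts:
                raise
            # Full jitter: wait a random fraction of the current window,
            # then double the window for the next attempt.
            time.sleep(random.uniform(0, delay))
            delay *= 2

# Simulated flaky dependency that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result, attempts = retry_with_backoff(flaky)
print(result, attempts)  # → ok 3
```

Jitter matters because synchronized retries from many clients can themselves overload a recovering service (the "thundering herd" problem).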
Preferred Qualifications:
• Experience working with modern analytics and machine learning platforms, including Apache Spark, Trino / Apache Pinot, Apache Flink, Apache Airflow / Luigi, Snowflake, Databricks, MLflow, data catalogs, and Jupyter notebooks.
• Familiarity with data storage and distributed database technologies such as Cassandra and vector databases.
• Experience supporting data governance, security frameworks, and compliance processes.
• Experience working in fast-paced, large-scale cloud or hybrid infrastructure environments.
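As context for the vector-database bullet above: a nearest-neighbour lookup reduces to similarity search over embedding vectors. A minimal brute-force sketch (toy 3-dimensional vectors, invented labels) is:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query, corpus):
    """Return the corpus key whose vector is most similar to `query`.

    Production vector databases approximate this search with indexes
    such as HNSW; brute force is exact but O(n) per query.
    """
    return max(corpus, key=lambda k: cosine_similarity(query, corpus[k]))

# Hypothetical embeddings for illustration only.
corpus = {
    "batch-pipeline": [1.0, 0.1, 0.0],
    "real-time-stream": [0.0, 1.0, 0.2],
    "ml-training": [0.1, 0.2, 1.0],
}
print(nearest([0.9, 0.2, 0.1], corpus))  # → batch-pipeline
```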
