

DevOps Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a DevOps Engineer with 7+ years of experience, including 3 years in big data platforms. Contract length and pay rate are unspecified. Key skills include Python, Kubernetes, Docker, and experience with Apache Spark and MLOps practices.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
June 28, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Florida, United States
Skills detailed
#Bash #Scripting #Automation #Docker #Kafka (Apache Kafka) #HDFS (Hadoop Distributed File System) #Data Processing #Python #Monitoring #Airflow #Kubernetes #DevOps #Data Engineering #Kudu #YARN (Yet Another Resource Negotiator) #HBase #Scala #Spark (Apache Spark) #Apache Spark #Impala #Cloud #Hadoop #dbt (data build tool)
Role description
- 7+ years of experience in a DevOps/SRE function, with at least 3 years focused on big data platforms, data engineering, and cloud infrastructure services.
- Strong experience with scripting languages for automation and monitoring, such as Python, Bash, and Go.
- Experience designing, building, and maintaining scalable, robust cloud infrastructure, using Kubernetes and Docker for containerization and orchestration.
- Knowledge of MLOps platforms and practices.
- Hands-on experience with data processing tools such as Apache Spark, dbt, Kafka, and Airflow.
- Strong knowledge of Hadoop ecosystem components, including Spark Streaming, HDFS, HBase, YARN, Hive, Impala, Atlas, and Kudu.