

Sr. DevOps Engineer - Big Data Platform
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. DevOps Engineer focused on a Big Data Platform, offered as a fully remote position. Contract length is unspecified and the pay rate is not disclosed. Key skills include extensive DevOps/SRE experience, cloud infrastructure, and proficiency in Apache Spark, Hadoop, and CI/CD tools.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
June 19, 2025
Project duration
Unknown
Location type
Remote
Contract type
Unknown
Security clearance
Unknown
Location detailed
Orlando, FL
Skills detailed
#Databases #GIT #HDFS (Hadoop Distributed File System) #Data Management #Scripting #Docker #Cloud #dbt (data build tool) #Hadoop #Spark (Apache Spark) #Compliance #Data Engineering #Data Lake #Jenkins #Scala #DevOps #Bash #Deployment #Kubernetes #Storage #Data Processing #Snowflake #Java #Python #Apache Spark #Automation #Big Data #AI (Artificial Intelligence) #Data Lakehouse #Kafka (Apache Kafka) #Data Governance #Kerberos #JavaScript #Data Pipeline #Airflow
Role description
Join a dynamic team as a Sr. DevOps Engineer, where you will play a pivotal role in maintaining and optimizing a dedicated data environment. This fully remote position offers the chance to work with cutting-edge technology and collaborate with talented professionals.
Responsibilities
• Maintain and support a dedicated data environment within the BigData infrastructure.
• Plan capacity and manage data pruning, and administer databases efficiently (see the retention sketch after this list).
• Enhance system performance through tuning and optimization of tables and queries.
• Manage access control and secrets, and ensure compliance with data governance policies.
• Support data management and ingestion projects, ensuring seamless integration.
• Monitor and optimize system performance and reliability, resolving issues in data pipelines and storage solutions.
• Develop and manage CI/CD pipelines to automate testing, integration, and deployment processes.
• Collaborate with data engineering and development teams for seamless deployment integration.
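To give the capacity-planning and pruning duty some shape, here is a minimal sketch of a retention script for date-partitioned HDFS data. None of it comes from the posting: the base path, 90-day window, and `dt=YYYY-MM-DD` partition layout are assumptions, and the script defaults to a dry run.

```python
#!/usr/bin/env python3
"""Hedged sketch: prune date-partitioned HDFS data past a retention window.

Assumptions (not from the posting): partitions live under BASE_PATH as
dt=YYYY-MM-DD directories, and the `hdfs` CLI is on PATH with valid auth.
"""
import subprocess
from datetime import datetime, timedelta, timezone

BASE_PATH = "/data/events"   # hypothetical data lake path
RETENTION_DAYS = 90          # hypothetical retention policy
DRY_RUN = True               # flip to False only after review


def list_partitions(base: str) -> list[str]:
    """Return partition directories under `base` via `hdfs dfs -ls`."""
    out = subprocess.run(
        ["hdfs", "dfs", "-ls", base],
        capture_output=True, text=True, check=True,
    ).stdout
    # The last whitespace-separated field of each listing line is the path.
    return [line.split()[-1] for line in out.splitlines() if "/dt=" in line]


def is_expired(path: str, cutoff: datetime) -> bool:
    """Parse dt=YYYY-MM-DD from the path and compare it to the cutoff."""
    date_str = path.rsplit("dt=", 1)[-1]
    try:
        part_date = datetime.strptime(date_str, "%Y-%m-%d")
    except ValueError:
        return False  # skip anything that doesn't match the assumed layout
    return part_date.replace(tzinfo=timezone.utc) < cutoff


def main() -> None:
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    for path in list_partitions(BASE_PATH):
        if not is_expired(path, cutoff):
            continue
        if DRY_RUN:
            print(f"would delete {path}")
        else:
            # Omitting -skipTrash keeps HDFS trash as a safety net.
            subprocess.run(["hdfs", "dfs", "-rm", "-r", path], check=True)


if __name__ == "__main__":
    main()
```

In practice a script like this would run from a scheduler, with the delete step gated behind the dry-run flag until the partition layout is confirmed.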
Skills
• Extensive experience in DevOps/SRE, focusing on big data platforms and cloud infrastructure.
• Proficient in designing and maintaining scalable cloud infrastructure using Kubernetes and Docker.
• Hands-on experience with data processing tools such as Apache Spark, dbt, Kafka, and Airflow (a minimal DAG sketch follows this list).
• Strong knowledge of Hadoop components such as Spark Streaming, HDFS, and Hive.
• Expertise in leveraging SCM and build tools such as Git, Jenkins, and Docker for CI/CD operations.
• Proficient in scripting languages such as Python, Bash, and Go for automation.
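As one concrete pairing of the tools above, here is a minimal Airflow DAG sketch that submits a Spark batch job on a daily schedule. The DAG id, application path, and job arguments are invented for illustration; it assumes Airflow 2.4+ with the apache-airflow-providers-apache-spark package installed and a configured `spark_default` connection.

```python
"""Hedged sketch: a daily Airflow DAG that submits a Spark batch job.

Assumptions (not from the posting): Airflow 2.4+, the
apache-airflow-providers-apache-spark package, and a `spark_default`
connection pointing at the cluster.
"""
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import (
    SparkSubmitOperator,
)

with DAG(
    dag_id="daily_events_batch",      # hypothetical name
    start_date=datetime(2025, 6, 1),
    schedule="@daily",                # `schedule=` is the Airflow 2.4+ form
    catchup=False,
) as dag:
    transform = SparkSubmitOperator(
        task_id="transform_events",
        conn_id="spark_default",
        application="/opt/jobs/transform_events.py",  # hypothetical PySpark job
        # {{ ds }} is Airflow's built-in YYYY-MM-DD execution-date template.
        application_args=["--date", "{{ ds }}"],
        conf={"spark.dynamicAllocation.enabled": "true"},
    )
```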
Preferred Skills
• Experience with MLOps platforms and practices.
• Familiarity with Snowflake, ClickHouse, and data lakehouse technologies.
• Knowledge of securing the Hadoop stack with Sentry, Ranger, and Kerberos KDC (see the keytab sketch after this list).
• Background in software development with experience in OOP languages such as Java and JavaScript.
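On the Kerberos point, a routine operational chore is keeping a service ticket fresh from a keytab. The sketch below is a hedged illustration only: the principal, keytab path, and renewal interval are placeholders, and it simply shells out to the standard MIT Kerberos tools.

```python
"""Hedged sketch: periodically refresh a Kerberos ticket from a keytab.

Assumptions (not from the posting): MIT Kerberos client tools (`kinit`,
`klist`) are installed, and the principal/keytab below are placeholders.
"""
import subprocess
import time

PRINCIPAL = "svc-bigdata@EXAMPLE.COM"        # hypothetical service principal
KEYTAB = "/etc/security/keytabs/svc.keytab"  # hypothetical keytab path
RENEW_SECONDS = 6 * 3600                     # re-kinit every 6 hours


def refresh_ticket() -> None:
    """Obtain a fresh TGT non-interactively from the keytab."""
    subprocess.run(["kinit", "-kt", KEYTAB, PRINCIPAL], check=True)
    subprocess.run(["klist"], check=True)  # print the resulting ticket cache


if __name__ == "__main__":
    while True:
        refresh_ticket()
        time.sleep(RENEW_SECONDS)
```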
We are committed to fostering an inclusive environment where diversity, equity, and inclusion are at the forefront of our company values. We welcome applicants from all backgrounds to apply and join our team.
Once you apply for this position, you may receive a phone call, SMS, or email at the time of application from our Virtual AI Recruiter, Alex, who will conduct an initial interview.