

Flipped.ai - Transforming Talent Acquisition with AI
Apache Spark – L1 Support
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Apache Spark – L1 Support Engineer in Austin, TX, offering $50/hr for a contract position. Key skills include strong experience in Apache Spark and Kubernetes, along with knowledge of Python and PySpark.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date
October 2, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Spark (Apache Spark) #PySpark #Python #S3 (Amazon Simple Storage Service) #Kubernetes #Deployment #Hadoop #Apache Spark
Role description
Position: Apache Spark – L1 Support
Location: Austin, TX (Onsite)
Interview Mode: Video
Relocation: Yes
Rate: $50/hr
Job Description:
We are looking for a professional with strong hands-on experience in Apache Spark and Kubernetes to join our client's team as an L1 Support Engineer. The ideal candidate should have a solid technical foundation in Spark and Kubernetes, working knowledge of Python and PySpark, and a theoretical understanding of Hadoop. This role involves working in a collaborative environment where adaptability, troubleshooting ability, and a willingness to learn are highly valued.
Key Responsibilities:
• Provide L1 support for Apache Spark in an environment heavily reliant on Spark and Kubernetes.
• Assist in onboarding and in supporting Spark deployments on Kubernetes, ensuring smooth operations and minimal downtime (a configuration sketch follows this list).
• Carry out support-related tasks, particularly around Spark workloads, deployment configurations, and environment stability.
• Troubleshoot Spark issues, including data skew and bad nodes on Kubernetes, and provide timely resolutions.
• Utilize experience in PySpark and Python for support-related tasks and contribute to improvements in deployment and execution.
• Work with the existing technology stack, which is built primarily on S3, while maintaining a theoretical understanding of Hadoop and a willingness to learn and expand in this area.
• Adapt quickly to new processes, technologies, and project requirements as the environment evolves.
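To make the deployment-support side of the role concrete, here is a minimal sketch of how a PySpark session might be pointed at a Kubernetes cluster and an S3-backed dataset. Every name in it (API server URL, container image, namespace, bucket, and paths) is a hypothetical placeholder, not a detail from this posting.

```python
# Minimal sketch: a PySpark session configured for Kubernetes and S3.
# All names below (API server URL, image, namespace, bucket, paths)
# are hypothetical placeholders, not details from this role.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("l1-support-health-check")
    # Point the driver at the Kubernetes API server (placeholder URL).
    .master("k8s://https://kubernetes.example.internal:6443")
    # Standard Spark-on-Kubernetes settings: executor image and namespace.
    .config("spark.kubernetes.container.image", "example/spark-py:3.5.1")
    .config("spark.kubernetes.namespace", "data-platform")
    .config("spark.executor.instances", "4")
    .getOrCreate()
)

# Read a Parquet dataset from S3 via the s3a connector (path is illustrative;
# assumes hadoop-aws and AWS credentials are already configured).
df = spark.read.parquet("s3a://example-bucket/events/")

# Quick sanity checks an L1 engineer might run when a workload misbehaves:
# row count, schema, and how many partitions back the DataFrame.
print("rows:", df.count())
df.printSchema()
print("partitions:", df.rdd.getNumPartitions())

spark.stop()
```

In practice, production workloads are usually launched through spark-submit in cluster mode; the session-level configuration above simply shows the kind of knobs an L1 engineer would be checking when a deployment misbehaves.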
Required Skills & Experience:
• Apache Spark + Kubernetes: Must have very good experience and proven ability to support and troubleshoot Spark on Kubernetes.
• PySpark / Python: Must have some hands-on experience and ability to apply it effectively in support tasks.
• Hadoop: Good to have theoretical knowledge, with readiness to learn further as needed.
• Support Tasks: Prior experience in deployment support, troubleshooting Spark-related issues, and handling cluster-level challenges.
• Ability to learn and adapt quickly to new processes and project requirements.
• Strong troubleshooting skills, particularly for Spark performance issues such as data skew and for Kubernetes node-related problems (one common skew mitigation is sketched after this list).
• Able to clearly describe recent professional contributions made with Python.
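Since the posting repeatedly calls out Spark data skew, below is a minimal PySpark sketch of one common mitigation, key salting, in which a random salt spreads a hot join key across more shuffle tasks. The table paths and column names are hypothetical placeholders.

```python
# Minimal sketch of key salting, one common mitigation for a skewed join.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skew-salting-sketch").getOrCreate()

facts = spark.read.parquet("s3a://example-bucket/facts/")  # large, skewed side
dims = spark.read.parquet("s3a://example-bucket/dims/")    # small side

SALT_BUCKETS = 16  # tuning knob: more buckets spread a hot key across more tasks

# Append a random salt to the join key on the large side, so rows sharing a
# hot key land in different shuffle partitions.
salted_facts = facts.withColumn(
    "salted_key",
    F.concat_ws("_", F.col("join_key"), (F.rand() * SALT_BUCKETS).cast("int")),
)

# Replicate the small side once per salt value so every salted key still
# finds its match.
salts = spark.range(SALT_BUCKETS).withColumnRenamed("id", "salt")
salted_dims = (
    dims.crossJoin(salts)
    .withColumn("salted_key", F.concat_ws("_", "join_key", "salt"))
    .drop("join_key", "salt")
)

result = salted_facts.join(salted_dims, "salted_key")
result.explain()  # skew typically shows up as a few long-running tasks

spark.stop()
```

On Spark 3.x, adaptive query execution (spark.sql.adaptive.skewJoin.enabled) can split skewed partitions automatically, so manual salting is typically a fallback for cases AQE does not catch.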
This position is well-suited for someone who is strong with Spark and Kubernetes, has a working knowledge of Python and PySpark, and can grow and adapt within a fast-paced technical environment.