

OpenKyber
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 12+ month contract in Phoenix, AZ, paying $60-62/hr on W2. Requires 5+ years of data engineering experience, proficiency in Kafka, Spark, Python, and Google Cloud services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
496
-
🗓️ - Date
March 3, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Phoenix, Arizona
-
🧠 - Skills detailed
#Azure #Agile #Data Science #Data Processing #AWS (Amazon Web Services) #NoSQL #PySpark #Storage #Python #Datasets #Data Governance #Spark SQL #Data Engineering #Scala #Data Architecture #Airflow #Public Cloud #Data Pipeline #Batch #Data Lake #Cloud #Programming #GCP (Google Cloud Platform) #Security #Spark (Apache Spark) #BigQuery #Data Lakehouse #Hadoop #Kafka (Apache Kafka) #Databases #SQL (Structured Query Language)
Role description
STRATEGIC STAFFING SOLUTIONS HAS AN OPENING! This is a contract opportunity with our company that must be worked on a W2 basis only; there is no C2C eligibility for this position. Visa sponsorship is available! The details are below. Beware of scams: OpenKyber never asks for money during its onboarding process.
Job Title: Senior Data Engineer
Contract Length: 12+ Month contract
On-Site Location: Phoenix, AZ 85027
Pay: $60-62/hr on W2
Ref# 245130
Position Overview
We are seeking a highly skilled Senior Data Engineer to design, build, and support scalable data platforms within a cloud-first environment. This role will focus on developing modern streaming and batch data pipelines, supporting lakehouse architectures, and enabling advanced analytics across the organization. The ideal candidate thrives in a collaborative team setting and brings strong experience with distributed data processing, real-time streaming technologies, and public cloud data services.
Key Responsibilities
Design, develop, and maintain scalable data pipelines using Spark, Kafka, and cloud-native technologies.
Build and support real-time data streaming solutions leveraging Kafka, Flink, and Spark Streaming.
Implement and optimize data lakehouse architectures to support enterprise analytics and data science initiatives.
Develop data workflows using Python, PySpark, SQL, and orchestration tools such as Airflow or Cloud Composer.
Partner with cross-functional teams to deliver reliable, high-quality data solutions in a collaborative Agile environment.
Manage large datasets across distributed systems including Hadoop and cloud storage platforms.
Optimize data processing performance, reliability, and scalability.
Work with NoSQL databases and diverse data formats to support evolving business requirements.
Ensure data governance, security, and best practices are followed across the data ecosystem.
Required Qualifications
5+ years of data engineering experience, including hands-on work with Hadoop and Google Cloud data solutions.
Proven experience building Spark-based processing frameworks and Kafka streaming pipelines.
2+ years of hands-on experience developing data flows using Kafka, Flink, and Spark Streaming.
3+ years of experience designing and implementing data lakehouse architectures.
Strong programming experience with Python, PySpark, and SQL.
Hands-on experience with Google Cloud Platform, including Cloud Storage, BigQuery, Dataproc, and Cloud Composer.
2+ years working with NoSQL databases such as columnar, graph, document, or key-value stores.
Experience working within highly collaborative engineering teams.
Preferred Qualifications
Public cloud certification such as Google Cloud Platform Professional Data Engineer, Azure Data Engineer, or AWS Data Analytics Specialty.
Experience supporting both batch and real-time analytics workloads.
Familiarity with modern data architecture patterns and distributed systems design.
For applications and inquiries, contact: hirings@openkyber.com




