

Techvy Corp
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Phoenix, Arizona, on a W2 contract with an unspecified pay rate. It requires 6+ years in big data, strong Java skills, and expertise in Spark, SQL, and cloud platforms (AWS/GCP).
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
October 25, 2025
Duration
Unknown
-
Location
On-site
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Phoenix, AZ
-
Skills detailed
#Scala #Hadoop #Storage #Programming #Big Data #Scrum #Pig #Cloud #Distributed Computing #Datasets #Java #Data Pipeline #Data Quality #Spark (Apache Spark) #Data Engineering #Mathematics #Data Processing #Computer Science #Data Ingestion #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Batch #Data Modeling #PySpark #AWS (Amazon Web Services) #Python #Agile #Data Science
Role description
Senior Data Engineer || W2 Opportunity || On Site || Phoenix - Arizona
We are looking for a Senior Data Engineer with strong Java development skills and a deep understanding of big data ecosystems. The ideal candidate will have extensive hands-on experience in building and optimizing data pipelines, large-scale processing frameworks, and distributed systems.
Key Responsibilities
• Design, build, and maintain large-scale data processing systems using Java, Spark, and SQL.
• Develop and optimize streaming and batch data pipelines (a minimal illustrative sketch follows this list).
• Work with Hadoop, Spark Streaming, DataFrames, and related frameworks to handle high-volume datasets.
• Collaborate with data scientists, architects, and product teams to ensure scalable and efficient data solutions.
• Perform data ingestion, transformation, and storage across cloud environments (AWS / GCP).
• Implement and maintain best practices for data quality, reliability, and performance optimization.
• Write clean, efficient, and reusable code in Java, Python, and/or Scala.
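To give a concrete sense of the kind of batch pipeline described above, here is a minimal sketch in Java using the Spark DataFrame API. The storage paths, event fields, and local master setting are illustrative assumptions only, not project specifics; an actual job would run on a cluster (YARN, EMR, or Dataproc) against project-defined sources.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class DailyEventCounts {
    public static void main(String[] args) {
        // Local session for illustration; a production job would be submitted to a cluster.
        SparkSession spark = SparkSession.builder()
                .appName("daily-event-counts")
                .master("local[*]")
                .getOrCreate();

        // Ingest: raw JSON events from cloud storage (hypothetical path and fields).
        Dataset<Row> raw = spark.read().json("s3a://example-bucket/raw/events/");

        // Transform: basic data-quality filter, then a daily aggregate per event type.
        Dataset<Row> daily = raw
                .filter(col("event_id").isNotNull())
                .withColumn("event_date", to_date(col("event_ts")))
                .groupBy(col("event_date"), col("event_type"))
                .agg(count("*").alias("event_count"));

        // Store: write partitioned Parquet back to cloud storage (hypothetical path).
        daily.write()
                .mode("overwrite")
                .partitionBy("event_date")
                .parquet("s3a://example-bucket/curated/daily_event_counts/");

        spark.stop();
    }
}
```

A streaming variant of the same pipeline would use readStream()/writeStream() with Spark Structured Streaming in place of the batch read/write, with the transformation logic staying largely the same.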
Required Skills & Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, Applied Mathematics, or related field (or equivalent experience).
• 6+ years of experience as a Big Data Engineer with proven project delivery.
• Strong Java programming experience.
• Proficiency in Spark Streaming, SQL, and Hadoop ecosystems.
• Expertise in DataFrames, PySpark, Scala, Hive, Pig, and MapReduce.
• Experience with AWS or GCP data platforms.
• Strong understanding of data modeling, ETL, and distributed computing.
• Hands-on experience working in Agile / Scrum environments.
• Python experience is a huge plus.






