Techvy Corp

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Phoenix, Arizona, offered as a W2 contract with an undisclosed pay rate. It requires 6+ years in big data, strong Java skills, and expertise in Spark, SQL, and cloud platforms (AWS/GCP).
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#Scala #Hadoop #Storage #Programming #Big Data #Scrum #Pig #Cloud #Distributed Computing #Datasets #Java #Data Pipeline #Data Quality #Spark (Apache Spark) #Data Engineering #Mathematics #Data Processing #Computer Science #Data Ingestion #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Batch #Data Modeling #PySpark #AWS (Amazon Web Services) #Python #Agile #Data Science
Role description
Senior Data Engineer || W2 Opportunity || On-Site || Phoenix, Arizona

We are looking for a Senior Data Engineer with strong Java development skills and a deep understanding of big data ecosystems. The ideal candidate will have extensive hands-on experience in building and optimizing data pipelines, large-scale processing frameworks, and distributed systems.

🔹 Key Responsibilities
• Design, build, and maintain large-scale data processing systems using Java, Spark, and SQL.
• Develop and optimize streaming and batch data pipelines.
• Work with Hadoop, Spark Streaming, DataFrames, and related frameworks to handle high-volume datasets.
• Collaborate with data scientists, architects, and product teams to ensure scalable and efficient data solutions.
• Perform data ingestion, transformation, and storage across cloud environments (AWS / GCP).
• Implement and maintain best practices for data quality, reliability, and performance optimization.
• Write clean, efficient, and reusable code in Java, Python, and/or Scala.

🔹 Required Skills & Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, Applied Mathematics, or a related field (or equivalent experience).
• 6+ years of experience as a Big Data Engineer with proven project delivery.
• Strong Java programming experience.
• Proficiency in Spark Streaming, SQL, and Hadoop ecosystems.
• Expertise in DataFrames, PySpark, Scala, Hive, Pig, and MapReduce.
• Experience with AWS or GCP data platforms.
• Strong understanding of data modeling, ETL, and distributed computing.
• Hands-on experience working in Agile / Scrum environments.
• Python experience is a huge plus.
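For candidates gauging fit, the batch-pipeline work described above typically resembles the following minimal Spark DataFrame job in Java. This is an illustrative sketch only; the class name, input path, and event schema are hypothetical and are not taken from this posting.

```java
import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class EventBatchJob {
    public static void main(String[] args) {
        // Local session for illustration; a production job would be spark-submitted to a cluster.
        SparkSession spark = SparkSession.builder()
                .appName("event-batch-job")
                .master("local[*]")
                .getOrCreate();

        // Hypothetical input: newline-delimited JSON events with user_id and amount fields.
        Dataset<Row> events = spark.read().json("data/events.json");

        // Simple data-quality filter and aggregation using the DataFrame API.
        Dataset<Row> totalsByUser = events
                .filter(col("user_id").isNotNull())
                .groupBy(col("user_id"))
                .sum("amount");

        // Write the result as Parquet for downstream consumers.
        totalsByUser.write().mode("overwrite").parquet("output/user_totals");

        spark.stop();
    }
}
```

A streaming variant of this kind of job would read from a streaming source instead of a static file, but the transformation logic keeps roughly the same shape.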