Chuwa America Corporation

Lead Big Data Engineer - Only USC or GC

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Big Data Engineer with 10+ years of experience in Hadoop, Apache Spark, Python, and Scala. It offers a remote W2 contract at $55.00 - $60.00 per hour and requires expertise in data pipelines and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
March 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Remote
-
🧠 - Skills detailed
#Distributed Computing #Data Analysis #Datasets #Linux #Pig #AWS (Amazon Web Services) #Cloud #GCP (Google Cloud Platform) #Data Modeling #Big Data #Python #SQL (Structured Query Language) #Scala #Data Pipeline #Data Transformations #Visualization #Hadoop #DevOps #Apache Spark #R #Data Processing #Tableau #Data Engineering #Programming #ETL (Extract, Transform, Load) #Unix #Documentation #Spark (Apache Spark) #Kafka (Apache Kafka) #Java
Role description
Job Title: Big Data Engineer
Experience: 10+ Years
Location: Remote

Job Summary
We are seeking an experienced Big Data Engineer with strong expertise in the Hadoop ecosystem, Apache Spark, Python, and Scala programming. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines and big data solutions. This role requires strong hands-on experience with distributed data processing frameworks, cloud platforms, and data engineering tools.

Key Responsibilities
- Design, develop, and maintain scalable big data pipelines and data processing systems.
- Work with large-scale data using Hadoop ecosystem tools such as Hive, Pig, and Oozie.
- Develop distributed data processing applications using Apache Spark and Scala.
- Build and optimize ETL pipelines for structured and unstructured data.
- Work with Spark RDD APIs for large-scale data transformations and analytics.
- Implement data streaming solutions using Kafka for real-time data processing.
- Collaborate with cross-functional teams, including data analysts, architects, and business stakeholders.
- Deploy and manage big data solutions on AWS or GCP cloud platforms.
- Ensure performance optimization, scalability, and reliability of data systems.
- Maintain documentation, follow best practices, and ensure high-quality code standards.

Required Skills
- 10+ years of experience in Big Data and Data Engineering.
- Strong hands-on experience with the Hadoop ecosystem (Hive, Pig, Oozie).
- Expertise in Apache Spark and Scala programming.
- Hands-on programming experience in Python and Core Java.
- Strong understanding of Spark RDD APIs and distributed computing.
- Experience with Kafka or other streaming frameworks.
- Solid understanding of data warehousing concepts and data modeling techniques.
- Strong proficiency in SQL and working with large datasets.
- Experience working in Linux/Unix environments.
- Experience with AWS or GCP cloud platforms.
- Knowledge of ETL design and development.

Good to Have
- Experience with data visualization and analytics tools such as Tableau or R.
- Experience with real-time data processing architectures.
- Exposure to CI/CD pipelines and DevOps practices.
- Experience with performance tuning and optimization in Spark/Hadoop environments.

Pay: $55.00 - $60.00 per hour

Application Question(s): Are you comfortable working on our W2?

Work Location: Remote