Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in New York, NY, requiring 8+ years of experience, strong Scala proficiency, and solid Apache Spark skills. It is W2 only, with a pay rate of "$X/hour", and focuses on cloud platforms and ETL tools.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 14, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
On-site
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Agile #Airflow #Kubernetes #HBase #Scala #AWS (Amazon Web Services) #Data Engineering #Batch #"ETL (Extract, Transform, Load)" #Spark (Apache Spark) #GCP (Google Cloud Platform) #Snowflake #Kafka (Apache Kafka) #Docker #Databricks #Delta Lake #Cloud #Version Control #Apache Spark #Hadoop #Programming #SQL (Structured Query Language) #Azure
Role description
Job Role: Data Engineer
Location: New York, NY (Onsite)
Experience: 8+ years
Work Mode: W2 only; no C2C
Visas: H4EAD, GC, and USC
Job Description:
• Strong proficiency in Scala programming.
• Solid hands-on experience with Apache Spark (batch and/or streaming).
• Familiarity with the Hadoop ecosystem, Hive, Kafka, or HBase.
• Experience with SQL and data transformation logic.
• Understanding of software engineering best practices (version control, CI/CD, testing).
• Experience with cloud platforms such as AWS, Azure, or GCP is a plus.
• Experience with Delta Lake, Databricks, or Snowflake.
• Familiarity with containerization and orchestration (Docker, Kubernetes, Airflow).
• Experience working in an Agile environment.
• Knowledge of data warehousing and ETL tools.