Epitria Consulting

Big Data Engineer - Contract to Hire - W2 Only

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Engineer on a contract-to-hire basis with a focus on Java/J2EE, Agile, and CI/CD. Key skills include proficiency in Python or Scala, Spark, AWS Data Lake services, and data modeling.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
April 29, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Wilmington, DE
-
🧠 - Skills detailed
#Deployment #Databricks #Data Lakehouse #Datasets #Hadoop #Scala #Data Engineering #JSON (JavaScript Object Notation) #Data Storage #Java #Data Processing #Storage #Agile #Data Lake #Spark (Apache Spark) #Python #Continuous Deployment #Airflow #Big Data #Databases #NoSQL #AWS (Amazon Web Services) #Programming #Cloud #Data Integrity
Role description
We are seeking a Big Data Engineer for one of our clients. The ideal candidate will have strong experience in Java/J2EE development, Agile delivery, and CI/CD practices, and a proven ability to design, build, test, deploy, and maintain secure, resilient applications.

Responsibilities:
• Design, develop, and maintain scalable, large-scale data processing pipelines and infrastructure on the cloud, following engineering standards, governance standards, and technology best practices.
• Develop and optimize data models for large-scale datasets, ensuring efficient storage, retrieval, and analytics while maintaining data integrity and quality.
• Collaborate with cross-functional teams to translate business requirements into scalable and effective data engineering solutions.
• Demonstrate a passion for innovation and continuous improvement in data engineering, proactively identifying opportunities to enhance data infrastructure, data processing, and analytics capabilities.

Required Skills & Qualifications:
• Strong analytical, problem-solving, and critical-thinking skills
• Proficiency in at least one programming language (Python preferred; otherwise Java or Scala)
• Proficiency in at least one distributed data processing framework (Spark or similar)
• Proficiency in at least one cloud data lakehouse platform (AWS Data Lake services or Databricks; alternatively Hadoop)
• Proficiency in at least one scheduling/orchestration tool (Airflow preferred; otherwise AWS Step Functions or similar)
• Proficiency with relational and NoSQL databases
• Proficiency in data structures, data serialization formats (JSON, Avro, Protobuf, or similar), and big-data storage formats (Parquet, Iceberg, or similar)
• Experience working in teams following Agile methodology
• Experience with test-driven development (TDD) or behavior-driven development (BDD) practices, as well as continuous integration and continuous deployment (CI/CD) tools