

Sr. Big Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Big Data Engineer on a W2 contract in Weehawken, NJ, requiring 10+ years of experience and a strong banking domain background. Key skills include Apache Spark, Azure Databricks, Terraform, and programming in Scala, Java, or Python.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 7, 2025
Project duration: Unknown
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Weehawken, NJ
Skills detailed: #Agile #Microsoft Azure #Data Quality #Automation #Databricks #Kubernetes #Programming #Terraform #Security #Data Governance #Apache Spark #Azure #Data Processing #Big Data #Observability #Docker #Scala #Data Engineering #Spark (Apache Spark) #Python #Deployment #Compliance #GitLab #Java #Data Pipeline #Azure Databricks #Infrastructure as Code (IaC) #Cloud
Role description
Sr. Big Data Engineer
Weehawken, NJ
Strong Banking Domain Experience Is Required
Local candidates to NJ/NY only.
Position type: W2 contract.
Our Challenge
• As a Big Data Engineer, the candidate will be instrumental in developing and optimizing solutions built on big data technologies such as Apache Spark and Azure Databricks.
• The candidate will create robust data pipelines, ensure high data quality, and collaborate closely with cross-functional teams to implement scalable cloud-native solutions on major cloud providers, predominantly Microsoft Azure.
The Role
Responsibilities
• Develop, optimize, and maintain big data pipelines using Apache Spark, Azure Databricks, and related tools.
• Write and deploy complex production systems in Scala, Java, or Python, ensuring high performance and reliability.
• Leverage Terraform for automating cloud infrastructure deployment on Azure or other major cloud providers.
• Design and implement CI/CD pipelines for data workflows using modern tools like GitLab, promoting automation and continuous improvement.
• Apply best practices for testing, instrumentation, observability, and alerting to maintain data pipeline health and performance.
• Contribute to system architecture and low-level design, ensuring modularity, scalability, and security.
• Understand and implement data models, data structures, and algorithms for efficient data processing.
• Use containerization technologies such as Docker and Kubernetes for development, build, and runtime environments.
• Work within Agile development teams, participating in planning, daily stand-ups, and iterative releases.
• Collaborate effectively in a global team, influencing key architectural decisions and sharing best practices.
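To illustrate the data-quality responsibility above (this sketch is not part of the posting), here is a minimal, framework-agnostic quality gate of the kind a Spark or Databricks batch job might run before publishing data. The record layout, column names, and the 1% null threshold are all hypothetical.

```python
# Hypothetical sketch of a batch data-quality gate; in a real Spark/Databricks
# pipeline the equivalent checks would run over a DataFrame, not a list of dicts.

def null_fraction(rows, column):
    """Fraction of records with a missing value in `column`."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def passes_quality_gate(rows, required_columns, max_null_fraction=0.01):
    """Reject the batch if any required column exceeds the null threshold."""
    return all(
        null_fraction(rows, col) <= max_null_fraction for col in required_columns
    )

# Illustrative batch: half the `notional` values are missing, so the gate fails.
batch = [
    {"trade_id": 1, "notional": 1_000_000.0},
    {"trade_id": 2, "notional": None},
]
print(passes_quality_gate(batch, ["trade_id", "notional"]))  # prints False
```

A check like this would typically sit at the end of a pipeline stage, with alerting wired to the failure path, in line with the observability practices the role calls for.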
Requirements
• 10+ years of experience is required.
• Proven experience developing big data solutions with Apache Spark and Azure Databricks.
• Hands-on experience with Terraform for infrastructure as code (IaC) on major cloud platforms, ideally Azure.
• Programming expertise in Scala, Java, or Python for complex production systems.
• Extensive Python experience, especially in data engineering contexts.
• Background in platform engineering roles on cloud platforms, with strong knowledge of Azure services.
• Practical knowledge of testing frameworks, instrumentation, observability, and alerting tools.
• Experience building and maintaining CI/CD pipelines with modern cloud-friendly systems like GitLab.
• Deep understanding of information modelling, data structures, and algorithms.
• Hands-on experience with containerization (Docker) and orchestration tools (Kubernetes).
• Strong understanding of technical architecture and low-level system design.
• Familiarity with Agile methodologies and best practices in software development.
Preferred, But Not Required
• Experience working with financial or risk management data.
• Knowledge of data governance, security standards, and compliance in cloud environments.
• Familiarity with additional cloud providers and multi-cloud deployment strategies.
• Contributions to open-source data projects or communities.