

Senior ADB Developer - Azure Databricks / Python / Spark Streaming
Featured Role | Apply direct with Data Freelance Hub
This role is a Senior ADB Developer for a contract of unspecified length, offering $62.22 - $65.27 per hour. Required skills include 10+ years in data engineering, Azure Databricks, Python, and Spark Streaming, with hybrid remote work in Pleasanton, CA.
Country
United States
Currency
$ USD
Day rate
520
Date discovered
July 10, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Pleasanton, CA 94566
Skills detailed
#Data Engineering #Data Extraction #Compliance #Schema Design #NoSQL #Synapse #ETL (Extract, Transform, Load) #Azure Cloud #Data Manipulation #Database Management #Databases #Data Processing #Azure Databricks #Batch #Data Quality #Scala #Spark (Apache Spark) #JSON (JavaScript Object Notation) #Programming #Business Analysis #Apache Kafka #Data Lake #NumPy #Snowflake #Java #MongoDB #Indexing #Automation #Big Data #Cloud #Data Conversion #Azure #Data Science #DevOps #Azure DevOps #JavaScript #Python #Libraries #Git #SQL Queries #Data Pipeline #AWS (Amazon Web Services) #Data Governance #Database Schema #SQL (Structured Query Language) #Databricks #Pandas #BigQuery #Apache Spark #Data Modeling #GitHub #Kafka (Apache Kafka) #Version Control #Security #Debugging #Data Integration
Role description
Job Summary
Job Description:
Must Have Skills:
Skill 1 - Yrs of Exp - Azure Databricks
Skill 2 - Yrs of Exp - Python
Skill 3 - Yrs of Exp - Spark Streaming
Good To Have Skills:
Skill 1 - Yrs of Exp - Java or Scala
Skill 2 - Yrs of Exp - CI/CD pipelines
Skill 3 - Yrs of Exp - GitHub workflows
About the Role:
We are looking for a highly skilled Senior Azure Databricks (ADB) Developer to join our Data Engineering team. This role involves developing large-scale batch and streaming data pipelines on Azure Cloud. The ideal candidate will have strong expertise in Python, Databricks Notebooks, Apache Spark (including Structured Streaming), and real-time integration with Kafka. You will work with both relational databases like DB2 and NoSQL systems such as MongoDB, focusing on performance optimization and scalable architecture.
Key Responsibilities:
Design and Develop: Create real-time and batch data pipelines using Azure Databricks, Apache Spark, and Structured Streaming.
Data Processing: Write efficient ETL scripts and automate workflows using Python.
Data Integration: Integrate with various data sources and destinations, including DB2, MongoDB, and other enterprise-grade data systems.
Performance Optimization: Tune Spark jobs for optimal performance and cost-effective compute usage on Azure.
Collaboration: Work with platform and architecture teams to ensure secure, scalable, and maintainable cloud data infrastructure.
CI/CD Support: Implement CI/CD for Databricks pipelines and notebooks using tools like GitHub and Azure DevOps.
Stakeholder Communication: Interface with product owners, data scientists, and business analysts to translate data requirements into production-ready pipelines.
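The streaming side of the responsibilities above typically takes the shape of a Structured Streaming job reading from Kafka. A minimal PySpark sketch, assuming a Databricks/Spark runtime with the Kafka connector available; the broker address, topic, schema, and table name are all placeholders for illustration:

```python
# Sketch only: requires a Spark cluster with the Kafka connector.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical event schema for illustration.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

# Read a Kafka topic as an unbounded streaming DataFrame.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "orders")                     # placeholder topic
       .load())

# Kafka delivers raw bytes; cast the value and parse it against the schema.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Append micro-batches to a Delta table; the checkpoint directory is what
# gives the job restartability and exactly-once micro-batch semantics.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/orders")
         .outputMode("append")
         .toTable("orders_stream"))
```

The checkpoint location is the key tuning point for restartability: deleting it resets the stream's progress, so in practice it lives on durable storage such as ADLS rather than `/tmp`.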
Required Skills:
10+ years of experience in data engineering
Python Proficiency:
Data Manipulation: Using libraries like Pandas and NumPy for data manipulation and analysis.
Data Processing: Writing efficient ETL scripts.
Automation: Automating repetitive tasks and workflows.
Debugging: Strong debugging skills to troubleshoot and optimize code.
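The Pandas/NumPy points above amount to a small extract-transform-aggregate loop. A minimal sketch, with invented input records standing in for a real source such as `pd.read_sql` or `pd.read_parquet`:

```python
import pandas as pd

# Extract: in practice this would come from a database or data lake file.
raw = pd.DataFrame({
    "order_id": ["A1", "A2", "A3", "A4"],
    "amount": ["10.50", "n/a", "7.25", "3.00"],
    "region": ["west", "west", "east", "east"],
})

# Transform: coerce unparseable amounts to NaN, then drop those rows.
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
clean = raw.dropna(subset=["amount"])

# Aggregate: total amount per region.
summary = clean.groupby("region", as_index=False)["amount"].sum()

# Load: in practice, write to a warehouse table; here we just inspect.
print(summary)
```

Using `errors="coerce"` instead of letting the cast raise is a common data-quality choice: bad records are quarantined (or here, dropped) rather than failing the whole batch.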
Database Management:
SQL: Advanced SQL skills for querying and managing relational databases.
NoSQL: Experience with NoSQL databases like MongoDB or Cassandra.
Data Warehousing: Knowledge of data warehousing solutions like Google BigQuery or Snowflake.
Big Data Technologies:
Kafka: Knowledge of data streaming platforms like Apache Kafka.
Version Control:
Git: Using version control systems for collaborative development.
Data Modeling:
Schema Design: Designing efficient and scalable database schemas.
Data Governance: Ensuring data quality, security, and compliance.
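The schema-design and indexing points above can be sketched with the stdlib `sqlite3` module; the table, columns, and index are hypothetical, and in this role the same ideas would apply to DB2 or a warehouse table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Constraints encode data-quality rules directly in the schema.
conn.execute("""
    CREATE TABLE orders (
        order_id    TEXT PRIMARY KEY,
        customer_id TEXT NOT NULL,
        amount      REAL NOT NULL CHECK (amount >= 0),
        created_at  TEXT NOT NULL
    )
""")

# Index the columns the access pattern filters on, not every column.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [("A1", "C1", 10.5, "2025-07-01"),
     ("A2", "C1", 7.25, "2025-07-02"),
     ("A3", "C2", 3.0, "2025-07-02")],
)

# A lookup by customer_id can now use the index instead of a full scan.
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer_id = ?", ("C1",)
).fetchone()[0]
```

`EXPLAIN QUERY PLAN` on the final query confirms the index is used, which is the same habit (check the plan, then index) that carries over to DB2 and MongoDB query tuning.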
Database Management:
DB2: Understanding of DB2 architecture, SQL queries, and database management.
MongoDB: Knowledge of MongoDB schema design, indexing, and query optimization.
Programming Skills:
Proficiency in languages such as Java, Python, or JavaScript to write scripts for data extraction and transformation.
Experience with BSON (Binary JSON) for data conversion.
Cloud Services:
Experience with cloud platforms like AWS or Azure for deploying and managing databases.
Preferred Skills:
Experience with Java or Scala in Spark streaming.
Familiarity with Azure services like Data Lake, Data Factory, Synapse, and Event Hubs.
Background in building data platforms in regulated or large-scale enterprise environments.
Job Type: Contract
Pay: $62.22 - $65.27 per hour
Expected hours: 40 per week
Work Location: Hybrid remote in Pleasanton, CA 94566