

Signature IT World Inc
Databricks Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Engineer in Boston, MA, on a contract basis. Key skills include Apache Spark, Databricks, and cloud services (Azure, AWS, GCP). Experience in ETL processes and data modeling is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Boston, MA
-
🧠 - Skills detailed
#GIT #Snowflake #Azure #GCP (Google Cloud Platform) #Apache Spark #Scala #Data Modeling #AWS (Amazon Web Services) #Delta Lake #PySpark #Azure Data Factory #Cloud #DevOps #AWS Glue #Spark (Apache Spark) #Version Control #Databricks #ETL (Extract, Transform, Load) #Data Lake #ADF (Azure Data Factory) #Datasets #SQL (Structured Query Language) #Data Science #BigQuery #Data Lakehouse #Data Pipeline
Role description
Databricks Engineer – Boston, MA – Contract
Core Responsibilities
• Data Pipeline Development: Building and optimizing ETL/ELT pipelines using Apache Spark on Databricks (see the sketch after this list).
• Data Lakehouse Management: Designing and maintaining scalable data lakehouse architectures.
• Integration: Connecting Databricks with cloud services (Azure, AWS, GCP) and external data sources.
• Performance Tuning: Optimizing Spark jobs for speed and cost efficiency.
• Collaboration: Working with data scientists, analysts, and business stakeholders to deliver usable datasets.
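As a concrete illustration of the pipeline work above, here is a minimal PySpark sketch of a batch ETL job on Databricks: read raw JSON, apply light cleansing, and append to a date-partitioned Delta table. The source path, table name, and column names (event_id, event_ts, event_type) are hypothetical, and on Databricks the spark session is already provided.

from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session already exists; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical source path and column names, for illustration only.
raw = spark.read.format("json").load("/mnt/raw/events/")

clean = (
    raw.filter(F.col("event_type").isNotNull())       # drop malformed records
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])                   # keep re-runs idempotent
)

# Append to a Delta table partitioned by date, a common lakehouse layout.
(
    clean.write.format("delta")
         .mode("append")
         .partitionBy("event_date")
         .saveAsTable("analytics.events")              # hypothetical table name
)

The same shape applies whether the job runs as a scheduled Databricks Job or interactively in a notebook; partitioning and deduplication choices like these are where the performance-tuning responsibility above usually starts.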
Key Skills
• Apache Spark (PySpark, Scala, or SQL)
• Databricks Platform (clusters, notebooks, Delta Lake; see the upsert sketch after this list)
• Cloud Services (Azure Data Factory, AWS Glue, GCP BigQuery)
• Data Modeling (star schema, snowflake schema, lakehouse concepts)
• Version Control & CI/CD (Git, DevOps pipelines)
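To ground the Delta Lake skill, here is a minimal sketch of a Delta upsert, the kind of pattern used to keep a fact table current from a staging feed. The path, table, and key names are hypothetical; the delta-spark Python API is assumed to be available, as it is on the Databricks runtime.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical staging data and target fact table.
updates = spark.read.format("delta").load("/mnt/staging/orders_updates")

fact = DeltaTable.forName(spark, "sales.fact_orders")
(
    fact.alias("t")
        .merge(updates.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()      # refresh rows that changed upstream
        .whenNotMatchedInsertAll()   # add rows seen for the first time
        .execute()
)

MERGE keeps the target table transactionally consistent, which is what makes Delta tables practical as the serving layer of a star-schema model.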