

Iris Software Inc.
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with expertise in Databricks, requiring long-term W2 contract work in Boston, MA. Key skills include Apache Spark, Delta Lake, ETL development, and experience with Azure/AWS/GCP.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
March 3, 2026
Duration
Unknown
Location
On-site
Contract
W2 Contractor
Security
Unknown
Location detailed
Boston, MA
Skills detailed
ETL (Extract, Transform, Load), Azure, Data Science, Delta Lake, Data Processing, AWS (Amazon Web Services), Storage, Data Analysis, Databricks, BI (Business Intelligence), Data Governance, Compliance, Data Engineering, Scala, Data Pipeline, Security, Data Quality, Cloud, GCP (Google Cloud Platform), Spark (Apache Spark), Apache Spark, Data Ingestion, Monitoring
Role description
Position Title: Data Engineer - Databricks (W2 Role)
Department: Data & Analytics
Location: Boston, MA
Employment Type: Long-term W2 Contract only
About the Role
We are seeking a skilled Data Engineer with strong experience in Databricks to design, build, and optimize scalable data pipelines. The ideal candidate will have hands-on expertise with Apache Spark, Delta Lake, ETL development, and cloud data platforms (Azure/AWS/GCP).
You will work closely with data analysts, data scientists, and business stakeholders to enable high-quality data solutions that support analytics and business intelligence initiatives.
Key Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines using Databricks and Apache Spark.
• Build and optimize Delta Lake architectures for high-performance data processing.
• Collaborate with cross-functional teams to understand requirements and translate them into technical solutions.
• Develop data ingestion frameworks from various structured and unstructured data sources.
• Implement data quality checks, data validation frameworks, and monitoring systems.
• Optimize performance of data pipelines for scalability and reliability.
• Work with cloud platforms (Azure/AWS/GCP) to manage storage, compute, and orchestration services.
• Ensure best practices around data governance, security, and compliance.
• Troubleshoot data pipeline issues and provide root-cause analysis.
• Document technical designs, workflows, and data models.
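The data-quality and validation responsibilities listed above could be sketched, framework-agnostically, as a minimal plain-Python example. The column names and rules here are hypothetical illustrations, not from the posting; in a Databricks pipeline, checks like these would typically run against Spark DataFrames or Delta tables rather than lists of dicts.

```python
# Minimal, hypothetical data-quality check sketch (plain Python, no Spark).
# Column names ("order_id", "amount") and rules are illustrative only.

def check_not_null(rows, column):
    """Return indexes of rows where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_in_range(rows, column, lo, hi):
    """Return indexes of rows where `column` is present but outside [lo, hi]."""
    return [
        i for i, row in enumerate(rows)
        if row.get(column) is not None and not (lo <= row[column] <= hi)
    ]

def run_quality_checks(rows):
    """Run all registered checks and report only the rules that failed."""
    violations = {
        "order_id_not_null": check_not_null(rows, "order_id"),
        "amount_in_range": check_in_range(rows, "amount", 0, 10_000),
    }
    return {name: idxs for name, idxs in violations.items() if idxs}

if __name__ == "__main__":
    sample = [
        {"order_id": 1, "amount": 250.0},
        {"order_id": None, "amount": 99.0},  # fails not-null check
        {"order_id": 3, "amount": -5.0},     # fails range check
    ]
    print(run_quality_checks(sample))
```

In a real pipeline the failing row indexes would feed a monitoring system or quarantine table rather than being printed.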






