Iris Software Inc.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with expertise in Databricks, requiring long-term W2 contract work in Boston, MA. Key skills include Apache Spark, Delta Lake, ETL development, and experience with Azure/AWS/GCP.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
Unknown
-
πŸ—“οΈ - Date
March 3, 2026
πŸ•’ - Duration
Unknown
-
🏝️ - Location
On-site
-
πŸ“„ - Contract
W2 Contractor
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Boston, MA
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Azure #Data Science #Delta Lake #Data Processing #AWS (Amazon Web Services) #Storage #Data Analysis #Databricks #BI (Business Intelligence) #Data Governance #Compliance #Data Engineering #Scala #Data Pipeline #Security #Data Quality #Cloud #GCP (Google Cloud Platform) #Spark (Apache Spark) #Apache Spark #Data Ingestion #Monitoring
Role description
Position Title: Data Engineer – Databricks (W2 Role)
Department: Data & Analytics
Location: Boston, MA
Employment Type: Long-term W2 Contract only

About the Role
We are seeking a skilled Data Engineer with strong experience in Databricks to design, build, and optimize scalable data pipelines. The ideal candidate will have hands-on expertise with Apache Spark, Delta Lake, ETL development, and cloud data platforms (Azure/AWS/GCP). You will work closely with data analysts, data scientists, and business stakeholders to enable high-quality data solutions that support analytics and business intelligence initiatives.

Key Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines using Databricks and Apache Spark.
• Build and optimize Delta Lake architectures for high-performance data processing.
• Collaborate with cross-functional teams to understand requirements and translate them into technical solutions.
• Develop data ingestion frameworks from various structured and unstructured data sources.
• Implement data quality checks, data validation frameworks, and monitoring systems.
• Optimize performance of data pipelines for scalability and reliability.
• Work with cloud platforms (Azure/AWS/GCP) to manage storage, compute, and orchestration services.
• Ensure best practices around data governance, security, and compliance.
• Troubleshoot data pipeline issues and provide root-cause analysis.
• Document technical designs, workflows, and data models.
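To give candidates a flavor of the "data quality checks and validation frameworks" responsibility above, here is a minimal, illustrative sketch. It uses a plain list-of-dicts representation and hypothetical rule names; in an actual Databricks pipeline the same rules would typically be expressed against Spark DataFrames or Delta Live Tables expectations.

```python
# Illustrative row-level data quality check (assumption: rows arrive as dicts;
# field names and thresholds below are hypothetical examples, not from the role).

def validate_rows(rows, required_fields, numeric_ranges):
    """Split rows into valid and rejected, recording a reason per rejection."""
    valid, rejected = [], []
    for row in rows:
        errors = []
        # Required-field check: reject rows with missing or empty values.
        for field in required_fields:
            if row.get(field) in (None, ""):
                errors.append(f"missing {field}")
        # Range check: reject numeric values outside the allowed bounds.
        for field, (low, high) in numeric_ranges.items():
            value = row.get(field)
            if value is not None and not (low <= value <= high):
                errors.append(f"{field} out of range")
        (rejected if errors else valid).append({**row, "_errors": errors})
    return valid, rejected

valid, rejected = validate_rows(
    rows=[
        {"id": 1, "amount": 25.0},
        {"id": None, "amount": 25.0},
        {"id": 3, "amount": -5.0},
    ],
    required_fields=["id"],
    numeric_ranges={"amount": (0, 1_000_000)},
)
```

The same quarantine pattern (valid rows flow on, rejected rows are persisted with reasons for root-cause analysis) scales naturally to Spark, where the checks become column expressions rather than per-row Python.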