

Arctiq
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 3-month contract at 25 hours per week, focusing on Databricks, ETL/ELT processes, and AWS. Key skills include PySpark, advanced SQL, and Terraform. Extensive Databricks experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 4, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Delta Lake #EC2 #AWS (Amazon Web Services) #Databricks #Data Lineage #Data Pipeline #Data Engineering #SQL (Structured Query Language) #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Terraform #Data Lake #ETL (Extract, Transform, Load) #Security #Data Lakehouse #Data Ingestion #PySpark #Scala #Infrastructure as Code (IaC) #Metadata #Cloud
Role description
Company Overview
Arctiq is a leader in professional IT services and managed services across three core Centers of Excellence: Enterprise Security, Modern Infrastructure and Platform Engineering. Renowned for our ability to architect intelligence, we connect, protect, and transform organizations, empowering them to thrive in today's digital landscape. Arctiq builds on decades of industry expertise and a customer-centric ethos to deliver exceptional value to clients across diverse industries.
Position Overview
We are looking for a Senior Data Engineer to lead the development of scalable data pipelines within the Databricks ecosystem. You will be responsible for architecting robust ETL/ELT processes using a "configuration-as-code" approach, ensuring our data lakehouse is governed, performant, and production-ready.
This is a 3-month contract opportunity at 25 hours per week.
Responsibilities
• Pipeline Architecture: Design and implement declarative data pipelines using Lakeflow and Databricks Asset Bundles (DABs) to ensure seamless CI/CD.
• Data Ingestion: Build efficient, scalable ingestion patterns using Auto Loader and Change Data Capture (CDC) to handle high-volume data streams (see the sketch after this list).
• Governance & Security: Manage metadata, lineage, and access control through Unity Catalog.
• Orchestration: Develop and maintain complex workflows using Databricks Jobs and orchestration tools.
• Infrastructure as Code: Use Terraform to manage AWS resources (S3, EC2) and Databricks workspaces.
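For context on the ingestion pattern referenced above, here is a minimal PySpark sketch of incremental file ingestion with Databricks Auto Loader, landing data in a Delta table governed by Unity Catalog. The bucket paths, schema and checkpoint locations, and the main.bronze.orders table name are illustrative assumptions, not project specifics.

```python
# Minimal Auto Loader ingestion sketch; all paths and table names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Incrementally discover and read new JSON files landing in a raw S3 prefix.
raw_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://example-bucket/_schemas/orders")
    .load("s3://example-bucket/raw/orders/")
)

# Stamp ingestion metadata and write to a Unity Catalog-managed Delta table.
(
    raw_stream.withColumn("_ingested_at", F.current_timestamp())
    .writeStream
    .option("checkpointLocation", "s3://example-bucket/_checkpoints/orders")
    .trigger(availableNow=True)  # process newly available files, then stop (incremental batch)
    .toTable("main.bronze.orders")
)
```

In practice a pipeline like this would be packaged in a Databricks Asset Bundle and scheduled as a Databricks Job, in line with the pipeline architecture and orchestration responsibilities above.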
Qualifications
• Expertise: Deep mastery of PySpark and advanced SQL.
• Platform: Extensive experience in the Databricks environment (Workflows, Delta Lake).
• Cloud: Familiarity with AWS infrastructure and cloud-native data patterns.
Arctiq is an equal opportunity employer. If you need any accommodations or adjustments throughout the interview process and beyond, please let us know. We celebrate our inclusive work environment and welcome members of all backgrounds and perspectives to apply.
We thank you for your interest in joining the Arctiq team! While we welcome all applicants, only those who are selected for an interview will be contacted.