The Brixton Group
Sr. Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer on a 12+ month contract, 100% remote (EST hours). Key skills include Databricks, Apache Spark, SQL, and cloud experience (Azure preferred). Retail industry experience is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
May 12, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Salisbury, NC
🧠 - Skills detailed
#Monitoring #Data Modeling #Datasets #Data Governance #Data Processing #Git #Apache Spark #Security #Cloud #GCP (Google Cloud Platform) #Spark (Apache Spark) #Data Engineering #Azure #Databricks #Delta Lake #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Data Architecture #Data Quality #ETL (Extract, Transform, Load) #Scala #PySpark #SQL (Structured Query Language)
Role description
Duration: 12+ months
Location: 100% remote (EST hours)
Responsibilities
• Design, build, and optimize scalable ETL/ELT pipelines using Databricks and Apache Spark.
• Develop high-performance data solutions on cloud platforms, primarily Azure.
• Ensure data quality, reliability, scalability, and performance across data workflows.
• Collaborate with IT and business stakeholders to deliver curated and analytics-ready datasets.
• Automate and orchestrate workflows using Databricks Jobs, CI/CD pipelines, and related tools.
• Implement best practices around data governance, monitoring, and platform security.
Qualifications
• Strong hands-on experience with Databricks, Apache Spark (PySpark and/or Scala), SQL, and Kafka.
• Experience building data solutions in cloud environments such as Azure, AWS, or GCP.
• Knowledge of Delta Lake, CDC, distributed data processing, and data modeling concepts.
• Familiarity with Git, CI/CD pipelines, and workflow orchestration tools.
• Solid understanding of data architecture, performance tuning, and optimization techniques.
• Retail industry experience is a plus.
26-004425