

BrickRed Systems
Senior Data Engineer (Azure + Databricks Lakehouse)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Azure + Databricks Lakehouse) with a contract length of "Unknown" and a pay rate of "Unknown." Key skills include Azure Data Factory, PySpark, and experience with the Databricks Lakehouse; relevant certifications are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
352
🗓️ - Date
May 8, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Bellevue, WA
🧠 - Skills detailed
#Storage #ETL (Extract, Transform, Load) #Data Architecture #Data Processing #PostgreSQL #Data Storage #Code Reviews #Data Lake #Compliance #Cloud #ML (Machine Learning) #Agile #Kanban #Kafka (Apache Kafka) #Deployment #Leadership #Spark (Apache Spark) #SQL (Structured Query Language) #Stories #Distributed Computing #PySpark #Scrum #Data Pipeline #Scala #Azure #Automation #Documentation #Indexing #ADF (Azure Data Factory) #Consulting #AI (Artificial Intelligence) #Azure DevOps #DevOps #Data Engineering #Azure Data Factory #Databricks #Azure cloud
Role description
We are seeking a highly skilled Senior Data Engineer to join a high-performing data engineering team focused on building scalable data pipelines, integrations, and enterprise-grade data products that support analytics and operational use cases.
This role requires deep expertise in Azure cloud data engineering within a Databricks Lakehouse environment. The ideal candidate will collaborate with engineers, architects, analysts, and product managers to design and implement robust, scalable, and high-performance data solutions.
You will work with minimal supervision, exercise strong technical judgment, and proactively recommend and implement solutions aligned with business and technology goals.
Key Responsibilities
Data Engineering & Technical Delivery
• Design, develop, and maintain scalable data pipelines and ETL/ELT processes using PySpark and SparkSQL (see the batch sketch after this list)
• Build and manage orchestration workflows using Azure Data Factory
• Work with Azure Data Lake and cloud-based storage systems for large-scale data processing
• Implement streaming solutions using Kafka and/or Azure Event Hub (see the streaming sketch after this list)
• Optimize data pipelines for performance, scalability, reliability, and cost efficiency
• Apply best practices for data partitioning, indexing, and storage formats such as Parquet
• Analyze DAGs and system performance to identify bottlenecks and improve efficiency
• Implement and maintain robust CI/CD pipelines using Azure DevOps
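For illustration, here is a minimal PySpark batch-ETL sketch of the kind of work described above. Everything concrete in it (storage paths, column names, the partition key) is a hypothetical placeholder, not a detail of this engagement:

```python
# Minimal PySpark batch ETL: extract raw CSV, transform, load partitioned Parquet.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw files landed in the data lake, e.g. by an ADF copy activity.
raw = (spark.read.option("header", True)
            .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Transform: cast types, derive the partition column, drop unusable rows.
orders = (raw.withColumn("amount", F.col("amount").cast("double"))
             .withColumn("order_date", F.to_date("order_ts"))
             .filter(F.col("amount").isNotNull()))

# Inspect the physical plan (the DAG) when chasing bottlenecks.
orders.explain()

# Load: date-partitioned Parquet so downstream readers can prune partitions.
(orders.write.mode("overwrite")
       .partitionBy("order_date")
       .parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```

The same transformations could equally be expressed in SparkSQL against a temp view; partitioning the Parquet output by a date column is what lets downstream queries skip irrelevant partitions instead of scanning the full dataset.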
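And a hedged Structured Streaming sketch: Azure Event Hubs exposes a Kafka-compatible endpoint, so Spark's Kafka source can consume from it. The namespace, topic, and checkpoint path below are illustrative, and the connection string would normally come from a secret store such as Key Vault:

```python
# Structured Streaming from Azure Event Hubs via its Kafka-compatible endpoint.
# Namespace, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

conn = "<event-hubs-connection-string>"  # assumption: fetched from Key Vault in practice
jaas = (
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
    f'username="$ConnectionString" password="{conn}";'
)

stream = (spark.readStream.format("kafka")
               .option("kafka.bootstrap.servers", "example-ns.servicebus.windows.net:9093")
               .option("kafka.security.protocol", "SASL_SSL")
               .option("kafka.sasl.mechanism", "PLAIN")
               .option("kafka.sasl.jaas.config", jaas)
               .option("subscribe", "orders-topic")
               .load())

# Kafka delivers raw bytes; cast the payload before downstream parsing.
events = stream.select(F.col("value").cast("string").alias("body"))

# Sink to the lake with a checkpoint so the stream resumes where it left off.
query = (events.writeStream.format("parquet")
               .option("path", "abfss://curated@examplelake.dfs.core.windows.net/events/")
               .option("checkpointLocation", "abfss://chk@examplelake.dfs.core.windows.net/events/")
               .start())
query.awaitTermination()
```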
System Design & Architecture
• Contribute to data architecture design including HLDs, LLDs, and data models
• Understand system interactions, dependencies, and cross-platform data flows
• Build end-to-end data solutions across ingestion, transformation, and consumption layers
• Apply distributed computing concepts such as fault tolerance, idempotency, and scalability (see the idempotent-write sketch after this list)
• Identify opportunities to automate and optimize existing data processes
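One concrete reading of "idempotency" in this context: a daily load that can be rerun or backfilled without duplicating rows. A minimal sketch, assuming Delta Lake (the default table format in a Databricks Lakehouse); the table paths and run date are illustrative:

```python
# Idempotent daily load: reruns for the same date replace that date's slice
# instead of appending duplicates. Assumes Delta Lake is available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("idempotent-load").getOrCreate()
run_date = "2026-05-08"  # in ADF this would arrive as a pipeline parameter

daily = (spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/")
              .filter(F.col("order_date") == run_date))

# replaceWhere atomically swaps only the matching slice, so the job can be
# retried or backfilled without creating duplicate rows.
(daily.write.format("delta")
      .mode("overwrite")
      .option("replaceWhere", f"order_date = '{run_date}'")
      .save("abfss://gold@examplelake.dfs.core.windows.net/orders_daily/"))
```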
Cross-Functional Collaboration
• Partner with engineering, analytics, and product teams to deliver scalable technical solutions
• Translate business requirements into technical designs and implementation strategies
• Lead technical discussions and contribute to sprint planning and solution design
• Mentor junior engineers and support overall team development
Code Quality, Testing & Documentation
• Write clean, maintainable, and efficient code aligned with engineering standards
• Conduct code reviews and ensure adherence to best practices
• Develop and review unit tests and test plans (see the test sketch after this list)
• Maintain technical documentation including architecture diagrams and process documentation
• Perform root cause analysis (RCA) and implement quality improvements
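A minimal sketch of what a unit test for a Spark transformation might look like, using pytest with a local Spark session; the transform and its columns are invented for illustration. Tests like this are the kind of check an Azure DevOps CI pipeline would run on every commit:

```python
# Unit-test sketch for a small PySpark transformation using pytest.
import pytest
from pyspark.sql import SparkSession, functions as F

def add_net_amount(df):
    """Example transform under test: net = amount minus discount."""
    return df.withColumn("net_amount", F.col("amount") - F.col("discount"))

@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps tests fast and self-contained.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_net_amount(spark):
    df = spark.createDataFrame([(100.0, 10.0)], ["amount", "discount"])
    result = add_net_amount(df).collect()[0]
    assert result["net_amount"] == 90.0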
Project & Delivery Management
• Deliver assigned modules and user stories within timelines
• Support effort estimation, sprint planning, and release management activities
• Monitor delivery progress and ensure compliance with engineering standards
• Participate in deployment and production support processes
Innovation & Continuous Improvement
• Design and implement modern data engineering solutions and frameworks
• Evaluate emerging technologies and explore AI/ML and Agentic AI use cases
• Continuously improve systems for performance, scalability, and maintainability
• Operate effectively in fast-paced and evolving environments
Communication & Leadership
• Create clear technical documentation and presentations for stakeholders
• Communicate architecture decisions, implementation strategies, and technical processes
• Mentor engineers and contribute to knowledge-sharing initiatives
• Collaborate with stakeholders to clarify requirements and present solutions
Required Skills & Qualifications
Technical Skills
• Strong hands-on experience with the Azure Data Engineering ecosystem, including:
• Azure Data Factory
• Azure Data Lake
• Azure DevOps (CI/CD)
• Proficiency in:
• SQL (T-SQL, PostgreSQL)
• PySpark
• SparkSQL (see the SparkSQL sketch after this list)
• Experience with:
• Databricks Lakehouse architecture
• Kafka and/or Azure Event Hub
• Parquet and modern data storage formats
• Strong understanding of:
• Data partitioning and indexing
• Distributed computing principles
• Performance tuning and optimization of data pipelines
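To make the SparkSQL and partition-pruning expectations concrete, a short sketch; the view name, path, and columns are hypothetical:

```python
# SparkSQL over a temp view backed by date-partitioned Parquet. Filtering on
# the partition column lets Spark prune partitions rather than scan everything.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-demo").getOrCreate()

spark.read.parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
).createOrReplaceTempView("orders")

daily_totals = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders
    WHERE order_date >= '2026-05-01'   -- partition filter enables pruning
    GROUP BY order_date
    ORDER BY order_date
""")

daily_totals.explain()  # the scan node should list PartitionFilters
daily_totals.show()
```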
Professional Skills
• Strong analytical and problem-solving abilities
• Ability to work independently with minimal supervision
• Excellent communication and documentation skills
• Experience working in Agile environments (Scrum/Kanban)
• Ability to manage multiple priorities in fast-paced environments
Preferred Qualifications
• Experience designing end-to-end data platforms or lakehouse architectures
• Exposure to AI/ML or Agentic AI applications
• Prior experience mentoring or leading engineering teams
• Relevant Azure or Data Engineering certifications
Performance Expectations
• Deliver high-quality, scalable, and maintainable solutions
• Adhere to coding standards and engineering best practices
• Reduce defects and improve system performance
• Contribute to team knowledge sharing and continuous improvement initiatives
About Brickred Systems:
Brickred Systems is a global provider of next-generation technology, consulting, and business process services. We enable clients to navigate their digital transformation, delivering a range of consulting services across multiple industries around the world. Our practices employ highly skilled and experienced individuals with a client-centric passion for innovation and delivery excellence.
With ISO 27001 and ISO 9001 certifications and over a decade of experience managing the systems of global enterprises, we harness cognitive computing, hyper-automation, robotics, cloud, analytics, and emerging technologies to help our clients adapt to the digital world and succeed in it. Our always-on learning agenda drives our clients' continuous improvement by building and transferring digital skills, expertise, and ideas from our innovation ecosystem.