

Senior Principal Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Principal Data Engineer, requiring 15+ years of experience in data engineering and 5+ years of hands-on work with AWS/Azure solutions. Contract length and pay rate are unspecified. Candidates must be local to Reston, VA, or Plano, TX.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
May 21, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Plano, TX
🧠 - Skills detailed
#Microsoft Power BI #PySpark #AWS S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Data Architecture #Azure #Cloud #Data Ingestion #Data Science #Data Engineering #ETL (Extract, Transform, Load) #Apache Spark #GDPR (General Data Protection Regulation) #Delta Lake #Databricks #Redshift #Big Data #Data Processing #AI (Artificial Intelligence) #Compliance #Data Privacy #Scala #BI (Business Intelligence) #Data Governance #Apache Airflow #Data Lake #Migration
Role description
JustinBradley’s client, a leading source of mortgage financing, is seeking a highly skilled Senior Principal Data Engineer with deep expertise in data engineering and in designing and implementing scalable, cloud-native data platforms on AWS and Azure. This individual will play a critical role in building robust, modern data architectures and in driving real-time analytics, AI-driven insights, and data governance at scale.
Candidates must be local to Reston, VA, or Plano, TX, to meet the client's on-site requirement.
Responsibilities:
• Architect and develop modern, cloud-native data platforms on AWS and Azure, ensuring scalability, reliability, and performance.
• Design and implement advanced data architectures including Data Lake, Delta Lake, Lakehouse, OneLake, and Data Mesh to support real-time analytics and AI initiatives.
• Lead the development and optimization of complex ETL/ELT pipelines, including Change Data Capture (CDC) mechanisms for high-velocity data ingestion and transformation (a PySpark/Delta Lake CDC sketch follows this list).
• Integrate and manage enterprise data using tools like Databricks, Apache Spark, PySpark, Apache Airflow, and Apache Flink.
• Leverage AWS services such as S3, Redshift, Glue, and EMR, as well as Azure Data Lake, to enable secure, high-throughput data processing and analytics.
• Build and maintain self-service BI ecosystems using Power BI, promoting business agility and data democratization.
• Ensure enterprise-level data governance, quality, and compliance across data platforms and pipelines.
• Collaborate closely with data scientists, analysts, architects, and business stakeholders to ensure that data solutions meet strategic objectives.
• Stay abreast of emerging trends in big data, cloud computing, and analytics to continuously improve architecture and tools.
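To make the CDC responsibility above concrete, here is a minimal, illustrative PySpark sketch that applies a change feed to a Delta Lake table with a single MERGE. It is not taken from the client's codebase: the /lake/... paths, the loan_id key, and the op/ts change-tracking columns are hypothetical placeholders, and it assumes a Spark environment with the delta-spark package installed.
```python
# Illustrative CDC merge into a Delta Lake table with PySpark.
# All names are hypothetical: the /lake/... paths, the loan_id key, and the
# op ("I"/"U"/"D") and ts change-tracking columns are placeholders, not
# details from the job description. Assumes the delta-spark package.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F
from delta.tables import DeltaTable

spark = (
    SparkSession.builder
    .appName("cdc-merge-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Raw CDC events: one row per change, with an operation flag and timestamp.
changes = spark.read.format("delta").load("/lake/bronze/loans_cdc")

# Keep only the newest event per key so the merge is deterministic even when
# a batch carries several changes for the same record.
latest = (
    changes
    .withColumn("rn", F.row_number().over(
        Window.partitionBy("loan_id").orderBy(F.col("ts").desc())))
    .filter("rn = 1")
    .drop("rn")
)

# Apply inserts, updates, and deletes in one atomic MERGE.
target = DeltaTable.forPath(spark, "/lake/silver/loans")
(
    target.alias("t")
    .merge(latest.alias("s"), "t.loan_id = s.loan_id")
    .whenMatchedDelete(condition="s.op = 'D'")
    .whenMatchedUpdateAll(condition="s.op != 'D'")
    .whenNotMatchedInsertAll(condition="s.op != 'D'")
    .execute()
)
```
Deduplicating to the latest event per key before merging is what keeps the operation deterministic when a single batch carries several changes for the same record.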
Requirements:
• 15+ years of experience in data engineering, architecture, and platform development.
• 5+ years of hands-on experience designing and deploying scalable, cloud-native solutions on AWS and/or Azure.
• Proven expertise in modern data architectures: Data Lake, Delta Lake, Lakehouse, OneLake, Data Mesh.
• Strong experience with Databricks, Apache Spark, PySpark, Airflow, Flink, and similar data tools (see the orchestration sketch after this list).
• Deep knowledge of ETL/ELT, data ingestion, transformation, and Change Data Capture (CDC) strategies.
• Hands-on experience with AWS S3, Redshift, Glue, EMR, Azure Data Lake, and Power BI.
• Exceptional problem-solving skills and ability to work independently in a fast-paced environment.
• Experience leading cross-functional data engineering teams or enterprise-scale migration projects.
• Certifications in AWS, Azure, or Databricks are a strong plus.
• Familiarity with data privacy regulations and compliance (e.g., GDPR, HIPAA).
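For the orchestration experience listed above, a skeletal Apache Airflow DAG is sketched below. The DAG id, schedule, and task bodies are hypothetical stand-ins (the posting names Airflow but specifies no pipelines), and the `schedule` argument assumes Airflow 2.4 or later.
```python
# Illustrative Apache Airflow DAG: a daily ingest-then-transform pipeline.
# The DAG id, schedule, and task bodies are hypothetical placeholders;
# assumes Airflow 2.4+ (which takes `schedule` rather than the older
# `schedule_interval` argument).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw_files():
    # Placeholder: land source extracts in cloud storage (e.g., S3 or ADLS).
    pass


def run_transformations():
    # Placeholder: trigger a Spark/Databricks job that builds curated tables.
    pass


with DAG(
    dag_id="daily_lake_refresh",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="ingest_raw_files", python_callable=ingest_raw_files)
    transform = PythonOperator(
        task_id="run_transformations", python_callable=run_transformations)

    # Transformation runs only after ingestion succeeds.
    ingest >> transform
```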
JustinBradley is an EO employer - Veterans/Disabled and other protected categories.