

Brooksource
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer; the contract length and pay rate are unspecified, and the position is remote. Key skills required include Python, PySpark, Azure Databricks, and experience in regulated industries.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
May 8, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Atlanta Metropolitan Area
-
Skills detailed
#Delta Lake #Data Lake #Cloud #GitHub #Spark (Apache Spark) #Azure Databricks #PySpark #Azure #Automation #Documentation #Data Quality #API (Application Programming Interface) #Python #Logic Apps #AI (Artificial Intelligence) #Azure DevOps #DevOps #Data Engineering #Data Enrichment #Databricks #Metadata #Strategy
Role description
Seeking a Cloud Data & AI Platform Engineer to support our client, Southern Nuclear, in designing, building, and operating advanced data and AI orchestration capabilities within their Azure/Databricks-based Lakehouse environment.
This is a hands-on engineering role focused on building reliable, governed, and auditable automation of data and analytics workflows using Azure Databricks and related Azure services. The ideal candidate will have strong experience in Python, PySpark, Databricks, and AI-enabled workflow orchestration within highly regulated enterprise environments.
What You'll Be Doing:
• Design and build reusable orchestration frameworks in Python for multi-step analytics, data quality checks, and AI-assisted workflows
• Develop controlled AI-enabled components supporting data validation, metadata enrichment, diagnostics, and automation
• Build and support orchestration and AI workloads within Azure Databricks using Delta Lake and the Medallion Architecture (Bronze / Silver / Gold)
• Manage Databricks Workflows, Jobs, and Unity Catalog for governance and access control
• Design secure API integrations with internal applications and approved external systems
• Optimize Spark workloads, orchestration performance, and cloud cost efficiency
• Support CI/CD pipelines using GitHub Actions or Azure DevOps
• Contribute to architecture decisions, technical standards, documentation, and long-term platform strategy
What We're Looking For:
• 5+ years of experience in software engineering, cloud engineering, or data engineering
• Advanced Python experience, including object-oriented design and event-driven architecture
• Strong experience with PySpark, Delta Lake, and enterprise data lake architectures
• Hands-on experience with Azure Databricks and related Azure services
• Experience building AI-assisted or LLM-enabled workflows using structured orchestration patterns
• Experience with Azure Functions, Logic Apps, and/or Azure container services
• CI/CD experience using GitHub Actions or Azure DevOps
• Strong understanding of secure development practices in regulated environments
Preferred Qualifications:
• Experience in utilities, energy, nuclear, or other highly regulated industries
• Experience handling sensitive operational, telemetry, or regulatory data
• Familiarity with enterprise governance and auditability requirements






