

ConfigUSA
Looking for a Databricks Data Engineer - Open to 100% On-site: St. Louis, Dallas/Plano - US Citizen or Green Card Only
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer, requiring 3-5 years of experience, strong Python and PySpark skills, and expertise in Azure services. The contract is on-site in St. Louis or Dallas/Plano, open to US citizens or Green Card holders, with competitive pay.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 23, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Data Governance #SQL (Structured Query Language) #Spark (Apache Spark) #PySpark #Security #Data Access #Data Engineering #Deployment #Synapse #Data Processing #AutoScaling #Programming #DevOps #Data Pipeline #Agile #Azure Active Directory #Azure #Vault #GitLab #ETL (Extract, Transform, Load) #Scrum #Databricks #Azure DevOps #Compliance #Azure Databricks #Data Integration #Python #Monitoring #Storage
Role description
Role: Databricks Data Engineer
Seattle is highly preferred
Open to 100% on-site: St. Louis, Dallas/Plano, Charleston SC, Ridley Park PA
3 OPEN POSITIONS
US CITIZEN OR GREEN CARD
Must Have Technical/Functional Skills
· Strong programming skills in Python and PySpark.
· Advanced proficiency writing SQL for analytics and ETL processes.
· Proven experience building and optimizing complex data pipelines in Azure.
· Hands-on experience with Azure Databricks: cluster management, job scheduling, workspace governance.
· Strong working knowledge of core Azure services:
Storage Account, Synapse, Key Vault, VMSS, Function Apps, Web Apps, Log Analytics Workspace, service principals, and managed identities.
· Experience with container services (ACA, container instances) and containerized data workloads.
· Familiarity with Azure networking concepts and secure network integration for data platforms.
· Experience creating Azure infrastructure using ARM templates.
· Proficient with GitLab and Azure DevOps for CI/CD and source control workflows.
· Strong analytical, problem-solving, and communication skills; proven ability to work cross-functionally. Experience working in Agile teams and an understanding of data governance frameworks.
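To give candidates a sense of the Python/SQL ETL work the bullets above describe, here is a minimal, stdlib-only sketch of an extract-transform-load step. It uses sqlite3 purely as a stand-in for the Azure data stores named in this posting, and all table and column names are hypothetical:

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Toy ETL: extract raw events, transform (dedupe + cast), load into a mart table."""
    cur = conn.cursor()
    # Extract: a raw staging table, as it might land from an ingestion job.
    cur.execute("CREATE TABLE raw_events (id TEXT, amount TEXT)")
    cur.executemany("INSERT INTO raw_events VALUES (?, ?)",
                    [("a", "10"), ("a", "10"), ("b", "2.5")])
    # Transform + load: dedupe and cast amounts to REAL in a single SQL statement.
    cur.execute("""
        CREATE TABLE mart_events AS
        SELECT id, CAST(amount AS REAL) AS amount
        FROM raw_events
        GROUP BY id, amount
    """)
    conn.commit()
    return cur.execute("SELECT COUNT(*) FROM mart_events").fetchone()[0]

if __name__ == "__main__":
    print(run_etl(sqlite3.connect(":memory:")))  # 2 distinct rows survive the dedupe
```

In a Databricks pipeline the same shape would typically be expressed with PySpark DataFrames or Spark SQL over cloud storage rather than sqlite3.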
Responsibilities
· Design, develop, and maintain end-to-end data pipelines and ETL/ELT workflows using PySpark and Python.
· Implement, optimize, and monitor large-scale data processing workloads in Azure Databricks, including cluster configuration, autoscaling, and governance.
· Build and maintain data integration and orchestration solutions using Azure services to meet performance, availability, and security requirements.
· Collaborate with data consumers, thread authors/owners, and stakeholders to gather business requirements, prioritize needs, and translate analytical objectives into technical designs.
· Implement secure data access patterns using Azure Active Directory, Managed Identities, and service principals.
· Author Infrastructure-as-Code for Azure resources (ARM templates) and deploy consistent, repeatable environments.
· Configure and operate Azure components including Storage Account, Synapse, Key Vault, VMSS, Function Apps, Web Apps, Log Analytics Workspace, Azure Container Apps / container instances, and related services.
· Collaborate with networking and security teams to design and implement Azure networking for data solutions.
· Implement monitoring, alerting, and cost optimization for data workloads (Log Analytics, metrics, and dashboards).
· Use GitLab and Azure DevOps for source control, CI/CD pipelines, and release management.
· Follow Agile/Scrum practices and participate in sprint planning, standups, and retrospectives.
· Ensure solutions meet data governance, lineage, and compliance requirements.
· Provide operations and on-call support for production issues and deployments.
· Excellent communication skills.
· Ability to collaborate with legacy and modernization application teams and stakeholders.
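The monitoring and alerting responsibility above reduces to evaluating metrics against configured thresholds. A minimal sketch of that check follows; the metric names and thresholds are hypothetical, and a real deployment would query Log Analytics rather than an in-memory dict:

```python
def evaluate_alerts(metrics: dict[str, float], thresholds: dict[str, float]) -> list[str]:
    """Return the names of metrics that breach their configured threshold."""
    return sorted(name for name, value in metrics.items()
                  if name in thresholds and value > thresholds[name])

if __name__ == "__main__":
    metrics = {"cluster_cpu_pct": 92.0, "job_duration_min": 30.0}
    thresholds = {"cluster_cpu_pct": 85.0, "job_duration_min": 60.0}
    # Only cluster_cpu_pct exceeds its threshold here.
    print(evaluate_alerts(metrics, thresholds))
```

In practice this kind of rule would live in an Azure Monitor alert rule or a scheduled job rather than application code, but the threshold-comparison logic is the same.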





