

Azure Data Architect - Data Lake/Databricks
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Architect - Data Lake/Databricks SME in Raleigh, NC, working hybrid (4 days onsite/1 remote), lasting more than 24 months, with a pay rate of "insert rate". It requires 15+ years of IT experience, strong Azure Cloud skills, and expertise in Databricks.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 26, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Raleigh, NC
Skills detailed
#Deployment #Security #Visualization #NLP (Natural Language Processing) #Spark (Apache Spark) #DevOps #Hadoop #Azure Databricks #Databricks #Compliance #Continuous Deployment #"ETL (Extract, Transform, Load)" #ML (Machine Learning) #Perl #GIT #Scripting #Databases #Cloud #Data Bricks #Data Lake #Monitoring #Automation #Azure #SQL (Structured Query Language) #Tableau #Python #C++ #Migration #SAS #Documentation #Azure cloud #Data Architecture #Ruby #Sqoop (Apache Sqoop) #Apache Spark
Role description
Job Title: Azure Data Architect - Data Lake/Databricks SME
Location: Raleigh, NC / Hybrid (4 days onsite/1 remote)
Duration: Long-term (24+ Months)
Roles and Responsibilities include, but are not limited to:
Minimum 15 years IT Architecture/Systems experience
10 years' experience designing technical and business solutions, mentoring and training client staff, and overseeing implementation
Highly diverse technical and industry experience related to systems needs, development, process analysis, design, and reengineering
Possesses skills and experience related to business management, systems engineering, operations research, and management engineering
Assist with deployment, configuration, and management of Azure Cloud environment
Assist with migration efforts of existing ETL jobs into the Azure/Databricks cloud environment
Assist ETL staff with Cloud best practices, efficiency optimization, and knowledge sharing
Assist with operationalizing deployments and Cloud services for ETL Operations, including automation, documentation, and knowledge articles
Basic Qualifications:
Strong skills and experience in Cloud Operations support in Azure
Strong experience supporting large-scale/enterprise level Cloud environments
Strong experience supporting cloud services (compute, network, databases, etc.)
Experience using Databricks or other Spark-based platforms
Knowledge/experience with Sqoop, Oozie, Flume
Fluency in scripting languages: Python, Perl, Ruby (or equivalent)
Ability to troubleshoot cloud resource problems and perform complex system tests
Ability to automate solutions to repetitive problems/tasks
Experience with Git integration in continuous deployment
Experience with DevOps monitoring tools
Knowledge of best practices and IT operations in always-up/always-available environments
Documentation skills for internal procedures and services
Strong communication skills with team members
Significant experience with:
SAS, Python, C++, Hadoop, SQL (database/coding), Apache Spark, Machine Learning, Natural Language Processing, visualization tools (e.g., Tableau), and demonstrated experience with unstructured data
Data Transfer Requirements:
The selected resource will conduct knowledge-transfer sessions covering:
Monitoring (Infrastructure and Application-specific)
Troubleshooting
Incident Alerting and Avoidance
Dashboard Development
Settings and Configuration
Security
Testing of platform/applications
Preparing audit compliance reports
Reporting:
Monthly status reports must include:
Work progress
Identified risks/technical issues with recommendations and contingency plans
Schedule deviations with proposed recovery plans
Education:
Master's Degree from an accredited College/University in the applicable field of services