

Databricks Developer with Apache Spark
⭐ - Featured Role | Apply direct with Data Freelance Hub
This is an on-site contract role for a Databricks Developer with Apache Spark in Charlotte, NC. Key skills include Databricks, Apache Spark, and ETL processes, plus proficiency in Python, PySpark, Scala, and SQL. Experience with AWS and Delta Sharing is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
May 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Databricks #Snowflake #Data Architecture #ML (Machine Learning) #Data Pipeline #Data Quality #SAP #SQL (Structured Query Language) #Azure #Programming #Cloud #Data Science #AWS (Amazon Web Services) #Apache Spark #Databases #Python #Data Analysis #Spark (Apache Spark) #ETL (Extract, Transform, Load) #NoSQL #PySpark #Scala
Role description
Role: Databricks Developer with Apache Spark
Location: Charlotte, NC (Day 1 Onsite)
Job Type: Contract
Description:
• The candidate will be responsible for designing, implementing, and optimizing data solutions on the Databricks platform.
Key Responsibilities:
• Develop, maintain, and optimize data pipelines and workflows using Databricks.
• Integrate with third-party applications such as Salesforce and SAP, as well as external file feeds.
• Implement ETL processes and ensure data quality and integrity (a sketch follows this list).
• Design and implement scalable data architectures and workflows.
• Perform data analysis and generate actionable insights.
• Monitor and troubleshoot performance issues.
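To give candidates a concrete sense of the pipeline and data-quality work these bullets describe, here is a minimal PySpark sketch of an ETL step with a basic quality gate. The file path, column names, and target table are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session is provided; getOrCreate() also works locally.
spark = SparkSession.builder.getOrCreate()

# Extract: read an external file feed (hypothetical path and schema).
orders = spark.read.option("header", True).csv("/mnt/landing/orders.csv")

# Transform: enforce simple data-quality rules before loading.
clean = (
    orders
    .filter(F.col("order_id").isNotNull())              # drop rows missing a key
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])                        # de-duplicate on the key
)

# Integrity check: fail the job rather than load bad data downstream.
bad_rows = clean.filter(F.col("amount") < 0).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed the amount >= 0 check")

# Load: append to a Delta table (hypothetical name) for downstream consumers.
clean.write.format("delta").mode("append").saveAsTable("etl.orders_clean")
```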
Required Skills and Qualifications:
• Proven experience with Databricks and Apache Spark.
• Strong experience implementing the medallion (bronze/silver/gold) architecture; see the sketch after this list.
• Strong understanding of data warehousing concepts and ETL processes.
• Strong experience with Delta Sharing to Snowflake, Fabric OneLake, and AWS (a consumer-side example closes this posting).
• Proficiency in programming languages such as Python, PySpark, Scala, and SQL.
• Experience with cloud platforms like Azure and AWS.
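For candidates unfamiliar with the term, medallion architecture stages data through bronze (raw), silver (cleaned), and gold (business-level) layers. A minimal PySpark/Delta sketch of that flow follows; all paths and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land the raw feed as-is so it can always be replayed.
bronze = spark.read.json("/mnt/raw/salesforce/accounts/")
bronze.write.format("delta").mode("overwrite").save("/mnt/bronze/accounts")

# Silver: cleaned and conformed records.
silver = (
    spark.read.format("delta").load("/mnt/bronze/accounts")
    .filter(F.col("account_id").isNotNull())
    .dropDuplicates(["account_id"])
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/accounts")

# Gold: business-level aggregate ready for reporting.
gold = silver.groupBy("region").agg(F.count("*").alias("account_count"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/account_counts")
```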
Preferred Qualifications:
• Experience with machine learning and data science.
• Certifications in Databricks or cloud platforms.
• Knowledge of SQL and NoSQL databases.
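On the Delta Sharing requirement: the open-source delta-sharing Python connector reads tables a data provider has shared. A minimal consumer-side sketch, assuming a hypothetical profile file and share/schema/table names:

```python
import delta_sharing

# Profile file and table coordinates are hypothetical placeholders.
profile = "/path/to/config.share"
table_url = profile + "#sales_share.default.accounts"

# Load a small shared table into pandas...
df = delta_sharing.load_as_pandas(table_url)
print(df.head())

# ...or into Spark for larger data (needs the delta-sharing Spark connector):
# spark_df = delta_sharing.load_as_spark(table_url)
```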