

First Point Group
Software Data Engineer / Databricks
Featured Role | Apply direct with Data Freelance Hub
This role is for a Software Data Engineer on a 6-month contract, offering a pay rate of "XX" per hour. Required skills include 3+ years of Databricks on Azure, Apache Spark, Python, and SQL, plus experience with data pipelines and ML applications.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
November 7, 2025
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Dallas, TX
-
Skills detailed
#Spark (Apache Spark) #Monitoring #ETL (Extract, Transform, Load) #DevOps #Linux #Azure #SQL (Structured Query Language) #ML (Machine Learning) #Datadog #Datasets #Spark SQL #TensorFlow #Scripting #Azure Stream Analytics #Batch #Azure Event Hubs #PySpark #PyTorch #Synapse #Delta Lake #Data Lake #Azure DevOps #AI (Artificial Intelligence) #Data Engineering #Data Architecture #Python #ADF (Azure Data Factory) #Azure Data Factory #Java #Scala #Data Quality #Data Pipeline #Apache Spark #Databricks #GitHub #Apache Kafka #Airflow #Computer Science #Docker #Grafana #ADLS (Azure Data Lake Storage) #Kafka (Apache Kafka) #Unix
Role description
Are you passionate about building scalable data solutions that power real-time insights and machine learning? Join our team as a Software Data Engineer, where you'll design and maintain cutting-edge data architectures on Azure using Databricks.
What You'll Do:
• Design and optimize data pipelines and ETL/ELT workflows using Databricks on Azure (a minimal sketch follows this list).
• Build and manage modern data platforms, including data lakes, warehouses, and lakehouses, for both streaming and batch processing.
• Integrate large-scale datasets from diverse sources using Delta Lake and other formats.
• Develop feature stores and data preparation workflows for ML applications.
• Ensure data quality through validation frameworks and governance standards.
• Monitor and optimize workflows for performance, cost-efficiency, and reliability.
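For a sense of the day-to-day work, the following is a minimal sketch of a Databricks-style batch step in PySpark writing to Delta Lake. The ADLS path, table names, and the single data-quality rule are illustrative assumptions, not details taken from this role.

# Minimal batch ETL sketch: read raw CSV from ADLS, apply a simple quality gate, append to Delta tables.
# The path, column names, and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("abfss://landing@example.dfs.core.windows.net/orders/"))  # hypothetical ADLS location

valid = raw.filter(F.col("order_id").isNotNull())      # keep rows with a usable key
rejected = raw.filter(F.col("order_id").isNull())      # quarantine the rest for review

(valid
 .withColumn("ingested_at", F.current_timestamp())
 .write.format("delta").mode("append")
 .saveAsTable("bronze.orders"))

(rejected
 .write.format("delta").mode("append")
 .saveAsTable("quarantine.orders_rejected"))

In practice a step like this would typically run as a scheduled Databricks job or Workflow task, which is where the orchestration and monitoring responsibilities above come in.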
What We're Looking For:
• 3+ years of hands-on experience with Databricks on Azure, Apache Spark (PySpark/Spark SQL), and Delta Lake.
• Strong knowledge of Azure Data Factory, ADLS, Synapse Analytics, and Azure ML Studio.
• Proficiency in Python, SQL, and Unix/Linux scripting; Java/Scala is a plus.
• Experience with streaming technologies such as Apache Kafka, Azure Event Hubs, and Azure Stream Analytics (see the streaming sketch after this list).
• Familiarity with CI/CD tools (Azure DevOps, GitHub Actions) and workflow orchestration (Airflow, ADF, Databricks Workflows).
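On the streaming side, here is a comparable hedged sketch using Spark Structured Streaming against a Kafka-compatible endpoint (Azure Event Hubs exposes one). The broker address, topic, schema, and checkpoint path are assumptions for illustration, and real Event Hubs access would also require SASL/OAuth options that are omitted here.

# Streaming ingest sketch: Kafka-compatible source -> Delta table, with checkpointing for recovery.
# Broker, topic, schema, and paths are illustrative; authentication options are omitted.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("telemetry-stream-sketch").getOrCreate()

event_schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "example.servicebus.windows.net:9093")  # hypothetical endpoint
          .option("subscribe", "telemetry")
          .load()
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/telemetry")  # enables restart and exactly-once bookkeeping
         .outputMode("append")
         .toTable("bronze.telemetry"))

query.awaitTermination()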
Bonus Skills:
• Experience with ML frameworks (scikit-learn, TensorFlow, PyTorch) and feature engineering on Azure.
• Familiarity with LLM applications, prompt engineering, and AI agent frameworks (Azure OpenAI, Semantic Kernel).
• Containerization experience with Docker and AKS.
• Monitoring tools such as Azure Monitor, Datadog, and Grafana.
Education:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.