Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position for a B2B contract, 100% remote, focusing on building APIs and scalable solutions on Azure. Key skills include Databricks, Python, API development, and DevOps practices. Experience with ML model deployment is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 5, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Python #Data Science #Azure #Terraform #Agile #Containers #Monitoring #DevOps #GitHub #Langchain #Spark (Apache Spark) #Storage #Data Engineering #SQL (Structured Query Language) #PySpark #FastAPI #Microservices #Model Deployment #Scala #Debugging #GIT #Django #Delta Lake #Automation #Logging #API (Application Programming Interface) #Deployment #Docker #Flask #ML (Machine Learning) #Data Processing #Azure DevOps #Databricks #Scrum #AI (Artificial Intelligence)
Role description
Your Responsibilities
• Build and maintain APIs and microservices (FastAPI/Django/Flask) supporting data and AI workflows
• Design and implement scalable solutions on Azure (Apps, Containers, Storage, SQL)
• Work with Databricks (PySpark, Delta Lake, Delta Live Tables) to process and integrate data
• Implement Git-based workflows, testing, and CI/CD automation (GitHub Actions/Azure DevOps)
• Apply DevOps-first practices with automation and deployment using Databricks Asset Bundles (DAB)
• Write clean, well-tested code and maintain high engineering standards
• Set up monitoring, logging, and alerting (Azure Monitor, Log Analytics, Cost Management)
• Contribute to solution architecture and propose improvements
• Collaborate with Data Scientists to deploy and maintain AI models in production (MLOps experience is a plus, not a must)

Your Manager
Software & Data Development is a team of experts who provide comprehensive support for software development – from designing and implementing front-end and back-end layers to building and maintaining data-driven solutions. The team is constantly growing and currently consists of 80 specialists.
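For a rough sense of the Git-based CI/CD work the responsibilities describe, a minimal GitHub Actions pipeline for a Python/Databricks project might look like the sketch below. All names, file paths, and steps are illustrative assumptions, not details from this posting.

```yaml
# Hypothetical CI workflow (illustrative only; paths and steps are assumed)
name: ci

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt   # assumed requirements file
      - run: pytest                            # run the unit test suite
      - run: databricks bundle validate        # check the Databricks Asset Bundle config
```

In a setup like this, merges to `main` would typically trigger a separate deploy job that runs `databricks bundle deploy` against the target workspace.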
Manager: Łukasz Czerwiński

We offer
• B2B contract
• 100% remote work
• Wide range of projects (internal and international)
• Dedicated certification budget
• Annual evaluation meetings to define an individual development path
• Benefits package
• Integration trips

Requirements
• Solid knowledge of the Databricks ecosystem: architecture, Delta Lake, and Delta Live Tables
• Strong Python skills (OOP, testing, clean code) with experience in advanced data processing (preferably PySpark)
• Hands-on experience with API development and integration using FastAPI (or Flask/Django)
• Practical experience with Azure services: Apps, Containers, Storage, SQL
• Familiarity with DevOps practices: automation-first mindset, CI/CD pipelines, and deployment automation (DAB)
• Experience with Git, agile teams, and working in Scrum-based environments
• Knowledge of Docker and Terraform/ARM is a plus
• Strong problem-solving skills, an ownership mindset, and the ability to collaborate across teams

Nice to Have
• Basic understanding of Generative AI and Agentic AI use cases
• Experience in debugging and optimizing Spark jobs (Photon, Catalyst, query plans)
• Experience with ML model deployment and GenAI tools (LangChain, vector DBs, ML pipelines)
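As a small illustration of the "strong Python skills (OOP, testing, clean code)" bar in the requirements, a candidate might be expected to write typed, testable helpers along these lines. The domain object and function names here are invented for the example.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Reading:
    """A single data reading (hypothetical domain object for the example)."""
    sensor_id: str
    value: float


def average_by_sensor(readings: list[Reading]) -> dict[str, float]:
    """Group readings by sensor id and return the mean value per sensor."""
    totals: dict[str, tuple[float, int]] = {}
    for r in readings:
        running_sum, count = totals.get(r.sensor_id, (0.0, 0))
        totals[r.sensor_id] = (running_sum + r.value, count + 1)
    return {sid: running_sum / count for sid, (running_sum, count) in totals.items()}
```

Code in this style is straightforward to cover with plain `pytest` unit tests, which matches the posting's emphasis on testing and CI/CD automation.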