Senior Backend Python Developer (15+ Years)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Backend Python Developer with 15+ years of experience, on a contract basis with a competitive pay rate. Key skills include Python backend development, Databricks experience, and data engineering expertise. Strong knowledge of RESTful APIs and cloud integration is required.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 27, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
Unknown
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
New York City Metropolitan Area
🧠 - Skills detailed
#Deployment #Databricks #ETL (Extract, Transform, Load) #Swagger #Django #Spark (Apache Spark) #ADF (Azure Data Factory) #Cloud #Docker #Flask #Scala #SQL (Structured Query Language) #Data Pipeline #Storage #Python #Azure #Azure Data Factory #Data Quality #Security #Automation #Data Engineering #Documentation #SQLAlchemy #Data Lake #PySpark #Programming #API (Application Programming Interface) #Delta Lake #Distributed Computing #pydantic #NoSQL #Schema Design #FastAPI #Logging #Monitoring #Data Modeling #Pytest #Data Processing
Role description
Role Summary

Skills

Python Backend Development:
• Strong expertise in Python 3.x, with a focus on backend systems.
• Experience with Python web frameworks (FastAPI, Flask, Django, or similar).
• Data validation and serialization using Pydantic.
• ORM experience (SQLAlchemy, SQLModel, or similar).
• RESTful API design, implementation, and documentation (OpenAPI/Swagger).
• Unit, integration, and end-to-end testing of APIs (pytest, unittest).
• Security best practices (authentication, authorization, API security).
• Asynchronous programming with Python (async/await, asyncio).
• Performance optimization, caching strategies, and error handling.
• Experience with Docker and containerized backend deployments.

Databricks & Data Engineering:
• Experience with Databricks for large-scale data processing and analytics.
• Proficiency in writing and optimizing PySpark jobs/notebooks for ETL and data transformation.
• Strong understanding of distributed computing concepts.
• Working knowledge of data lake architectures and Delta Lake.
• Building scalable data pipelines using Azure Data Factory and Databricks.
• Automation of data quality checks, monitoring, and logging.
• Integration with cloud data sources (Azure Blob, Data Lake Storage, SQL/NoSQL databases).
• Data modeling and schema design for analytical workloads.
• Experience with CI/CD for Databricks notebooks and jobs.
• Knowledge of workspace administration, cluster management, and job orchestration in Databricks.

Illustrative code sketches for both skill areas appear below.
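To make the backend bullets concrete, here is a minimal sketch of a FastAPI service with Pydantic validation and async endpoints. The Item model, the in-memory ITEMS store, and the routes are illustrative assumptions, not part of the posting; a real deployment would sit behind an ORM layer such as SQLAlchemy.

```python
# Minimal sketch: FastAPI service with Pydantic validation and async routes.
# Item and ITEMS are hypothetical names used only for illustration.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI(title="Example Service")  # interactive OpenAPI/Swagger docs at /docs

class Item(BaseModel):
    # Pydantic enforces types and constraints on request/response bodies.
    name: str = Field(min_length=1)
    price: float = Field(gt=0)

ITEMS: dict[int, Item] = {}  # stand-in for a real database/ORM layer

@app.post("/items/{item_id}", response_model=Item, status_code=201)
async def create_item(item_id: int, item: Item) -> Item:
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="item already exists")
    ITEMS[item_id] = item
    return item

@app.get("/items/{item_id}", response_model=Item)
async def read_item(item_id: int) -> Item:
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return ITEMS[item_id]
```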
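The API-testing bullet (pytest, unittest) could look like the following sketch, which exercises the service above through FastAPI's TestClient. The module name main is hypothetical.

```python
# Hedged example of API testing with pytest via FastAPI's TestClient.
from fastapi.testclient import TestClient

from main import app  # hypothetical module name for the sketch above

client = TestClient(app)

def test_create_then_read_item():
    payload = {"name": "widget", "price": 9.99}
    created = client.post("/items/1", json=payload)
    assert created.status_code == 201

    fetched = client.get("/items/1")
    assert fetched.status_code == 200
    assert fetched.json() == payload

def test_validation_rejects_bad_price():
    # Pydantic rejects a non-positive price, so FastAPI returns 422.
    response = client.post("/items/2", json={"name": "widget", "price": -1})
    assert response.status_code == 422
```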
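For the Databricks & Data Engineering side, here is a minimal PySpark sketch of an ETL step with a simple data-quality check and a Delta Lake write. The paths and column names (order_id, order_ts, quantity, unit_price) are placeholder assumptions; on Databricks the delta format is available out of the box.

```python
# Illustrative PySpark ETL: read raw data, transform, check quality, write Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON landed in a data-lake path (placeholder location).
raw = spark.read.json("/mnt/datalake/raw/orders/")

# Transform: normalize types and derive revenue and partition columns.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
)

# Data-quality check: fail fast if required keys are missing.
bad_rows = orders.filter(F.col("order_id").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows missing order_id; aborting load")

# Load: append into a Delta table, partitioned for analytical reads.
(orders.write.format("delta")
       .mode("append")
       .partitionBy("order_date")
       .save("/mnt/datalake/curated/orders/"))
```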