Korn Ferry

Software/Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Software/Data Engineer with 3+ years of experience, focusing on Databricks, PySpark, and Python. It is a remote contract paying $520 per day. Knowledge of Azure cloud services and data engineering practices is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date
January 16, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Boca Raton, FL
🧠 - Skills detailed
#Microsoft Azure #Azure Databricks #Databricks #Data Pipeline #ADLS (Azure Data Lake Storage) #Azure Blob Storage #Data Engineering #Infrastructure as Code (IaC) #Python #Scala #Data Processing #Terraform #PySpark #Spark SQL #SQL (Structured Query Language) #GIT #Delta Lake #Spark (Apache Spark) #Azure #Version Control #Azure ADLS (Azure Data Lake Storage) #Azure cloud #Storage #Data Lake #Cloud #Programming
Role description
We have partnered with our client in their search for a Software Engineer to support and develop their Databricks-based data platform during the development phase. This role focuses on building, improving, and validating reliability patterns early, rather than production firefighting. The ideal candidate is hands-on with Databricks, PySpark, and Python, and has working knowledge of Azure cloud services. The role partners closely with Data Engineering teams to ensure solutions are scalable, efficient, and production-ready.

Responsibilities
• Develop and support Databricks notebooks, jobs, and workflows
• Write and optimize PySpark and Python code for data processing
• Assist in designing scalable and reliable data pipelines
• Apply Spark best practices (partitioning, caching, joins, file sizing)
• Work with Delta Lake tables and data models
• Perform data validation and quality checks during development
• Support cluster configuration and sizing for development workloads
• Identify performance bottlenecks early and recommend improvements
• Collaborate with Data Engineers to prepare solutions for future production rollout
• Document development standards, patterns, and best practices

Skills Required

Databricks & Spark
• Hands-on experience with Databricks
• Strong knowledge of PySpark and Spark fundamentals
• Experience working with Delta Lake
• Understanding of Spark execution concepts (jobs, stages, tasks)

Programming
• Strong Python development skills
• Ability to write clean, modular, and reusable code

Cloud & Azure (Preferred)
• Experience working in Microsoft Azure
• Familiarity with Azure Databricks, Azure Data Lake Storage (ADLS), and Azure Blob Storage
• Basic understanding of cloud resource usage and cost awareness

Development Practices
• Experience using Git and version control
• Familiarity with Databricks Repos or similar workflows
• Ability to perform basic testing and validation of data pipelines

Nice to Have
• Experience with Spark SQL
• Exposure to CI/CD pipelines for data platforms
• Knowledge of infrastructure as code (Terraform, ARM)
• Familiarity with Azure Monitor or Log Analytics
• Interest in transitioning development work toward production reliability

Education & Work Experience
• 3+ years of hands-on experience
• Bachelor's degree required (Master's preferred)

Title: Software Engineer
Location: Remote (EST)
Client Industry: Legal and Professional

About Korn Ferry
Korn Ferry unleashes potential in people, teams, and organizations. We work with our clients to design optimal organization structures, roles, and responsibilities. We help them hire the right people and advise them on how to reward and motivate their workforce while developing professionals as they navigate and advance their careers. To learn more, please visit Korn Ferry at www.Kornferry.com.