

E-Solutions
Cloud Data Developer with Strong Scala
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Cloud Data Developer with strong Scala skills, located onsite in Iselin, NJ. Contract length is unspecified. The role focuses on Azure, Databricks, PostgreSQL, and Elasticsearch; key skills include CI/CD practices and agile collaboration, and certifications are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Iselin, NJ
-
🧠 - Skills detailed
#Data Lake #Code Reviews #Azure Databricks #Data Pipeline #Cloud #GitLab #Kubernetes #Data Engineering #Compliance #PostgreSQL #Agile #Data Governance #Programming #Microsoft Azure #Azure #Azure DevOps #GitHub #Database Design #DevOps #Security #Databricks #Data Security #Elasticsearch #Spark (Apache Spark) #Apache Spark #Docker #Data Science #Azure cloud #ML (Machine Learning) #Scala
Role description
Role: Cloud Data Developer with Strong Scala
Location: Iselin, NJ (onsite role)
Looking for PostgreSQL, Elasticsearch, Scala, and Databricks experience.
We are seeking an experienced Cloud Data Developer with a strong background in Azure Cloud, Databricks, Elasticsearch, PostgreSQL, and Scala. Responsibilities include building scalable data pipelines, implementing CI/CD practices, and collaborating with cross-functional teams in agile environments. Familiarity with Quantexa is a plus. The role is ideal for candidates focused on cloud-native data engineering and enterprise-scale data solutions.
Key Responsibilities:
• Design and implement scalable data pipelines using Azure Databricks and Scala (see the sketch after this list).
• Develop and optimize queries and data models in PostgreSQL and Elasticsearch.
• Collaborate with DevOps teams to build and maintain CI/CD pipelines for data applications.
• Monitor and troubleshoot data workflows and cloud infrastructure.
• Work closely with data scientists, analysts, and other developers to deliver high-quality data solutions.
• Ensure data security, compliance, and best practices in cloud environments.
• Monitor and optimize application performance, reliability, and cost.
• Write clean, maintainable, and well-tested code following best practices.
• Troubleshoot and resolve issues across development, test, and production environments.
• Participate in agile ceremonies such as sprint planning, stand-ups, and retrospectives.
• Contribute to code reviews, knowledge sharing, and mentoring of junior developers.
• Communicate effectively across teams to align on project goals, timelines, and deliverables.
• Foster a collaborative team environment by promoting open communication and continuous improvement.
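To make the stack concrete, here is a minimal, hypothetical sketch of the kind of pipeline described above: Spark on Azure Databricks reading an orders table from PostgreSQL over JDBC, aggregating it in Scala, and indexing the result into Elasticsearch. All names (hosts, tables, columns, the OrdersPipeline object) are illustrative assumptions, not details from this posting; the Elasticsearch write assumes the elasticsearch-spark connector library is attached to the cluster.
```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object OrdersPipeline {
  // Pure transformation kept separate from I/O so it can be unit-tested:
  // total revenue per customer per calendar day.
  def dailyRevenue(orders: DataFrame): DataFrame =
    orders
      .groupBy(col("customer_id"), to_date(col("ordered_at")).as("day"))
      .agg(sum("amount").as("revenue"))

  def main(args: Array[String]): Unit = {
    // On Azure Databricks a SparkSession already exists; getOrCreate reuses it.
    val spark = SparkSession.builder().appName("orders-pipeline").getOrCreate()

    // Hypothetical source: an orders table read from PostgreSQL over JDBC.
    val orders = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/sales")
      .option("dbtable", "public.orders")
      .option("user", sys.env.getOrElse("PG_USER", ""))
      .option("password", sys.env.getOrElse("PG_PASSWORD", ""))
      .load()

    // Hypothetical sink: index the aggregate into Elasticsearch via the
    // elasticsearch-spark connector.
    dailyRevenue(orders).write
      .format("org.elasticsearch.spark.sql")
      .option("es.nodes", "es-host:9200")
      .option("es.resource", "daily-revenue")
      .mode("append")
      .save()
  }
}
```
Keeping the transformation (dailyRevenue) separate from the I/O is a common design choice for this kind of work: the JDBC and Elasticsearch details stay in main, while the business logic remains testable in isolation.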
Required Skills & Qualifications:
• Strong experience with Microsoft Azure services (e.g., Data Lake, Azure Functions, Azure DevOps).
• Proficiency in Databricks and Apache Spark.
• Solid understanding of Elasticsearch for search and analytics.
• Experience with PostgreSQL database design and optimization.
• Proficient in the Scala programming language (a unit-test sketch follows this list).
• Hands-on experience with CI/CD tools (e.g., Azure DevOps, GitLab, GitHub).
• Experience with containerization tools like Docker and orchestration with Kubernetes.
• Familiarity with Agile development methodologies.
• Excellent communication and teamwork skills.
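As a companion to the pipeline sketch above, and to the "well-tested code" responsibility, here is a minimal example of how the hypothetical dailyRevenue transformation might be unit-tested with ScalaTest and a local SparkSession. It assumes spark-sql and scalatest on the test classpath (plus elasticsearch-spark-30 for the pipeline itself); nothing here comes from the posting.
```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.funsuite.AnyFunSuite

class DailyRevenueSpec extends AnyFunSuite {
  test("daily revenue sums amounts per customer per day") {
    // A local SparkSession is enough to exercise the pure transformation.
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("daily-revenue-test")
      .getOrCreate()
    import spark.implicits._

    val orders = Seq(
      ("c1", "2025-10-01", 10.0),
      ("c1", "2025-10-01", 5.0),
      ("c2", "2025-10-02", 7.5)
    ).toDF("customer_id", "ordered_at", "amount")

    val result = OrdersPipeline.dailyRevenue(orders)
      .collect()
      .map(row => (row.getString(0), row.getDouble(2)))
      .toSet

    // The two c1 rows on the same day collapse into a single 15.0 total.
    assert(result == Set(("c1", 15.0), ("c2", 7.5)))
    spark.stop()
  }
}
```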
Preferred Qualifications:
• Knowledge of data governance and security best practices.
• Certifications in Azure or related technologies.
• Experience in designing data models and managing data pipelines at scale.
• Exposure to machine learning pipelines or data science workflows.