

Rivago Infotech Inc
Python Tech Lead
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Python Tech Lead in Iselin, NJ (Hybrid) with a contract length of "unknown" and a pay rate of "unknown." Key requirements include Python, Databricks, and experience leading engineering teams and building large-scale data solutions.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 18, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Iselin, NJ
-
🧠 - Skills detailed
#Monitoring #Azure #Strategy #Agile #GCP (Google Cloud Platform) #Logging #FastAPI #MongoDB #AWS (Amazon Web Services) #Django #Scala #Streamlit #Code Reviews #Cloud #NoSQL #Leadership #Jenkins #PySpark #GitHub #Angular #Azure DevOps #Scrum #MySQL #Flask #Kanban #Spark SQL #Python #ETL (Extract, Transform, Load) #Databricks #SQL (Structured Query Language) #Automated Testing #Databases #Spark (Apache Spark) #Docker #Git #DevOps #Data Architecture #Data Engineering #Data Processing #Observability #Microservices #Deployment #Data Pipeline #PostgreSQL
Role description
Role: Tech Lead (Python + Databricks + Streamlit)
Location: Iselin, NJ (Hybrid)
Position Overview
We are seeking a highly skilled Tech Lead with strong hands-on expertise in Python and Databricks, plus Angular (good to have), to lead the design, development, and implementation of scalable data and application solutions. The ideal candidate will guide engineering teams, architect end-to-end systems, and collaborate across functions to deliver high-quality, enterprise-grade products.
Key Responsibilities
Technical Leadership
• Lead a cross-functional engineering team across backend, frontend, and data engineering streams.
• Own the full SDLC: architecture, design, development, code reviews, DevOps pipeline oversight, and production deployment.
• Provide technical direction, mentor developers, enforce coding best practices, and promote engineering excellence.
• Work closely with product owners, architects, and stakeholders to define technical strategy and roadmap.
Backend Engineering (Python)
• Design and build robust microservices, APIs, and data processing frameworks using Python.
• Develop scalable ETL/ELT workflows and backend logic aligned with enterprise data standards.
• Implement best practices in error handling, logging, performance tuning, and automated testing.
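As a small illustration of the error-handling and logging practices listed above, a batch transform can log and skip malformed rows rather than fail the whole job. This is a hedged sketch only: the record shape (an "amount" field) and the function name are hypothetical, not taken from the job description.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("etl")

def normalize_amounts(records):
    """Convert raw string amounts to floats, skipping bad rows.

    Illustrative only: the 'amount' key is an assumed example field.
    """
    cleaned = []
    for i, rec in enumerate(records):
        try:
            cleaned.append({**rec, "amount": float(rec["amount"])})
        except (KeyError, ValueError) as exc:
            # Log and skip malformed rows instead of aborting the batch.
            logger.warning("skipping record %d: %r", i, exc)
    return cleaned
```

Isolating per-row failures this way keeps a pipeline resilient while the warnings remain visible to whatever log aggregation the platform provides.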
Data Engineering (Databricks)
• Build large-scale data pipelines and transformations using Databricks (PySpark/Spark SQL).
• Optimize data workflows for cost, performance, and reliability.
• Collaborate with data architects to define data models, governance, and quality frameworks.
• Integrate Databricks with cloud services (Azure/AWS) for end-to-end data solutions.
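The transformations above typically center on a group-and-aggregate step (in PySpark, `df.groupBy(...).agg(...)`). As a hedged sketch, the same pattern is shown here in plain Python so it runs without a Spark cluster; the "day" and "amount" column names are hypothetical examples, not fields from this role.

```python
from collections import defaultdict

def daily_totals(rows):
    """Sum amounts per day, mirroring a Spark
    groupBy("day").agg(sum("amount")) aggregation.

    'day' and 'amount' are assumed example columns.
    """
    totals = defaultdict(float)
    for row in rows:
        totals[row["day"]] += row["amount"]
    return dict(totals)
```

In Databricks the equivalent step would run distributed over partitions, but the grouping logic a tech lead reviews in PySpark code is the same shape.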
Cloud & DevOps
• Work with CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins, etc.) for automated builds and deployments.
• Ensure seamless deployment and monitoring of applications in cloud environments.
• Drive improvements in observability, reliability, and system health using monitoring tools.
Collaboration & Stakeholder Management
• Partner with business teams to translate requirements into scalable technology solutions.
• Estimate, plan, and track project deliverables.
• Guide teams in Agile development practices (Scrum/Kanban).
Required Skills & Qualifications
Technical Skills
• Strong hands-on experience with Python (Flask/FastAPI/Django desirable).
• Advanced knowledge of Databricks, PySpark, Spark SQL.
• Experience building large-scale distributed systems.
• Strong understanding of cloud platforms (Azure preferred, AWS/GCP acceptable).
• Familiarity with SQL/NoSQL databases (PostgreSQL, MySQL, Cosmos, MongoDB, etc.).
• Experience with Git, CI/CD pipelines, Docker, and containerized deployments.