

Legal Builders
Part-Time Developer - AI Workflow Systems
Featured Role | Apply direct with Data Freelance Hub
This role is for a Part-Time Developer - AI Workflow Systems, offering 10-15 hours per week at a competitive pay rate. Required skills include Python, API integrations, debugging, and AI-assisted development. Experience with Azure and compliance in sensitive data environments is preferred.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
April 9, 2026
Duration
Unknown
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
United States
Skills detailed
#Security #PostgreSQL #GIT #YAML (YAML Ain't Markup Language) #ChatGPT #GitHub #Databases #SQL (Structured Query Language) #Data Processing #Cloud #Pytest #AI (Artificial Intelligence) #API (Application Programming Interface) #Microsoft Azure #Langchain #Observability #Storage #Debugging #Python #Vault #KQL (Kusto Query Language) #Azure #JSON (JavaScript Object Notation) #Terraform #Model Evaluation #SQLAlchemy #Compliance
Role description
About the Role
We are a small team building an AI-driven workflow system that accelerates human review of high-volume, unstructured legal documents.
We are expanding the engineering team and are seeking a part-time developer to support ongoing development, integrations, operational improvements, and security/compliance work. This role works closely with our Cloud Engineer / AI Operations lead.
This is an ideal role for a practical, methodical problem-solver who can navigate an existing codebase, diagnose issues confidently, and ship reliable improvements in a fast-moving environment. We are looking for someone with solid foundational engineering instincts, a genuine curiosity for how things work, and the ability to leverage AI coding tools effectively to punch above their weight.
Responsibilities
β’ Build and maintain API integrations within the workflow system (Zendesk, Google Sheets, enrichment services)
β’ Extend and improve data processing pipelines across the queue-driven Azure Functions architecture
• Diagnose and resolve issues across the pipeline, tracing failures through logs, queues, and service boundaries
β’ Work with LLM-powered classifiers and enrichment services, including prompt engineering and model evaluation
β’ Contribute to the YAML-driven workflow engine, JSONLogic rules, and response pipeline
β’ Help improve reliability, observability, and operational tooling (App Insights)
β’ Contribute to secure coding practices, safe handling of sensitive legal data, and implementation of SOC 2 technical controls
β’ Write and maintain tests (pytest), participate in code review, and follow pre-commit quality gates (black, isort, flake8, etc.)
β’ Document implementation details and operational procedures
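To give a flavor of the JSONLogic rules mentioned above, here is a minimal, illustrative evaluator for a small subset of the JSONLogic format (`var`, `==`, `and`, `or`). The operator subset, field names, and rule shape are hypothetical examples, not our actual workflow schema:

```python
def apply_rule(rule, data):
    """Evaluate a small subset of JSONLogic: var, ==, and, or."""
    if not isinstance(rule, dict):
        return rule  # literals (strings, numbers) evaluate to themselves
    op, args = next(iter(rule.items()))
    if op == "var":
        # {"var": "ticket.status"} -> walk the dotted path into the data dict
        value = data
        for key in args.split("."):
            value = value.get(key) if isinstance(value, dict) else None
        return value
    vals = [apply_rule(a, data) for a in args]
    if op == "==":
        return vals[0] == vals[1]
    if op == "and":
        return all(vals)
    if op == "or":
        return any(vals)
    raise ValueError(f"unsupported operator: {op}")

# Hypothetical rule: route a ticket only when it is open AND high priority
rule = {"and": [
    {"==": [{"var": "ticket.status"}, "open"]},
    {"==": [{"var": "ticket.priority"}, "high"]},
]}
print(apply_rule(rule, {"ticket": {"status": "open", "priority": "high"}}))  # True
```

Real JSONLogic implements many more operators; this sketch only shows the evaluation pattern a rules-driven response pipeline relies on.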
Core Skills & Qualities
Debugging & Problem-Solving (Essential)
You should be able to read a stack trace, reason through a failure, and isolate root cause without being told where to look. We want someone who enjoys the detective work of figuring out why something broke and what the right fix actually is.
β’ Methodical approach to diagnosing failures across multi-service, async pipelines
β’ Ability to read unfamiliar code and form accurate mental models quickly
β’ Comfort working in an active production codebase with real constraints
β’ Strong written communication around what you found and what you changed
AI-Assisted Development (Essential)
AI coding tools (Cursor in particular) are central to how we work. We expect you to be genuinely skilled at this: not just using autocomplete, but knowing how to prompt effectively to get accurate, context-aware results. This means understanding when to trust the model, when to push back on it, and how to construct prompts that elicit insightful rather than generic responses.
β’ Experienced with Cursor, GitHub Copilot, or equivalent AI coding assistants
β’ Ability to craft precise, context-rich prompts that produce reliable, reasoned outputs
• Knows how to verify AI-generated code and reviews it critically rather than accepting it blindly
β’ Uses LLM chat (Claude, ChatGPT, etc.) as a reasoning partner, not just a code generator
β’ Security-aware: understands the risks of pasting sensitive code or data into AI tools
Technical Foundation
β’ Proficiency in Python (3.10+) with strong typing, clean architecture, and modular design
β’ Experience building or maintaining API integrations and working with JSON, queues, and async workflows
β’ Familiarity with SQL databases (PostgreSQL preferred) and ORM patterns (SQLAlchemy)
β’ Comfortable with Git workflows, pull requests, and CI/CD pipelines
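As an example of the typed, modular style we mean, here is a short sketch of decoding a queue message into a validated object. The message shape and field names (`document_id`, `source`, `retry_count`) are invented for illustration, not our actual payload schema:

```python
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class EnrichmentTask:
    """Hypothetical queue-message payload for a document enrichment step."""
    document_id: str
    source: str
    retry_count: int = 0


def parse_task(raw: str) -> EnrichmentTask:
    """Decode a queue message into a validated task, failing loudly on bad input."""
    payload = json.loads(raw)
    missing = {"document_id", "source"} - payload.keys()
    if missing:
        raise ValueError(f"message missing fields: {sorted(missing)}")
    return EnrichmentTask(
        document_id=str(payload["document_id"]),
        source=str(payload["source"]),
        retry_count=int(payload.get("retry_count", 0)),
    )


task = parse_task('{"document_id": "doc-42", "source": "zendesk"}')
print(task.document_id, task.retry_count)  # doc-42 0
```

Validating at the queue boundary like this keeps failures local and debuggable instead of letting malformed data propagate downstream.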
Preferred Experience (Not Required)
β’ Microsoft Azure services: Functions, Storage Queues, Key Vault, Azure OpenAI Service
β’ OpenAI / Azure OpenAI / LangChain for LLM integration and prompt engineering
β’ Terraform or other infrastructure-as-code tooling
β’ Zendesk APIs or ticketing system integrations
β’ Observability tools: Azure App Insights, KQL
β’ Pre-commit hooks and linting pipelines (black, flake8, automated security scanning)
• SOC 2 compliance efforts: implementing technical controls or supporting audit preparation
β’ Systems handling sensitive or regulated data (legal, healthcare, financial)
β’ Vue.js / Nuxt 3 for dashboard or UI contributions
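For context on the pre-commit quality gates mentioned above, a typical `.pre-commit-config.yaml` wiring up black, isort, and flake8 looks roughly like this (revisions shown are placeholders, not our pinned versions):

```yaml
# Illustrative pre-commit config; pin revisions to your project's versions.
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
```

With this in place, `pre-commit run --all-files` applies the same formatting and lint checks locally that the quality gate enforces on commit.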
Work Structure
• Part-time engagement (approximately 10-15 hours per week)
β’ Remote with flexible schedule and asynchronous collaboration
β’ Work is coordinated with the lead Cloud / AI Systems Engineer






