Lorven Technologies Inc.

Senior Engineer (Agentic + Data/ML)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Engineer (Agentic + Data/ML) on a long-term, 100% remote contract with an unspecified pay rate. It requires 5-10+ years in software/data/ML engineering, strong Python skills, and experience running orchestration systems such as Dagster on AWS.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Anomaly Detection #AWS (Amazon Web Services) #Monitoring #GIT #SonarQube #Batch #Data Quality #Public Cloud #Data Modeling #Data Pipeline #Cloud #Classification #Lambda (AWS Lambda) #ML (Machine Learning) #Airflow #Python #S3 (Amazon Simple Storage Service) #Migration #API (Application Programming Interface) #Jira #Observability #Metadata
Role description
Job Title: Senior Engineer (Agentic + Data/ML)
Location: 100% remote
Job Type: Long-term contract

Project description
• You will join a core engineering team within one of the leading U.S. video‑content providers, supporting hundreds of distributed software engineering teams and operating across a large‑scale infrastructure that spans internal data centers, broadcast facilities, and public cloud (AWS).
• The project focuses on building agentic retrieval workflows, data pipelines, and ML/LLM‑driven scoring systems that enhance engineering productivity and code‑quality metrics.
• You will design retrieval agents, optimize LLM prompting strategies, build robust orchestration, and integrate backend services that process PR, commit, and code‑quality metadata at scale.

Target background
• 5-10+ years as a software/data/ML engineer.
• Strong Python expertise and familiarity with modern data/ML tooling.
• Proven delivery of production-grade orchestration systems (Dagster/Airflow) on AWS.
• Practical experience with prompt/LLM optimization (DSPy preferred).
• Comfortable working with Git/PR metadata, code diffs, backfills, and performance‑critical data pipelines.

Responsibilities
• CES improvements: PR-level scoring, historical context, context-retrieval agent, automated score correction, training-data preparation from feedback.
• Data/PR tracking: ingest PRs, link them to commits/branches, track merges to main, run backfills.
• Orchestration & reliability: Dagster migration, retries, scheduling, monitoring, data-quality alerts.
• LLM/prompt optimization: DSPy-based prompt tuning, eval-set creation, feedback-driven corrections, prompt cost/quality tradeoffs.
• Metadata & classification: role/work-type classification, epic-linking improvements, filters/explorer features.

Skills

Must have
• Agentic coding: build retrieval agents that pull code context/diffs/history to improve scoring (aligns with CES Phase 3 and quality-metric work).
• Data pipelines: Dagster (preferred) or an equivalent orchestrator; AWS-native pipelines (ECS/EventBridge/Lambda/S3/Glue experience a plus); robust backfills and retries.
• LLM/prompting: DSPy or similar prompt/policy optimization; prompt chaining; context packing; eval design and execution.
• Data modeling & analytics: PR/commit linking, branch/main tracking, epic linking; quality/effort scoring signals; anomaly detection for data quality.
• Backend integration: building services that ingest PR/commit metadata, compute CES/quality metrics, and expose them to UI/API.
• Eval/feedback loops: design and run eval sets; collect user feedback; close the loop with automated/manual score correction.

Nice to have
• Experience with code-scoring/effort or quality metrics; SonarQube integration.
• Graph-ish linking across artifacts (Jira/PR/commit/branch) for epic/work-type classification.
• Experience tuning performance/cost for LLM calls (OSS LLM evals, batching, caching).
• Observability for pipelines (metrics/traces/logs) and data-quality alerting.
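To make the "retries" expectation above concrete: a minimal Python sketch of retry-with-backoff for a flaky pipeline step. This is illustrative only — orchestrators like Dagster expose this natively (e.g. via its `RetryPolicy`), and the `with_retries` decorator here is a hypothetical name, not part of any library mentioned in the listing.

```python
import time
from functools import wraps

def with_retries(max_attempts=3, base_delay=0.1):
    """Retry a flaky callable with exponential backoff.

    Illustrative sketch; production orchestrators provide
    retries, scheduling, and alerting out of the box.
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # exhausted: surface the failure
                    # back off: base_delay, 2x, 4x, ...
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator
```

In practice a candidate would express the same policy declaratively in the orchestrator rather than hand-rolling it; the point is understanding the semantics (bounded attempts, backoff, final failure surfaced).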
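The "PR/commit linking" and "branch/main tracking" bullets can be sketched in plain Python. All names here (`PullRequest`, `link_prs_to_commits`, `merged_commit_shas`) are hypothetical, chosen for illustration — they are not from any system described in the listing.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Commit:
    # Hypothetical minimal commit record
    sha: str
    author: str

@dataclass
class PullRequest:
    # Hypothetical PR record linking a branch to its commits
    number: int
    branch: str
    commits: list = field(default_factory=list)
    merged_to_main: bool = False

def link_prs_to_commits(prs, commits_by_branch):
    """Attach commits to each PR by branch name (naive join)."""
    for pr in prs:
        pr.commits = commits_by_branch.get(pr.branch, [])
    return prs

def merged_commit_shas(prs):
    """SHAs that have reached main via a merged PR."""
    return {c.sha for pr in prs if pr.merged_to_main for c in pr.commits}
```

A real ingestion pipeline would join on richer keys (merge commits, head SHAs) and handle rebases and force-pushes, but the core data-modeling task — PRs joined to commits, filtered by merge-to-main status — is the same shape.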