Mindlance

Lead Data Architect / Data Analyst (Python, SQL, GenAI)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Architect / Data Analyst on a 12+ month contract, offering a competitive pay rate. Candidates must have 5–10+ years of data engineering/analysis experience; expertise in Python, SQL, and GenAI; and experience in the banking/financial industry.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
880
-
🗓️ - Date
March 24, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #GIT #GitLab #JSON (JavaScript Object Notation) #Leadership #Docker #SQL Server #Data Science #Automation #XML (eXtensible Markup Language) #Data Lake #RDBMS (Relational Database Management System) #SQL Queries #Storage #Spark (Apache Spark) #Kubernetes #Azure #Java #Python #ML (Machine Learning) #Datasets #Scala #Anomaly Detection #Cloud #Clustering #GitHub #Microsoft SQL Server #Stories #Documentation #AWS (Amazon Web Services) #SciPy #Observability #MS SQL (Microsoft SQL Server) #Matplotlib #Data Processing #Data Analysis #Deployment #Kafka (Apache Kafka) #Data Quality #Data Storage #Data Engineering #Data Triage #Data Architecture #GCP (Google Cloud Platform) #Pandas #SQL (Structured Query Language) #NumPy #Airflow #PySpark #Libraries #Microsoft SQL
Role description
Please find details for this position below:

Client: Banking/Financial Industry
Title: Lead Data Architect / Lead Data Analyst (Python, SQL, GenAI)
Location: Charlotte, NC / Boston, MA / Dallas, TX / Irving, TX – Hybrid roles
Duration: 12+ months; extend or convert based on performance

Required Qualifications:
• Bachelor’s in CS, Engineering, Math, or equivalent practical experience.
• Typically 5–10+ years of combined data engineering/analysis experience (flexible with demonstrated impact and portfolio).
• You will provide deep technical leadership across mission-critical platforms.
• You will design and deliver scalable services in Python and Java on Red Hat OpenShift (OCP) and cloud, while serving as a hands-on expert in GenAI (Gemini/GPT), LLM evaluation, and agentic frameworks.
• You’ll set the bar for architecture, SDLC excellence, and CI/CD automation, and mentor engineers to raise the craft across teams.

Job Description – Data & ML Engineer:
We are seeking an experienced engineer (10+ years) who can independently analyze data given a problem statement and translate insights into production-ready solutions.

Key Responsibilities:
• Perform data analysis and exploration using SQL and statistical techniques to solve business problems.
• Design, develop, and implement solutions using Python or Java, leveraging libraries such as NumPy, SciPy, Matplotlib, and scikit-learn.
• Build and evaluate machine learning models, including Random Forest and XGBoost.
• Apply AI-assisted techniques by crafting effective prompts with Gemini models to accelerate data analysis, feature exploration, and insight generation.
• Communicate findings clearly and partner with engineering and business teams to drive outcomes.

Required Skills:
• Strong SQL and data analysis skills.
• Proficiency in Python or Java for data science and ML workloads.
• Hands-on experience with ML frameworks and model development.
• Ability to work independently, end to end, from problem definition to solution delivery.
• Experience using generative AI models to augment analytical workflows.

Functional Title: Senior Data Engineer / Data Analyst (Python, SQL, GenAI)
Business Area: Payments & Data Platforms

Required Qualifications (Must-Have):
• We’re specifically seeking someone strong in both data engineering and data analysis, who can code at a senior level.
• Expert Python for data engineering and analysis (pandas, PySpark or similar, modular design, testing).
• Advanced SQL (analytical window functions, performance optimization, CTEs, partitioning, large-scale joins).
• Java experience for service implementations (APIs, data services, utilities), with strong SDLC discipline.
• Proven with large datasets: profiling, cleaning, deduping, and synthesizing insights; comfort with semi-structured data (XML/JSON).
• Data lake / warehouse experience (e.g., Parquet, object storage, lakehouse patterns).
• Hands-on CI/CD (Git, pipelines, build/test/release automation) and containerized deployments (Docker/K8s; OpenShift/OCP highly preferred).
• Independent problem solver: break down ambiguous data issues, form hypotheses, validate with code, and communicate outcomes clearly.
• Practical GenAI usage: ability to craft prompts and evaluate LLM outputs for data triage, test-case generation, and analysis acceleration; disciplined about validation and bias/error checking.
• Foundational ML knowledge: familiarity with models and techniques relevant to data quality (e.g., clustering, similarity, dedup/record linkage, anomaly detection), and with when and how to apply them.

Preferred Qualifications (Nice-to-Have):
• Payments domain (especially wire payments): schemas, statuses, exceptions, reconciliation.
• Experience with streaming (Kafka), workflow/orchestration (Airflow), and feature engineering for ML.
• LLM evaluation methods, agentic frameworks, and prompt chaining; experience balancing precision/recall for operational use cases.
• Performance tuning across Python/SQL/Java and data storage formats.
• Familiarity with Microsoft SQL Server (or similar enterprise RDBMS).
• Experience hardening solutions for auditability, lineage, and data quality (DQ frameworks, profiling at ingest).

Day-to-Day Responsibilities:
• Translate backlog stories into technical plans; estimate, design, implement, test, and release.
• Build and optimize Python-based data processing jobs and SQL queries to support analytics and test-case derivation.
• Parse and normalize XML payment objects; implement dedup logic to identify unique records; generate high-coverage test cases.
• Create internal tools/scripts to improve developer productivity and data observability.
• Use GenAI (Gemini/GPT) responsibly: write prompts, evaluate outputs, and integrate it where it truly adds value, always with validation.
• Lead by example on code quality, documentation, and incident-free deployments.

Tech Stack:
Python, SQL, Java · Red Hat OpenShift (OCP), Kubernetes, Docker · Object Storage/Lakehouse (Parquet/Delta) · Airflow (or similar) · GitHub/GitLab CI · Observability (logs/metrics/traces) · XML/JSON parsing · [Cloud: AWS/Azure/GCP—specify]

EEO: Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of Minority/Gender/Disability/Religion/LGBTQI/Age/Veteran status.
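As a rough illustration of the "parse and normalize XML payment objects; implement dedup logic" responsibility above, the sketch below shows one common approach: canonicalize each record's fields, then hash the canonical form to detect duplicates. The payment schema, field names (reference, amount, currency), and sample data are illustrative assumptions only, not the client's actual format.

```python
# Hypothetical sketch: dedup simplified XML payment records by a normalized key.
# Schema and field names are assumptions for illustration, not the real payload.
import hashlib
import xml.etree.ElementTree as ET

SAMPLE = """
<payments>
  <payment><reference> wire-001 </reference><amount>100.0</amount><currency>usd</currency></payment>
  <payment><reference>WIRE-001</reference><amount>100.00</amount><currency>USD</currency></payment>
  <payment><reference>WIRE-002</reference><amount>55.10</amount><currency>USD</currency></payment>
</payments>
"""

def normalize(payment: ET.Element) -> tuple:
    """Canonicalize fields so cosmetic differences don't look like distinct records."""
    ref = payment.findtext("reference", "").strip().upper()
    amount = f"{float(payment.findtext('amount', '0')):.2f}"  # '100.0' -> '100.00'
    currency = payment.findtext("currency", "").strip().upper()
    return (ref, amount, currency)

def dedup(xml_text: str) -> list:
    """Return unique normalized payments, keeping first occurrence of each key."""
    seen, unique = set(), []
    for p in ET.fromstring(xml_text).iter("payment"):
        record = normalize(p)
        key = hashlib.sha256("|".join(record).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

print(dedup(SAMPLE))  # first two sample payments collapse into one record
```

In a real pipeline the same normalize-then-hash pattern would typically run inside a PySpark or pandas job, with the canonical key persisted so downstream test-case generation can reference unique records.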