

Wells Fargo
Senior Data Ops Engineer (contract)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Ops Engineer on a 12-month contract, based in Irving, TX (hybrid). Key skills include AI tools, data engineering, and cloud solutions. Public cloud certifications are preferred. U.S. work authorization is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 24, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Irving, TX
-
🧠 - Skills detailed
#Big Data #Data Pipeline #Data Quality #AWS (Amazon Web Services) #AI (Artificial Intelligence) #Data Engineering #Compliance #Data Lake #PySpark #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Azure #Data Management #Public Cloud #Consulting #Airflow #React #Langchain #Python #Data Lakehouse #Google Cloud Storage #Cloud #Spark (Apache Spark) #Metadata #Kafka (Apache Kafka) #Storage #BigQuery
Role description
Description
Title: Senior Data Ops Engineer
Location: Irving, TX
Alternative Location: Charlotte, NC, Chandler, AZ, Des Moines, IA, Minneapolis, MN
Duration: 12 months
Work Engagement: W2
Work Schedule: Hybrid 3 days in office/2 days remote
Benefits on offer for this contract position: Health Insurance, Life insurance, 401K and Voluntary Benefits
Summary:
In this contingent resource assignment, you may:
Consult on complex initiatives with broad impact and large-scale planning for Specialty Software Engineering
Review and analyze complex, multi-faceted, larger-scale, or longer-term Specialty Software Engineering challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented factors
Contribute to the resolution of complex, multi-faceted situations requiring a solid understanding of the function, policies, procedures, and compliance requirements that meet deliverables
Strategically collaborate and consult with client personnel
Required Qualifications:
Specialty Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work or consulting experience, training, military experience, education
Key Responsibilities:
Implement and operationalize modern AI-enabled data capabilities on Google Cloud to ingest, transform, and distribute data for a variety of big data apps
Leverage AI/Agentic frameworks to automate data management, governance, and data consumption capabilities - data pipelines, data quality, metadata, data compliance, etc.
Work within a matrix organization alongside principal engineers, product managers, and data engineers to roadmap, plan, and deliver key data capabilities based on priority
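The automated data-quality work described above can be illustrated with a minimal sketch. This is a hypothetical, stdlib-only example of rule-based quality checks applied as a pipeline step; the record shapes, check names, and `run_checks` helper are illustrative assumptions, not part of any actual Wells Fargo framework.

```python
# Hypothetical sketch: rule-based data-quality checks of the kind a
# Data Ops pipeline step would automate. Names and rules are illustrative.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class QualityReport:
    passed: int = 0                                   # records passing all checks
    failures: dict = field(default_factory=dict)      # check name -> failure count

def run_checks(records: list, checks: dict) -> QualityReport:
    """Apply each named check to every record and tally failures."""
    report = QualityReport()
    for rec in records:
        ok = True
        for name, check in checks.items():
            if not check(rec):
                report.failures[name] = report.failures.get(name, 0) + 1
                ok = False
        if ok:
            report.passed += 1
    return report

records = [
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": -5.0},    # fails non_negative_amount
    {"id": None, "amount": 7.0},  # fails id_present
]
checks = {
    "id_present": lambda r: r["id"] is not None,
    "non_negative_amount": lambda r: r["amount"] >= 0,
}
report = run_checks(records, checks)
```

In practice a framework like this would emit metrics and metadata rather than an in-memory report, but the shape of the automation is the same.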
Key Requirements:
• Applicants must be authorized to work for ANY employer in the U.S. This position is not eligible for visa sponsorship.
Demonstrable recent skills with AI tools such as LangChain, LangGraph/ADK, agentic frameworks, RAG, GraphRAG, and MCP to build agent-based data capabilities
Data engineering, including hands-on experience with cloud data solutions: creating and supporting Spark-based ingestion and processing
Data lakehouse architecture and design, including hands-on experience with Python, PySpark, Kafka, Airflow, Google Cloud Storage, BigQuery, Dataproc, and Cloud Composer
Hands-on experience developing data flows using Kafka, Flink, and Spark streaming
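The streaming requirement above centers on windowed transformations over event streams. As a minimal sketch, here is a tumbling-window count in plain Python, stdlib only, to illustrate the kind of aggregation typically built with Kafka plus Spark Structured Streaming or Flink; the event tuples and the `tumbling_window_counts` helper are illustrative assumptions, not a real streaming API.

```python
# Hypothetical sketch: a tumbling-window aggregation in plain Python,
# illustrating the stream transformations Kafka/Flink/Spark streaming perform.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Bucket (timestamp, key) events into fixed windows and count per key."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [
    (0, "login"), (30, "login"), (59, "click"),
    (60, "login"), (125, "click"),
]
result = tumbling_window_counts(events, window_seconds=60)
```

A real Spark Structured Streaming job would express the same logic declaratively (window on event time, group by key, count) with watermarks to handle late data.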
Desired Qualifications:
Proven experience using AI to auto-generate data-engineering code, including context engineering and prompt engineering
Deep background in cloud-based data lakes and warehouses and in automated data pipelines
Public cloud certifications such as GCP Professional Data Engineer, Azure Data Engineer, or AWS Specialty Data Analytics
Web-based UI development using React and Node.js is a plus