

Voto Consulting LLC
Cloud Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Cloud Data Platform Engineer; contract length and pay rate are unlisted. It requires 5+ years in data engineering, expertise in AI tools and cloud solutions, with preferred certifications in GCP or Azure. Work location is hybrid, 3 days/week in-office.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
April 24, 2026
Duration
Unknown
Location
Hybrid
Contract
W2 Contractor
Security
Unknown
Location detailed
Irving, TX
Skills detailed
#Big Data #Data Pipeline #Data Quality #AWS (Amazon Web Services) #AI (Artificial Intelligence) #Data Engineering #Compliance #Data Lake #PySpark #ETL (Extract, Transform, Load) #Azure Cloud #Agile #GCP (Google Cloud Platform) #Azure #Data Management #Public Cloud #Airflow #React #LangChain #Python #Security #Data Lakehouse #Google Cloud Storage #Automation #Cloud #Migration #Spark (Apache Spark) #Metadata #Data Processing #Kafka (Apache Kafka) #Storage #BigQuery
Role description
W2 only
We are currently seeking a Lead Cloud Data Platform Engineer to apply deep technical skills to create data products, develop AI-based automation tools, and build a world-class cloud analytics capability for Wells Fargo's Cyber Security Data Ecosystem on a hybrid cloud.
About the Role
The successful candidate will continually innovate, pioneer the use of new technologies, and drive their adoption among a team of talented data engineers. You will be an integral part of our migration from on-premises systems to Google Cloud. You will be a vital member of an agile team, helping to lead the design and hands-on implementation of modern data processing capabilities. This is a visible role that allows you to share your knowledge and skills with other developers and product teams in a collaborative environment.
Responsibilities:
• Implement and operationalize modern AI-enabled data capabilities on Google Cloud to ingest, transform, and distribute data for a variety of big data applications
• Leverage AI/agentic frameworks to automate data management, governance, and data consumption capabilities: data pipelines, data quality, metadata, data compliance, etc.
• Work within a matrix organization with principal engineers, product managers, and data engineers to roadmap, plan, and deliver key data capabilities by priority
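To illustrate the kind of data-quality automation the responsibilities above describe, here is a minimal, self-contained sketch in plain Python. The names (`run_checks`, `QualityReport`, the row schema, and the rules) are invented for this example; they are not Wells Fargo's or the platform's actual tooling.

```python
# Illustrative sketch only: a rule-based data-quality check of the kind an
# AI-assisted pipeline might generate and run. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    total: int = 0
    failures: dict = field(default_factory=dict)  # rule name -> failing row count

def run_checks(rows, rules):
    """Apply each named predicate to every row and count the failures."""
    report = QualityReport(total=len(rows))
    for name, predicate in rules.items():
        report.failures[name] = sum(1 for row in rows if not predicate(row))
    return report

rows = [
    {"event_id": "a1", "bytes": 512},
    {"event_id": "", "bytes": -3},  # fails both rules below
]
rules = {
    "event_id_present": lambda r: bool(r["event_id"]),
    "bytes_non_negative": lambda r: r["bytes"] >= 0,
}
report = run_checks(rows, rules)
```

In practice a check like this would run as a task inside an orchestrated pipeline (e.g., Airflow or Cloud Composer), with the rules themselves proposed or generated by an AI agent and reviewed by an engineer.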
Qualifications:
• Recent, demonstrable skills using AI tools such as LangChain, LangGraph/ADK, agentic frameworks, RAG, GraphRAG, and MCP to build agent-based data capabilities
• 5+ years of experience in data engineering, including hands-on experience with cloud data solutions: creating and supporting Spark-based ingestion and processing
• 3+ years of experience with data lakehouse architecture and design, including hands-on experience with Python, PySpark, Kafka, Airflow, Google Cloud Storage, BigQuery, Dataproc, and Cloud Composer
• Hands-on experience developing data flows using Kafka, Flink, and Spark Streaming
Required Skills:
• Proven experience using AI to auto-generate data-engineering code, plus context engineering and prompt engineering
• Deep background in cloud-based data lakes and warehouses and automated data pipelines
Preferred Skills:
• Public cloud certifications such as GCP Professional Data Engineer, Azure Data Engineer, or AWS Specialty Data Analytics
• Web-based UI development using React and Node.js is a plus
W2 only. Hybrid (3 days/week in office).






