

Veritas Search Group
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer near Tustin, CA, with an unknown contract length and a listed day rate of $680. Candidates must have a Bachelor's degree, strong SQL skills, and experience with ETL/ELT development and cloud data platforms.
Country: United States
Currency: USD ($)
Day rate: $680
Date: May 8, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Tustin, CA
Skills detailed: #GIT #BI (Business Intelligence) #Data Mart #ETL (Extract, Transform, Load) #Data Warehouse #Databases #Java #JSON (JavaScript Object Notation) #SSRS (SQL Server Reporting Services) #Code Reviews #Data Lake #Tableau #Cloud #MS SQL (Microsoft SQL Server) #Automated Testing #GraphQL #Leadership #Security #Spark (Apache Spark) #Computer Science #SQL (Structured Query Language) #Snowflake #PySpark #Microsoft SQL #Microsoft SQL Server #ADLS (Azure Data Lake Storage) #BigQuery #Data Pipeline #Scala #Azure #Automation #Documentation #REST (Representational State Transfer) #SSAS (SQL Server Analysis Services) #Data Quality #Data Modeling #Python #ADF (Azure Data Factory) #XML (eXtensible Markup Language) #SQL Server #Microsoft Power BI #Data Governance #Data Engineering #Airflow #Databricks #dbt (data build tool)
Role description
This role requires candidates who are currently authorized to work in the U.S. without sponsorship, and C2C arrangements are not accepted. This role is onsite near Tustin, CA.
If you are interested, please send your resume to recruiters@vsearchgroup.com. Thank you!
Overview
We are seeking Data Engineers at multiple levels to help design, build, and modernize scalable data platforms across enterprise applications. These roles focus on developing production-grade data pipelines, improving cloud-based data infrastructure, and supporting analytics, reporting, operational, and business data needs.
Senior-level candidates will lead architecture and technical direction, establish best practices, and mentor junior team members. Junior and mid-level candidates will support pipeline development, data transformation, platform improvements, and day-to-day data engineering work under senior technical guidance.
Key Responsibilities
β’ Design, build, and maintain scalable ETL/ELT data pipelines
β’ Support data lakes, data warehouses, data marts, and cloud-based data platforms
β’ Transform, model, and optimize data from enterprise and transactional sources
β’ Work with tools such as Azure, ADF, ADLS, Databricks, Snowflake, BigQuery, Airflow, dbt, or similar platforms
β’ Use SQL, Python, Java, Scala, PySpark, or similar technologies to build reliable data solutions
β’ Collaborate with business, analytics, engineering, and operations teams to gather requirements and deliver solutions
β’ Support data quality, performance tuning, automation, testing, and CI/CD practices
β’ Participate in prototyping, proof-of-concept development, release planning, and delivery estimation
β’ Use Git-based workflows, including branching, merging, code reviews, and reusable code practices
β’ Senior-level: provide technical leadership, architecture guidance, best practices, and mentorship
β’ Junior/Mid-level: support pipeline development, data modeling, troubleshooting, documentation, and platform modernization efforts
Required Qualifications
β’ Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field
β’ Experience in data engineering, backend engineering, software engineering, analytics engineering, or enterprise data platforms
β’ Strong SQL skills, including querying, data modeling, transformation, and/or performance tuning
β’ Experience with ETL/ELT development and scalable data pipelines
β’ Experience with cloud data platforms, data warehouses, data lakes, or relational databases
β’ Experience with at least one data engineering language such as Python, Java, Scala, or PySpark
β’ Familiarity with APIs and data formats such as REST, GraphQL, XML, and JSON
β’ Working knowledge of Git and modern software development workflows
β’ Strong problem-solving, communication, and collaboration skills
β’ Senior-level: 5+ years of related experience with architecture, performance tuning, technical design, and mentorship
β’ Junior/Mid-level: 2+ years of related experience with strong SQL/Python fundamentals and willingness to learn modern data platforms
Preferred Qualifications
β’ Experience with Microsoft SQL Server, SSAS, SSRS, Power BI, Tableau, or similar tools
β’ Experience with Spark, Databricks, Snowflake, BigQuery, Airflow, dbt, ADF, or ADLS
β’ Experience with CI/CD, automated testing, code reviews, and data platform modernization
β’ Understanding of data governance, data quality, security, or performance best practices
Work Environment
Employees located within a commutable distance of designated hub offices may be expected to work on-site several days per week, depending on business needs and company policy.