VARITE INC

Data Engineer - III (USC/GC on W2 Only)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer - III, lasting 12 months, with a pay rate of $85-120/hr on W2. It requires 2+ years of experience with Databricks and Collibra, 3+ years with Python and PySpark, and data engineering experience in AWS. The role is hybrid, located in San Francisco, CA.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
960
-
🗓️ - Date
December 6, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
San Francisco, CA
-
🧠 - Skills detailed
#Python #Automation #Security #PySpark #Collibra #S3 (Amazon Simple Storage Service) #Data Pipeline #Strategy #Data Science #Airflow #Cloud #Consulting #Databricks #Snowflake #Unit Testing #Redshift #Jupyter #Scala #Agile #NoSQL #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Data Warehouse #Spark (Apache Spark) #Monitoring #Computer Science #Data Engineering #Big Data #Databases
Role description
VARITE is looking for a qualified Data Engineer - III for one of its clients.

WHAT THE CLIENT DOES:
A U.S.-based regional bank and financial services company that provides a wide range of banking products and services, including personal banking, business banking, mortgages, wealth management, and investment services.

WHAT WE DO:
Established in 2000, VARITE is an award-winning minority business enterprise providing global consulting and staffing services to Fortune 1000 companies and government agencies. With 850+ global consultants, VARITE is committed to delivering excellence to its customers by leveraging its global experience and expertise in providing comprehensive scientific, engineering, technical, and non-technical staff augmentation and talent acquisition services.

Note: We only work on W2; we do not work on C2C/1099 and do not provide visa sponsorship. Kindly refrain from applying if you do not meet these criteria. Thank you for understanding!

HERE'S WHAT YOU'LL DO:
Job Title: Data Engineer - III
Duration: 12 Months
Pay Rate: USD 85.00/hr - USD 120.00/hr on W2
Work Mode: Hybrid
Location: San Francisco, CA 94105

Job Description:
Role Overview:
• As a Data Engineer, this contract worker will be responsible for collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into actionable insights. They will work across multiple platforms to ensure that data pipelines are scalable, repeatable, and secure, capable of serving multiple users.

Key Responsibilities:
• Design, develop, and maintain robust and efficient data pipelines to ingest, transform, catalog, and deliver curated, trusted, quality data from disparate sources into our Common Data Platform.
• Actively participate in Agile rituals and follow Scaled Agile processes as set forth by the CDP Program team.
• Deliver high-quality data products and services following SAFe Agile practices.
• Proactively identify and resolve issues with data pipelines and analytical data stores.
• Deploy monitoring and alerting for data pipelines and data stores, implementing auto-remediation where possible to ensure system availability and reliability.
• Employ a security-first, testing, and automation strategy, adhering to data engineering best practices.
• Collaborate with cross-functional teams, including product management, data scientists, analysts, and business stakeholders, to understand their data requirements and provide them with the necessary infrastructure and tools.
• Keep up with the latest trends and technologies, evaluating and recommending new tools, frameworks, and technologies to improve data engineering processes and efficiency.

Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent experience.
• 2+ years' experience with tools such as Databricks, Collibra, and Starburst.
• 3+ years' experience with Python and PySpark.
• Experience using Jupyter notebooks, including coding and unit testing.
• Recent accomplishments working with relational and NoSQL data stores, methods, and approaches (star schema, dimensional modeling).
• 2+ years of experience with a modern data stack (object stores such as S3, Spark, Airflow, lakehouse architectures, real-time databases) and cloud data warehouses such as Redshift and Snowflake.
• Overall data engineering experience across traditional ETL and Big Data, on-premises or in the cloud.
• Data engineering experience in AWS (any CFS2/EDS), highlighting the services/tools used.
• Experience building end-to-end data pipelines to ingest and process unstructured and semi-structured data using Spark architecture.

Additional Requirements:
• This position requires access to confidential supervisory information, which is limited to "Protected Individuals." Protected Individuals include, but are not limited to, U.S. citizens and U.S. nationals, U.S. permanent residents who are not yet eligible to apply for naturalization, and U.S. permanent residents who have applied for naturalization within six months of being eligible to do so or who will sign a declaration of intent to apply for naturalization before they begin employment.
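To give candidates a concrete sense of the work described above (unit-testable transforms over semi-structured data, with issues in bad records identified and handled), here is a minimal illustrative sketch in plain Python rather than PySpark, so it runs anywhere. All names and fields ("id", "user", "amount") are hypothetical assumptions for illustration, not part of the posting.

```python
import json


def flatten_record(raw: str) -> dict:
    """Parse one semi-structured JSON record into a flat, curated row.

    Hypothetical example: the field names used here are illustrative
    assumptions, not taken from the job posting.
    """
    rec = json.loads(raw)
    return {
        "id": rec["id"],
        "user_name": rec.get("user", {}).get("name"),
        "amount_usd": float(rec.get("amount", 0)),
    }


def run_pipeline(raw_records):
    """Apply the transform to every record, skipping malformed ones.

    In a real pipeline the skipped records would be logged and alerted on,
    per the monitoring/alerting responsibility above.
    """
    rows = []
    for raw in raw_records:
        try:
            rows.append(flatten_record(raw))
        except (json.JSONDecodeError, KeyError, ValueError, TypeError):
            continue
    return rows


# A small unit-test-style check, as one might run in a Jupyter notebook:
sample = ['{"id": 1, "user": {"name": "a"}, "amount": "9.5"}', "not json"]
rows = run_pipeline(sample)
assert rows == [{"id": 1, "user_name": "a", "amount_usd": 9.5}]
```

The same shape carries over to PySpark: the pure transform function stays unit-testable on its own, and the pipeline wrapper becomes a DataFrame operation.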