

Qualis1 Inc.
Senior Data Engineer – Level III
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer – Level III in Pleasanton, CA, for a 6-month contract. Requires 8–12+ years of IT experience, with 5+ in Data Engineering, and expertise in AWS Glue, PySpark, Redshift, and S3.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
October 29, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Pleasanton, CA
-
🧠 - Skills detailed
#Data Pipeline #Data Quality #Data Warehouse #SQL (Structured Query Language) #Data Processing #PySpark #Data Governance #Data Modeling #Data Engineering #Lambda (AWS Lambda) #Compliance #GitHub #Code Reviews #Spark (Apache Spark) #Data Management #Cloud #Automation #Data Storage #Redshift #Snowflake #Jenkins #dbt (data build tool) #AWS Lambda #Data Security #ML (Machine Learning) #Data Ingestion #Version Control #Scala #Debugging #GIT #Kafka (Apache Kafka) #Data Architecture #AWS Glue #Computer Science #Programming #Security #Data Science #"ETL (Extract #Transform #Load)" #Amazon Redshift #Airflow #Athena #S3 (Amazon Simple Storage Service) #Storage #AWS (Amazon Web Services) #Metadata #Schema Design
Role description
Job Title: Senior Data Engineer – Level III
Location: Pleasanton, CA (Onsite)
Duration: 6 Months (Contract)
Experience Level: 8–12+ Years
Position Overview:
We are seeking a highly skilled Senior Data Engineer (Level III) with strong hands-on expertise in AWS data technologies (Glue, Redshift, and S3) and in PySpark. The ideal candidate will have deep experience designing and developing data pipelines, ETL frameworks, and data models that power large-scale analytics and reporting systems.
This role requires a mix of technical depth, architectural thinking, and practical engineering skill to deliver scalable and efficient data solutions in a cloud-first environment.
Key Responsibilities:
• Design, develop, and optimize ETL/ELT data pipelines using AWS Glue, PySpark, and AWS Lambda functions (an illustrative sketch follows this list).
• Build and maintain data models and data warehouses using Amazon Redshift and S3.
• Develop and manage data ingestion frameworks covering a variety of structured and unstructured data sources.
• Implement data transformation, cleansing, and aggregation processes to support analytics and machine learning workloads.
• Collaborate with data scientists, analysts, and architects to define data architecture and integration standards.
• Leverage AWS services such as Glue Catalog, Athena, Redshift Spectrum, and CloudFormation for scalable data solutions.
• Ensure data security, compliance, and governance best practices across all environments.
• Participate in code reviews, establish Git branching strategies, and enforce version control discipline.
• Tune performance and optimize cost for Redshift clusters and Glue jobs.
• Contribute to continuous improvement efforts in data quality, pipeline automation, and CI/CD for data engineering.
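As a hedged illustration of the pipeline work listed above, the following is a minimal sketch of an AWS Glue PySpark job that reads a catalogued source table, applies basic cleansing and aggregation, and writes partitioned Parquet back to S3. The database, table, bucket, and column names (raw_db, orders, example-analytics-bucket, order_id, order_ts, region, amount) are hypothetical placeholders, not details of this role's actual environment.

# Minimal AWS Glue PySpark job sketch (placeholder names throughout).
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog (database/table names are placeholders).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
).toDF()

# Example cleansing and aggregation: deduplicate orders and roll up daily totals by region.
daily_totals = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write partitioned Parquet to S3 for downstream consumption.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-analytics-bucket/curated/daily_totals/"
)

job.commit()

Writing the curated layer as partitioned Parquet on S3 keeps it queryable through Athena or Redshift Spectrum without first loading it into a cluster, which is one common pattern for the Glue/Redshift/S3 stack named in this role.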
Required Skills & Experience:
• 8–12+ years of total IT experience with 5+ years in Data Engineering.
• Strong hands-on experience with:
  • AWS Glue (ETL jobs, crawlers, catalog management)
  • PySpark (distributed data processing and transformation)
  • Amazon Redshift (data warehousing, schema design, performance tuning)
  • Amazon S3 (data storage and partitioning strategies)
• Proven background in data modeling — both dimensional and relational.
• Proficient in Git for version control and collaborative development.
• Strong SQL programming and debugging skills.
• Experience with AWS Lambda, Athena, CloudWatch, and Step Functions is a plus.
• Excellent problem-solving, analytical, and communication skills.
Preferred Skills:
• Exposure to Airflow, dbt, or Snowflake.
• Experience implementing data governance frameworks or metadata management.
• Familiarity with CI/CD pipelines for data projects (using Jenkins, GitHub Actions, or CodePipeline).
• Experience supporting real-time streaming data using Kinesis or Kafka.
Education:
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.