

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Burbank, CA (Hybrid) for 4 months at $50 - $59/hr. Requires 3-5 years of experience with SQL, Python, and AWS services. Must be a U.S. Citizen, Green Card holder, or otherwise authorized to work in the U.S.
Country: United States
Currency: $ USD
Day rate: 472
Date discovered: September 12, 2025
Project duration: 3 to 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Burbank, CA
Skills detailed:
#ETL (Extract, Transform, Load) #Data Pipeline #Batch #Compliance #Data Architecture #IAM (Identity and Access Management) #Lambda (AWS Lambda) #Data Catalog #Databricks #Monitoring #SQL (Structured Query Language) #Airflow #Python #Security #Data Science #Datasets #Data Governance #AWS (Amazon Web Services) #Snowflake #Cloud #Data Quality #Data Engineering #S3 (Amazon Simple Storage Service) #Informatica #Redshift #Agile #Debugging
Role description
Title: Data Engineer
Location: Burbank, CA - Hybrid
Duration: 4 months to start
Compensation: $50 - $59/hr.
Work Requirements: U.S. Citizen, Green Card holder, or authorized to work in the U.S.
What We Do/Project
The Data Engineer is an integral member of the Platform Pod, focused on building, maintaining, and optimizing data pipelines that deliver trusted data to product pods, analysts, and data scientists. This role works closely with the Senior Data Engineer, Data Architect, and Cloud Architect to implement pipelines aligned with enterprise standards and program goals.
Job Responsibilities / Typical Day in the Role
Build & Maintain Pipelines
• Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions).
• Write efficient SQL and Python scripts for ingestion, transformation, and enrichment.
• Monitor pipeline health, troubleshoot issues, and ensure SLAs for data freshness.
Support Data Architecture & Models
• Implement data models defined by architects into physical schemas.
• Contribute to pipeline designs that align with canonical and semantic standards.
• Collaborate with application pods to deliver pipelines tailored to product features.
Ensure Data Quality & Governance
• Apply validation rules and monitoring to detect and surface data quality issues.
• Tag, document, and register new datasets in the enterprise data catalog.
• Follow platform security and compliance practices (e.g., Lake Formation, IAM).
Collaborate in Agile Pods
• Actively participate in sprint ceremonies and backlog refinement.
• Work closely with application developers, analysts, and data scientists to clarify requirements and unblock dependencies.
• Promote reuse of pipelines and shared services across pods.
Must Have Skills / Requirements
1. Experience as a Data Engineer or in a related role.
   a. 3-5 years of experience
2. Hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
   a. 3-5 years of experience
3. Familiarity with orchestration tools (Airflow, Step Functions) and CI/CD workflows.
   a. 3-5 years of experience
Nice to Have Skills / Preferred Requirements
1. Proven ability to optimize pipelines for both batch and streaming use cases.
2. Knowledge of data governance practices, including lineage, validation, and cataloging.
3. Strong collaboration and mentoring skills; ability to influence across pods and domains.
Soft Skills:
1. Collaborative mindset: Willingness to work in agile pods and contribute to cross-functional outcomes.
Technology Requirements:
1. Hands-on experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
2. Familiarity with orchestration tools (Airflow, Step Functions) and CI/CD workflows.
3. Exposure to modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
4. Strong problem-solving and debugging skills for pipeline operations.
Years of experience:
• 3-5 years of experience as a data engineer or in a related role.
Our benefits package includes:
• Comprehensive medical benefits
• Competitive pay
• 401(k) retirement plan
• …and much more!
About INSPYR Solutions
Technology is our focus and quality is our commitment. As a national expert in delivering flexible technology and talent solutions, we strategically align industry and technical expertise with our clients' business objectives and cultural needs. Our solutions are tailored to each client and include a wide variety of professional services, project, and talent solutions. By always striving for excellence and focusing on the human aspect of our business, we work seamlessly with our talent and clients to match the right solutions to the right opportunities. Learn more about us at inspyrsolutions.com.
INSPYR Solutions provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, INSPYR Solutions complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.