

Openkyber
Junior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Junior Data Engineer contract position based in Santa Clara, CA, offering $35-$40/hr. It requires 3+ years in data engineering, strong SQL skills, basic Python scripting, and experience with ETL processes and data pipeline monitoring.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
320
🗓️ - Date
March 13, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Alaska
🧠 - Skills detailed
#Documentation #Data Processing #Data Management #SQL (Structured Query Language) #Data Governance #ETL (Extract, Transform, Load) #SQL Queries #Databricks #Data Engineering #Computer Science #Snowflake #Data Quality #Python #Scripting #Monitoring #Automation #Data Pipeline #Security #Batch
Role description
We are looking for a Junior Data Engineer - Remote / Telecommute for our client in Santa Clara, CA.
Job Title: Junior Data Engineer - Remote / Telecommute
Job Location: Santa Clara, CA
Job Type: Contract
Job Description:
Pay Range: $35/hr - $40/hr
The Junior Data Engineer will support data operations and engineering activities including monitoring data pipelines, performing operational data loads, troubleshooting data issues, and assisting with system maintenance. The role also involves supporting system audits, documentation, and operational tasks to ensure the reliability and stability of data systems.
Requirements/Must Have:
Minimum three years of experience in data engineering, data operations, or ETL support roles.
Strong ability to write and troubleshoot SQL queries.
Basic Python scripting skills for automation and data validation (a short illustrative sketch follows this list).
Understanding of data pipelines, batch processing, and ETL concepts.
Strong problem-solving and analytical skills.
Ability to follow operational processes and handle support tasks efficiently.
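To make the scripting expectation concrete, here is a minimal Python sketch of the kind of load-validation check the role describes. It is illustrative only: the warehouse.db file, the daily_orders table, the load_date column, and the row-count threshold are assumptions for the example, not details of the client's environment.

import sqlite3  # stand-in driver; the client's actual database and driver may differ

def validate_daily_load(conn, table, expected_min_rows):
    """Check that today's batch load wrote at least the expected number of rows."""
    # Table name is interpolated directly only because this is a sketch with trusted input.
    cur = conn.execute(f"SELECT COUNT(*) FROM {table} WHERE load_date = DATE('now')")
    row_count = cur.fetchone()[0]
    if row_count < expected_min_rows:
        print(f"VALIDATION FAILED: {table} has {row_count} rows, expected at least {expected_min_rows}")
        return False
    print(f"VALIDATION OK: {table} loaded {row_count} rows")
    return True

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")  # hypothetical local database used only for illustration
    validate_daily_load(conn, "daily_orders", expected_min_rows=1000)

In a real environment the same check would typically run after each scheduled load and feed its result into the team's monitoring or alerting process.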
Experience:
Experience supporting data pipelines and ETL processes.
Experience performing operational data loads and validating data processing results.
Experience troubleshooting data issues and resolving production defects.
Experience supporting operational systems and monitoring data workflows.
Experience maintaining technical documentation and operational procedures.
Responsibilities:
Support correction of production defects and perform ongoing system maintenance activities.
Execute and validate routine data loads and ensure successful processing of data workflows.
Assist with operational tasks such as user account creation and access-related support when required.
Participate in system audits, testing activities, and security or vulnerability remediation tasks under supervision.
Maintain documentation including runbooks and knowledge articles.
Document fixes and provide clear operational handoffs across teams and shifts.
Support operational mandates and special projects assigned within the service scope.
Monitor data processing jobs and assist in troubleshooting failed data pipelines (see the monitoring sketch below).
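As an illustration of the monitoring responsibility above, the following Python sketch polls a job-status table for failed runs. The job_runs table, its columns, and the FAILED status value are assumptions made for the example, not the client's actual schema.

import sqlite3
from datetime import datetime, timedelta, timezone

def report_failed_jobs(conn, lookback_hours=24):
    """Print and return pipeline runs marked FAILED within the lookback window."""
    since = (datetime.now(timezone.utc) - timedelta(hours=lookback_hours)).isoformat()
    failed = conn.execute(
        "SELECT job_name, started_at, error_message "
        "FROM job_runs WHERE status = 'FAILED' AND started_at >= ?",
        (since,),
    ).fetchall()
    for job_name, started_at, error_message in failed:
        print(f"FAILED: {job_name} started {started_at}: {error_message}")
    return failed

In practice the output of a check like this would feed whatever alerting or ticketing process the team already uses.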
Skills:
SQL query development and troubleshooting.
Python scripting for automation and data validation.
Data pipeline monitoring and ETL support.
Data quality validation and troubleshooting (example checks follow this list).
Incident handling and operational support processes.
Documentation and knowledge management.
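To illustrate the data-quality skill listed above, here is a small Python sketch that runs a couple of SQL checks and reports how many offending rows each one finds. The customers table and its columns are hypothetical stand-ins for whatever datasets the role actually covers.

import sqlite3

# Hypothetical checks: each maps a check name to SQL that returns the offending rows.
QUALITY_CHECKS = {
    "duplicate_customer_ids": (
        "SELECT customer_id, COUNT(*) FROM customers "
        "GROUP BY customer_id HAVING COUNT(*) > 1"
    ),
    "missing_email": "SELECT customer_id FROM customers WHERE email IS NULL",
}

def run_quality_checks(conn):
    """Run each check and report how many offending rows it finds."""
    results = {}
    for name, sql in QUALITY_CHECKS.items():
        bad_rows = conn.execute(sql).fetchall()
        results[name] = len(bad_rows)
        print(f"{name}: {'OK' if not bad_rows else str(len(bad_rows)) + ' issue(s)'}")
    return results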
Qualification and Education: Bachelor's degree in Computer Science, Information Technology, Data Engineering, or a related field.
Preferred Skills:
Exposure to modern data platforms such as Databricks or Snowflake.
Familiarity with workflow orchestration tools and data pipeline management.
Experience with ticketing or IT service management processes.
Understanding of data governance and data management principles.
For applications and inquiries, contact: hirings@openkyber.com




