Curate Partners

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 2–5 years of experience, focusing on GCP-based big data environments. Contract length and pay rate are unspecified. Location is Dallas, TX (preferred) or remote (U.S. only). Key skills include SQL, Python, and data pipeline management.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Accuracy #Data Quality #Batch #Data Governance #Datasets #Automation #Data Engineering #Data Pipeline #Big Data #Data Processing #GCP (Google Cloud Platform) #Security #Cloud #ETL (Extract, Transform, Load) #Scala #SQL (Structured Query Language) #Data Access #Python
Role description
IMPORTANT: Only U.S. Citizens or Green Card holders will be considered. No third-party candidates.

Job Title: Data Engineer
Experience Level: 2–5 Years
Location: Dallas, TX (preferred) | Remote (U.S. only)

Role Overview
We are seeking a Data Engineer with 2–5 years of experience who can work confidently in a GCP-based big data environment, with a strong understanding of how data moves through pipelines, how permissions are managed, and how to make changes without disrupting production systems. This role requires a thoughtful, systems-oriented engineer who understands upstream and downstream dependencies and can safely move, transform, and validate data while maintaining pipeline stability.

Key Responsibilities
• Build, maintain, and support data pipelines in Google Cloud Platform (GCP)
• Safely move, replicate, or transform data while understanding permissions, access controls, and pipeline dependencies
• Analyze existing data workflows to assess impact before making changes
• Troubleshoot data access, permission, and pipeline issues in production environments
• Use SQL to validate data accuracy, completeness, and consistency
• Use Python for data processing, automation, and pipeline support
• Develop and support batch data processing jobs using GCP Dataproc (a minimal sketch of such a job appears at the end of this description)
• Monitor pipeline performance and data quality, escalating issues as needed
• Collaborate with platform, security, and analytics teams to ensure proper data usage and governance
• Document data flows, dependencies, and operational procedures

Required Qualifications
• 2–5 years of experience in data engineering or a closely related role
• Hands-on experience with Google Cloud Platform (GCP)
• Experience working with Dataproc and big data processing frameworks
• Strong SQL skills for querying and validating large datasets
• Working knowledge of Python for data processing and automation
• Experience working in big data environments
• Understanding of data permissions, access control, and security concepts
• Ability to make changes carefully and thoughtfully in production pipelines

Preferred Qualifications
• Experience supporting production data pipelines in enterprise environments
• Familiarity with data governance or regulated data environments
• Experience working with downstream analytics or reporting consumers
• Comfortable working independently, with guidance, on complex systems

Why This Role Matters
This role helps ensure data can be moved and accessed safely and reliably without breaking existing pipelines. Thoughtful execution and attention to dependencies are critical to maintaining trust in the data platform while enabling ongoing business needs.
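As a rough illustration only, here is a minimal sketch of the kind of Dataproc-style batch job the responsibilities above describe: Python driving Spark, SQL checks for accuracy and completeness, and a guarded write so bad data never reaches downstream consumers. Every specific name in it (the bucket paths, the orders dataset, the column rules) is a hypothetical placeholder, not something specified in this posting.

```python
# Minimal sketch of a Dataproc-style batch job: read a dataset, run SQL
# data-quality checks, and publish only if the checks pass. All paths,
# table names, and rules below are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-quality-check").getOrCreate()

# Hypothetical input: a Parquet dataset staged in Cloud Storage.
orders = spark.read.parquet("gs://example-bucket/staging/orders/")
orders.createOrReplaceTempView("orders")

# SQL validation: count rows that violate basic accuracy/completeness rules.
violations = spark.sql("""
    SELECT
        SUM(CASE WHEN order_id IS NULL THEN 1 ELSE 0 END) AS missing_ids,
        SUM(CASE WHEN amount < 0 THEN 1 ELSE 0 END)       AS negative_amounts,
        COUNT(*)                                          AS total_rows
    FROM orders
""").first()

if violations["missing_ids"] or violations["negative_amounts"]:
    # Fail loudly and escalate rather than silently publishing bad data.
    raise ValueError(f"Data-quality check failed: {violations}")

# Publish only after checks pass, keeping downstream pipelines stable.
orders.write.mode("overwrite").parquet("gs://example-bucket/validated/orders/")

spark.stop()
```

Failing the job before the write keeps a bad batch out of the published path, which matches the posting's emphasis on making changes without disrupting production systems or downstream consumers.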