

BigQuery Data Engineer | W2 | 10 Years+ | GCP
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a BigQuery Data Engineer with 10+ years of experience, focusing on GCP. It is a remote W2 contract position; the pay rate is not listed. Key skills include Python, SQL, and data pipeline development, specifically with BigQuery.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
May 17, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Science #Monitoring #Kafka (Apache Kafka) #Snowflake #Cloud #Data Pipeline #Migration #Data Processing #Delta Lake #GCP (Google Cloud Platform) #ETL (Extract, Transform, Load) #Airflow #Dremio #Data Engineering #Java #Apache Iceberg #BigQuery #Databases #NoSQL #Datasets #Scala #Git #dbt (data build tool) #Logging #Spark (Apache Spark) #Python #SQL (Structured Query Language) #Version Control #AWS (Amazon Web Services)
Role description
Job Description:
Job Title: Senior Data Engineer
Location: Remote
Description
We are seeking a hands-on Data Engineer to join our team and contribute to critical backend components that are being migrated to a modern cloud environment (AWS or GCP). This role involves close collaboration with the architecture team to design and develop scalable, high-performance data pipelines and backend services to support advanced analytics and data science initiatives.
Key Responsibilities:
• Support cloud migration of backend and data components (AWS/GCP)
• Collaborate with architects to design robust systems and integrations
• Design, develop, and maintain data pipelines and ELT workflows (a minimal sketch follows this list)
• Implement solutions enabling Data Scientists to access, transform, and analyze data
• Write clean, efficient code in Python, with occasional Java involvement
• Work with SQL/NoSQL databases and build scalable data models
• Engage with CI/CD tools, monitoring systems, and infrastructure
• Ensure reliability and performance in production environments
• Troubleshoot and debug data pipeline and infrastructure issues
• Translate business needs into reliable datasets and models
• Follow best practices in modern data engineering and cloud development
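To make the pipeline and ELT responsibilities concrete, here is a minimal Python sketch — not part of the posting — of the kind of task involved: loading a CSV from Cloud Storage into a BigQuery staging table, then transforming it in-warehouse. The project, dataset, and bucket names are illustrative placeholders.

```python
# Minimal ELT sketch: load a raw CSV into BigQuery, then transform in SQL.
# All project, dataset, and bucket names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

# Extract + Load: ingest a CSV from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/events/2025-05-17.csv",
    "my-analytics-project.staging.raw_events",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # block until the load completes

# Transform: aggregate inside the warehouse (the "T" in ELT).
client.query(
    """
    CREATE OR REPLACE TABLE `my-analytics-project.marts.daily_event_counts` AS
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM `my-analytics-project.staging.raw_events`
    GROUP BY event_date, event_type
    """
).result()

print(f"Loaded {load_job.output_rows} rows and rebuilt daily_event_counts.")
```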
Must-Have Qualifications:
• 10+ years of experience in data engineering or backend development
• Strong hands-on coding experience in Python
• Experience with cloud platforms – GCP preferred, AWS acceptable
• Hands-on experience with BigQuery (a query sketch follows this list)
• Strong SQL skills; experience with NoSQL databases
• Familiarity with distributed data processing tools like Apache Spark
• Experience integrating with backend/data services
• Solid understanding of CI/CD, version control (Git), and testing practices
• Strong analytical, communication, and time management skills
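As a hedged illustration of the BigQuery and SQL expectations above — again, not from the posting — the following sketch runs a parameterized query with the Python client; the table and column names are invented.

```python
# Parameterized BigQuery query via the Python client.
# Table and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT user_id, COUNT(*) AS order_count
    FROM `my-analytics-project.marts.orders`
    WHERE order_date >= @start_date
    GROUP BY user_id
    ORDER BY order_count DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2025-01-01"),
    ]
)

# Rows come back as lightweight objects with attribute access.
for row in client.query(sql, job_config=job_config).result():
    print(row.user_id, row.order_count)
```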
Nice-to-Have Skills:
• Experience with dbt (data build tool) or BigQuery Dataform
• Familiarity with Java (basic to intermediate level)
• Experience with Snowflake, Dremio, or similar data platforms
• Knowledge of Apache Iceberg, Delta Lake, or open table formats
• Familiarity with orchestration tools like Airflow, Prefect, or Dagster (a DAG sketch follows this list)
• Exposure to real-time data tools (e.g., Kafka, Spark Streaming)
• Experience with monitoring and logging tools in cloud environments
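Finally, a minimal sketch of the orchestration tooling named in the nice-to-have list, assuming Airflow 2.x with the Google provider package installed; the DAG id, schedule, and SQL are placeholders.

```python
# Minimal Airflow 2.x DAG: rebuild a BigQuery mart once a day.
# DAG id, schedule, and SQL are illustrative placeholders; assumes the
# apache-airflow-providers-google package is installed.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_event_counts",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    BigQueryInsertJobOperator(
        task_id="rebuild_daily_event_counts",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `my-analytics-project.marts.daily_event_counts` AS
                    SELECT event_date, event_type, COUNT(*) AS event_count
                    FROM `my-analytics-project.staging.raw_events`
                    GROUP BY event_date, event_type
                """,
                "useLegacySql": False,
            }
        },
    )
```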