Revature

Big Data Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer with an unspecified contract length and pay rate. Key skills include GCP, BigQuery, SQL, PySpark, and ETL tools such as Qlik Data Integration or Fivetran.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Transformations #ETL (Extract, Transform, Load) #Data Quality #Qlik #BigQuery #Dataflow #Big Data #Datasets #Scala #Fivetran #PySpark #Data Engineering #Data Processing #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Spark (Apache Spark) #Cloud #Data Integration #Data Pipeline
Role description
As a GCP Data Pipeline Developer & BigQuery Data Engineer, you will design, develop, and maintain data pipelines on Google Cloud Platform. The role centers on BigQuery: analyzing and optimizing large datasets, and implementing ETL processes that safeguard data quality and integrity. Proficiency in foundational GCP services, BigQuery, data engineering principles, SQL, and PySpark is essential; experience with ETL tools such as Qlik Data Integration or Fivetran is a valuable asset.

You will work on complex data projects that demand a deep understanding of cloud-based data processing and analysis. Strong SQL skills are needed to query and manipulate data within BigQuery, while PySpark experience enables advanced data transformations and analytics. By applying these tools effectively, you will design scalable data solutions, streamline data integration processes, and help drive data-driven decision-making across the organization.
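To give a sense of the SQL work described above, here is an illustrative sketch (not part of the posting) of a common BigQuery task: deduplicating a table with `ROW_NUMBER()`. All table and column names (`myproject.raw.events`, `event_id`, `ingested_at`) are hypothetical.

```python
def dedupe_query(table: str, key: str, order_col: str) -> str:
    """Build a BigQuery deduplication query keeping the latest row per key.

    This only assembles SQL text; all identifiers are caller-supplied
    placeholders for illustration.
    """
    return f"""
SELECT * EXCEPT (rn)
FROM (
  SELECT *, ROW_NUMBER() OVER (
    PARTITION BY {key} ORDER BY {order_col} DESC
  ) AS rn
  FROM `{table}`
)
WHERE rn = 1
""".strip()

# Hypothetical usage: keep the most recently ingested row per event_id
sql = dedupe_query("myproject.raw.events", "event_id", "ingested_at")
```

In practice such a query would be run via the BigQuery console or a client library; the pattern itself (window function plus `rn = 1` filter) is a standard dedup idiom.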
Responsibilities:
- Develop and maintain GCP data pipelines
- Design and optimize BigQuery data structures
- Utilize SQL and PySpark for data processing
- Collaborate with team members on data engineering projects
- Implement ETL processes using tools like GCP Dataflow, Qlik Data Integration, or Fivetran
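The extract-transform-load flow behind these responsibilities can be sketched as follows. This is a pure-Python stand-in for illustration only; a production pipeline would read from GCS or a source system and write to BigQuery (via PySpark or Dataflow), and every name here is hypothetical.

```python
from typing import Iterable


def extract(rows: Iterable[dict]) -> list[dict]:
    # Extract: in production, read from a source system (e.g. GCS, a DB).
    return list(rows)


def transform(rows: list[dict]) -> list[dict]:
    # Transform: drop rows failing a basic quality check and
    # normalize the (hypothetical) "amount" field to a float.
    return [
        {**r, "amount": float(r["amount"])}
        for r in rows
        if r.get("amount") is not None
    ]


def load(rows: list[dict], sink: list) -> int:
    # Load: in production, write to a BigQuery table; here, an in-memory list.
    sink.extend(rows)
    return len(rows)


sink: list = []
loaded = load(transform(extract([
    {"id": 1, "amount": "10.5"},
    {"id": 2, "amount": None},  # fails the data-quality check, dropped
])), sink)
```

The three-stage split mirrors how such pipelines are usually structured: each stage is independently testable, and the quality check in `transform` is where data-integrity rules live.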