GCP Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a GCP Data Engineer in Cleveland, OH or Buffalo, NY. Contract duration and pay rate are unspecified. It requires 9+ years of experience and strong skills in ETL, GCP, Python, and SQL.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
August 23, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Cleveland, OH 44114 or Buffalo, NY
🧠 - Skills detailed
#Data Quality #Data Accuracy #Data Pipeline #Big Data #Compliance #Data Security #Data Engineering #Distributed Computing #Programming #Cloud #SQL (Structured Query Language) #DevOps #Spark (Apache Spark) #Apache Spark #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Python #Data Integration #Data Architecture #Data Privacy #Java #Agile #Scala #Security #Computer Science #Database Design #Airflow #Data Warehouse #Data Profiling
Role description
Role: GCP Data Engineer
Location: Cleveland, OH 44114 or Buffalo, NY (100% Onsite)
Experience Level: 9+ years
Contract

What we ask for:
We are seeking a highly motivated and skilled Data Engineer with a strong background in data architecture, ETL processes, and data warehousing. As a Data Engineer, you will play a critical role in designing, developing, and maintaining our data pipelines, ensuring the availability of high-quality, reliable data for analytics and reporting.

Responsibilities:
• Design, implement, and maintain scalable ETL pipelines to extract, transform, and load data from various sources into our data warehouse.
• Apply strong GCP platform engineering experience to the design and operation of data pipelines.
• Develop efficient and robust data integration processes that ensure data accuracy, consistency, and reliability.
• Perform data profiling and analysis to identify data quality issues and implement corrective measures.
• Optimize data pipelines for performance, scalability, and efficiency, considering data volume and query complexity.
• Implement data security and access controls to ensure data privacy and compliance with regulations.
• Monitor and troubleshoot data pipelines to identify and resolve issues in a timely manner.
• Collaborate with the DevOps team to deploy and manage data engineering solutions in a cloud-based environment.
• Document data engineering processes, data flow diagrams, and technical specifications.
• Stay current with industry trends, best practices, and emerging technologies in data engineering and analytics.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• Strong programming skills in Python or Java, along with Spark and SQL.
• Proven experience as a Data Engineer or in a similar role, with a track record of designing and implementing data pipelines.
• Proficiency in SQL for data profiling.
• Proficiency in ETL tools and frameworks (e.g., Apache Spark, Flink, Airflow).
• Familiarity with data warehousing concepts, dimensional modeling, and database design principles.
• Experience with Google Cloud Platform (GCP) platform engineering.
• Understanding of Big Data technologies and distributed computing concepts.
• Strong problem-solving skills and attention to detail.
• Excellent communication and collaboration skills.
• Ability to work in an agile, fast-paced environment.