

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown" and a pay rate of "unknown". Required skills include GCP, Spark, BigQuery, SQL, and Scala/Python. A GCP Certification and a Master's Degree in a related field are mandatory.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
July 26, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Charlotte, NC
Skills detailed
#Data Engineering #Data Pipeline #Teradata #Data Science #BigQuery #HBase #Hadoop #Compliance #Agile #Data Modeling #Data Quality #Data Governance #Big Data #Data Integrity #REST (Representational State Transfer) #UAT (User Acceptance Testing) #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Monitoring #Data Architecture #Computer Science #Data Management #Migration #Security #Cloud #GCP (Google Cloud Platform) #Python #Spark (Apache Spark) #Scala #Logging #REST API #BI (Business Intelligence) #Informatica BDM (Big Data Management) #Data Ingestion
Role description
We're seeking a Senior Data Engineer with strong experience in GCP, Spark, BigQuery, SQL, and Scala/Python to support the migration of data workloads from an on-premises Hadoop cluster to GCP. This role will involve building scalable, high-performance pipelines, designing robust data models, ensuring data quality, and supporting BI/reporting needs in a highly collaborative environment.
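For illustration only, a minimal PySpark sketch of the kind of Hadoop-to-BigQuery pipeline work this role involves, assuming Spark on DataProc with the spark-bigquery connector; the table names, target project/dataset, and staging bucket are placeholders, not details from this posting:

```python
# Sketch: read an existing Hive table and land it in BigQuery from a DataProc job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-to-bigquery-migration")
    .enableHiveSupport()          # read tables registered in the Hive metastore
    .getOrCreate()
)

# Extract: pull the source table from the on-prem-style Hive warehouse (placeholder name)
orders = spark.table("warehouse.orders")

# Transform: light cleanup and one derived column as an example
orders_clean = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write to BigQuery via the spark-bigquery connector (placeholder project/bucket)
(
    orders_clean.write
    .format("bigquery")
    .option("table", "my-project.analytics.orders")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("overwrite")
    .save()
)
```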
Responsibilities
• Build and optimize data pipelines in GCP using Spark on DataProc and BigQuery
• Design and implement data models/schemas to support analytical and operational use cases
• Develop automated data quality checks and ensure data integrity and SLA compliance (see the sketch after this list)
• Contribute to data ingestion from varied sources (DBMS, APIs, file systems, streaming)
• Translate complex business requirements into scalable technical solutions
• Support development of dashboards and semantic layers, integrating data from Teradata, Hive, HBase, and BigQuery
• Build REST APIs for hosted models, ensuring proper logging, authentication, and error handling
• Collaborate with analysts and data scientists to produce models and KPIs
• Lead migration efforts from Hadoop to GCP, ensuring security, governance, and performance
• Document architecture, perform testing (CIT/SIT/UAT), and support platform monitoring
• Ensure alignment with enterprise data governance, logging, and security policies
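For illustration only, a minimal sketch of the automated data quality checks mentioned above, written in plain PySpark since the posting names no specific data quality framework; the table name, column names, and thresholds are placeholders:

```python
# Sketch: a simple data-quality gate that fails the pipeline run on integrity violations.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

def run_quality_checks(df: DataFrame) -> None:
    """Raise an error if basic integrity checks do not pass."""
    total = df.count()
    if total == 0:
        raise ValueError("Data quality check failed: source table is empty")

    # No duplicate primary keys (placeholder key column)
    duplicates = total - df.dropDuplicates(["order_id"]).count()
    if duplicates > 0:
        raise ValueError(f"Data quality check failed: {duplicates} duplicate order_id rows")

    # Null rate on a required column must stay under 1% (placeholder column and threshold)
    nulls = df.filter(F.col("customer_id").isNull()).count()
    if nulls / total > 0.01:
        raise ValueError(f"Data quality check failed: {nulls}/{total} null customer_id values")

spark = SparkSession.builder.appName("dq-checks").enableHiveSupport().getOrCreate()
run_quality_checks(spark.table("warehouse.orders"))  # placeholder source table
```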
Qualifications
• GCP / BigQuery – 1+ years of hands-on development
• Scala or Python – 3+ years building scalable data applications
• SQL – 3+ years of advanced querying and data modeling
Required Skills
• Experience with LookML or semantic layer design
• GCP Certification (Associate or Professional)
• Master's Degree in a related field (Computer Science, Data Engineering, etc.)
Preferred Skills
• Languages/Tools: Scala, Python, SQL, Spark, BigQuery, REST APIs
• Platforms: GCP (DataProc, BigQuery), Hadoop (On-Prem), Teradata, Hive, HBase
• Concepts: Agile Development, Cloud Data Architecture, Big Data Management, ETL, Performance Optimization, Data Governance