

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a contract basis, requiring strong SQL and data pipeline skills, experience with GCP, and a bachelor's degree in computer science or engineering. Financial industry experience is a plus.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 5, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Data Architecture #ETL (Extract, Transform, Load) #Data Processing #BigQuery #Data Quality #GCP (Google Cloud Platform) #Cloud #Computer Science #Databases #Data Lake #Data Engineering #Data Pipeline #SQL (Structured Query Language)
Role description
Responsibilities
• Collaborates with stakeholders to understand data requirements
• Reverse-engineers the existing solution to derive additional data requirements
• Designs and implements SQL-based data pipelines to ingest, store, transform, and curate data from various internal and external sources, including on-premises databases, flat files, and cloud-based systems (see the SQL sketch after this list)
• Investigates and mitigates data quality issues
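To make the pipeline bullet above concrete, here is a minimal sketch of one transform-and-curate step in BigQuery SQL. It is illustrative only: the datasets and columns (raw_data.transactions, curated.daily_transactions, transaction_id, event_ts, and so on) are hypothetical placeholders, not details from this posting.

  -- Deduplicate yesterday's raw records on a business key (hypothetical schema),
  -- aggregate them, and upsert the result into a curated summary table.
  MERGE curated.daily_transactions AS target
  USING (
    SELECT
      DATE(event_ts) AS txn_date,
      account_id,
      SUM(amount)    AS total_amount,
      COUNT(*)       AS txn_count
    FROM (
      -- Keep only the latest copy of each transaction_id to mitigate duplicates.
      SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY transaction_id
                           ORDER BY ingested_at DESC) AS rn
      FROM raw_data.transactions
      WHERE DATE(event_ts) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
    )
    WHERE rn = 1
    GROUP BY txn_date, account_id
  ) AS source
  ON target.txn_date = source.txn_date
     AND target.account_id = source.account_id
  WHEN MATCHED THEN
    UPDATE SET total_amount = source.total_amount,
               txn_count    = source.txn_count
  WHEN NOT MATCHED THEN
    INSERT (txn_date, account_id, total_amount, txn_count)
    VALUES (source.txn_date, source.account_id,
            source.total_amount, source.txn_count);

A rerunnable upsert like this keeps a daily curation step idempotent when upstream sources deliver duplicates or late-arriving data.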
Skills & Expertise
• Must have an in-depth understanding of modern cloud-based data architecture and distributed data processing
• Must have proven capability to set up a data lake architecture within the GCP data ecosystem
• Must have strong SQL writing and optimization skills for cloud-based relational database platforms (BigQuery is a must-have; see the optimization sketch after this list)
• Must have strong proficiency using SQL to build data pipelines
• Must be able to work independently with minimal direction and supervision, and adapt quickly to complex systems
• Must have a bachelor's degree in computer science or an engineering-related field, or equivalent work experience
• Should have experience working on teams of more than four members
• Nice to have: financial industry experience
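As an equally illustrative sketch of the BigQuery optimization skills called for above: partitioning and clustering are the usual first levers for reducing scanned bytes on date-bounded queries. The table and column names below are hypothetical placeholders, not details from this posting.

  -- Partition by date and cluster by account so that date-filtered queries
  -- prune partitions and read less data (hypothetical table).
  CREATE TABLE IF NOT EXISTS curated.daily_transactions
  (
    txn_date     DATE,
    account_id   STRING,
    total_amount NUMERIC,
    txn_count    INT64
  )
  PARTITION BY txn_date
  CLUSTER BY account_id;

  -- A filter on the partitioning column scans only the matching partitions:
  SELECT account_id, SUM(total_amount) AS month_total
  FROM curated.daily_transactions
  WHERE txn_date BETWEEN DATE '2025-07-01' AND DATE '2025-07-31'
  GROUP BY account_id;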