

Santcore Technologies
Data Engineer II (GCP & SQL)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer II (GCP & SQL) in Charlotte, NC, lasting 6+ months, at a day rate of $440. It requires financial services experience and strong skills in PySpark, SQL, Python, and GCP services, including BigQuery and Dataflow.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
🗓️ - Date
May 14, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#BigQuery #Storage #Spark SQL #Cloud #Data Pipeline #Spark (Apache Spark) #GCP (Google Cloud Platform) #PySpark #Indexing #Compliance #Python #ETL (Extract, Transform, Load) #Database Design #SQL (Structured Query Language) #Data Modeling #Big Data #Scala #Security #Data Processing #Dataflow #Data Engineering
Role description
Job Title: Data Engineer II (GCP & SQL)
Location: Charlotte, NC (Onsite In-Person Interview Required)
Interview Address: 2151 Hawkins Street, 12th Floor, Charlotte, NC
Duration: 6+ Months
Interview Process
In-person interview required
Glider test required
Interview timeslots scheduled for next week
Industry Requirement
Financial services experience required
Role Overview
The Data Engineer II will be part of a dynamic and growing team within the Global Risk &
Compliance Technology (GRCT) organization. The role focuses on evaluating data requirements,
building scalable data pipelines, and maintaining data models to support business and
technology needs.
Key Responsibilities
Evaluate and document data requirements for seamless integration into existing
architectures.
Build and enhance data pipelines and database designs, ensuring performance,
scalability, and security.
Maintain and support data models aligned with business requirements.
Participate in design reviews, testing, and provide production support with guidance
from team members.
Ensure data assets are managed according to established standards, guidelines, and
policies.
Perform work reviews and contribute to a collaborative team environment.
Communicate and collaborate with business and product teams to implement changes
effectively.
Implement Big Data solutions including partitioning and indexing strategies.
Align technology initiatives with business objectives through cross-functional
collaboration.
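The responsibilities above mention partitioning and indexing strategies for Big Data solutions. As a rough illustration of what that looks like on GCP, the sketch below builds BigQuery DDL with a daily partition and a clustering key; the dataset, table, and column names are hypothetical placeholders, not part of this role's actual schema.

```python
def partitioned_table_ddl(dataset: str, table: str) -> str:
    """Build BigQuery DDL for a date-partitioned, clustered table.

    Dataset, table, and column names are illustrative only.
    """
    return f"""
CREATE TABLE IF NOT EXISTS `{dataset}.{table}` (
  txn_id STRING,
  txn_ts TIMESTAMP,
  account_id STRING,
  amount NUMERIC
)
PARTITION BY DATE(txn_ts)   -- prunes scans to the dates a query touches
CLUSTER BY account_id       -- sorts storage blocks for selective filters
""".strip()

print(partitioned_table_ddl("risk_data", "transactions"))
```

Partitioning by the timestamp column limits how much data each query scans (and is billed for), while clustering on a frequently filtered column serves a role similar to an index in a traditional database.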
Required Skills & Experience
Strong experience with PySpark, SQL, Python, and GCP.
Hands-on experience with GCP services including BigQuery, Dataflow, Cloud Storage,
Pub/Sub, Cloud Composer, and Cloud Functions.
Proficiency in Python and SQL for data processing and manipulation.
Experience with ETL/ELT processes and data warehousing concepts.
Understanding of data modeling principles and techniques.
Knowledge of cloud computing concepts and best practices.
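The skills list above calls for ETL/ELT experience with Python and SQL. A minimal, stdlib-only sketch of the extract/transform/load pattern is shown below with toy data; in practice the load step would write to a warehouse such as BigQuery rather than aggregate in memory.

```python
import csv
import io

def extract(raw_csv: str):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: cast amounts to float, keep only settled rows."""
    return [
        {"account": r["account"], "amount": float(r["amount"])}
        for r in rows
        if r["status"] == "settled"
    ]

def load(rows):
    """Load: aggregate per account (a stand-in for a warehouse write)."""
    totals = {}
    for r in rows:
        totals[r["account"]] = totals.get(r["account"], 0.0) + r["amount"]
    return totals

raw = "account,amount,status\nA1,100.50,settled\nA2,75.00,pending\nA1,25.00,settled\n"
print(load(transform(extract(raw))))  # → {'A1': 125.5}
```

The same extract/transform/load shape scales up directly to PySpark, where `extract` becomes a DataFrame read, `transform` a chain of `filter`/`withColumn` calls, and `load` a partitioned write.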






