

Strategic Staffing Solutions
Senior Database Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Database Engineer contract position for 18+ months in Charlotte, NC/Iselin, NJ, paying $60/hr on W2. Requires 5+ years in data engineering, expertise in SQL, ETL, PySpark, and experience with GCP and Teradata.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: April 17, 2026
Duration: More than 6 months
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: Charlotte, NC
Skills detailed: #Storage #Dremio #Datasets #"ETL (Extract, Transform, Load)" #BigQuery #Data Engineering #Cloud #Scala #GCP (Google Cloud Platform) #S3 (Amazon Simple Storage Service) #Ab Initio #Data Pipeline #Spark (Apache Spark) #Teradata #SQL (Structured Query Language) #PySpark #Data Quality #Migration #Hadoop
Role description
STRATEGIC STAFFING SOLUTIONS HAS AN OPENING!
This is a Contract Opportunity with our company that MUST be worked on a W2 only. No C2C eligibility for this position. Visa sponsorship is available! The details are below.
"Beware of scams. S3 never asks for money during its onboarding process."
Job Title: Senior Database Engineer
Contract Length: 18+ month contract
Some on-site work
Location: Charlotte, NC / Iselin, NJ 08830
Pay: $60 per hour on W2
We are seeking a Senior-level Database Engineer to design, build, and optimize large-scale data pipelines within a high-volume enterprise data environment. This role supports critical applications tied to fraud and claims analysis, working across legacy and modern cloud platforms.
The environment is undergoing a major transformation from Teradata to Google Cloud Platform (GCP), requiring hands-on engineering expertise in both existing and target-state architectures.
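The posting itself contains no code, but a rough sketch of the kind of work this migration implies may help candidates gauge fit: reading a table out of Teradata over JDBC with PySpark and landing it in BigQuery via the spark-bigquery-connector. Every host, credential, table, and bucket name below is a hypothetical placeholder, and the Teradata JDBC driver and connector are assumed to be on the Spark classpath.

```python
# Minimal sketch of one Teradata -> BigQuery table copy in PySpark.
# Assumes the Teradata JDBC driver and the spark-bigquery-connector are
# on the Spark classpath; every host/table/bucket name is a placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("teradata-to-bigquery-sketch")
    .getOrCreate()
)

# Pull one source table from Teradata over JDBC.
claims = (
    spark.read.format("jdbc")
    .option("url", "jdbc:teradata://td-host.example.com/DATABASE=claims_db")  # hypothetical
    .option("driver", "com.teradata.jdbc.TeraDriver")
    .option("dbtable", "claims_db.fraud_claims")  # hypothetical
    .option("user", "etl_user")
    .option("password", "***")  # use a secrets manager in practice
    .load()
)

# Land it in BigQuery; the connector stages rows through a GCS bucket,
# which is why a temporaryGcsBucket must be supplied.
(
    claims.write.format("bigquery")
    .option("table", "my_project.fraud.claims")         # hypothetical
    .option("temporaryGcsBucket", "my-staging-bucket")  # hypothetical
    .mode("overwrite")
    .save()
)
```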
Key Responsibilities
• Design, develop, and maintain scalable ETL/data pipeline solutions
• Work with large-scale datasets (hundreds of terabytes across hundreds of tables)
• Support migration efforts from Teradata to GCP (BigQuery-based ecosystem)
• Build and optimize pipelines using PySpark and ETL frameworks
• Collaborate with stakeholders across fraud and analytics teams to support data needs
• Ensure performance, reliability, and data quality across pipeline workflows (an illustrative check follows this list)
• Troubleshoot and resolve production issues in distributed data environments
• Work within scheduling and orchestration tools to manage pipeline execution
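As flagged in the data quality bullet above, here is a toy illustration of a pipeline-level quality gate. The claims schema (claim_id, status, amount) is hypothetical; the point is that raising on bad rows makes the batch exit non-zero, so a scheduler such as Autosys can alert or retry rather than propagate bad data.

```python
# Toy data-quality gate for one pipeline stage; schema is hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()

# Stand-in for the output of an upstream transform.
df = spark.createDataFrame(
    [(1, "open", 120.0), (2, "closed", None), (3, "open", -5.0)],
    ["claim_id", "status", "amount"],
)

# Two simple rules: amount must be present and non-negative.
violations = df.filter(F.col("amount").isNull() | (F.col("amount") < 0)).count()

# Fail fast so the scheduler flags the run instead of loading bad rows.
if violations > 0:
    raise ValueError(f"data quality check failed: {violations} bad rows")
```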
Required Qualifications
• 5+ years of data engineering or software engineering experience
• Strong expertise in:
  • SQL
  • ETL development
  • PySpark
• Hands-on experience with:
  • Autosys (job scheduling)
  • Ab Initio
• Experience building and maintaining large-scale data pipelines
• Ability to work in hybrid environments (on-prem + cloud)
Preferred Qualifications
• Experience with Google Cloud Platform (GCP), especially BigQuery
• Prior experience with Teradata
• Familiarity with the Hadoop ecosystem
• Exposure to tools such as Dremio and distributed storage systems
• Cloud certifications (GCP preferred)
Technical Environment
• Current: Teradata-based platform
• Target: GCP (BigQuery ecosystem)
• Tools & Technologies:
  • PySpark
  • Hadoop
  • Ab Initio
  • Autosys
  • Dremio
  • S3-compatible storage systems