

PRI Global
Cloud Data Engineer, W2 only (USC or GC)
Featured Role | Apply directly with Data Freelance Hub
This is a contract role for a Cloud Data Engineer in St. Louis, MO (Hybrid); the contract length is not specified. Pay is on W2 for USC or GC holders only. Key skills include Python, Spark fundamentals, Databricks, and relational DB experience.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: December 3, 2025
Duration: Unknown
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: St. Louis, MO
Skills detailed: #GDPR (General Data Protection Regulation) #Data Engineering #Scala #Spark SQL #Oracle #Spark (Apache Spark) #SQL Server #Delta Lake #Python #API (Application Programming Interface) #AWS (Amazon Web Services) #Azure #Batch #SQL (Structured Query Language) #Data Ingestion #DBA (Database Administrator) #Databricks #Cloud
Role description
Cloud Data Engineering (Production Support)
St. Louis, MO (Hybrid)
Contract
Local consultants will be considered
W2 only (USC or GC)
Core Responsibilities
• Production support for Databricks notebooks, Spark streaming apps, and APIs.
• Ownership of Level 1 / Bronze batch ingestion for ~60 data sources.
• Work includes:
  • Data ingestion from various relational DBs or AWS into Delta Lake (see the sketch after this list).
  • Adding new tables/columns, change management, GDPR/purge processes.
  • Product optimization: upgrades, resiliency work, performance improvements.
  • Occasional new development for shared-service tools, not business-facing projects.
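For a sense of what the Bronze batch ingestion above typically involves, here is a minimal PySpark sketch of a JDBC pull into a Delta table. Everything concrete in it (the JDBC URL, credentials, and table names) is a placeholder rather than a detail from this posting, and it assumes a Spark environment with Delta Lake and the relevant JDBC driver available, such as Databricks.

```python
# Minimal sketch of a Bronze-layer batch ingest: relational DB -> Delta Lake.
# All connection details and table names below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_ingest").getOrCreate()

# Pull one source table over JDBC (Oracle, SQL Server, and Postgres
# all follow this pattern; the matching driver must be on the classpath).
src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")  # placeholder
    .option("dbtable", "public.orders")                     # placeholder
    .option("user", "etl_user")                             # placeholder
    .option("password", "***")
    .load()
)

# Stamp ingestion metadata, then append to the Bronze Delta table.
(
    src.withColumn("_ingested_at", F.current_timestamp())
    .write.format("delta")
    .mode("append")
    .saveAsTable("bronze.orders")                           # placeholder
)
```

With ~60 sources in scope, each ingest would in practice be driven by per-source configuration rather than hard-coded options as above.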
Required Technical Skills
• Python (must have)
• Spark fundamentals (architecture, DataFrames, Spark SQL)
• Databricks experience
• Relational DB experience (Oracle, SQL Server, Postgres; not a DBA)
• Understanding of Delta Lake (purge pattern sketched after this list)
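The GDPR/purge responsibility listed earlier usually maps onto Delta Lake's DELETE-then-VACUUM pattern, which also exercises the "Understanding of Delta Lake" requirement. A minimal sketch, with hypothetical table and column names:

```python
# Sketch of a GDPR-style purge on a Delta table: DELETE the data subject's
# rows, then VACUUM so the underlying Parquet files are physically removed.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("gdpr_purge").getOrCreate()

tbl = DeltaTable.forName(spark, "bronze.customers")   # hypothetical table
tbl.delete("customer_id = '12345'")                   # hypothetical key

# VACUUM drops files older than the retention window (default 7 days);
# until then, deleted data remains recoverable via time travel.
spark.sql("VACUUM bronze.customers RETAIN 168 HOURS")
```

The distinction matters for right-to-erasure requests: DELETE only marks rows as removed in the transaction log, while VACUUM is what actually deletes the underlying data files once they age out of retention.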
Nice-to-Haves:
• Spark streaming (see the streaming sketch after this list)
• API development experience
• Azure experience preferred; AWS acceptable
• Scala not required
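Since Spark streaming is a nice-to-have rather than a requirement, a short Structured Streaming sketch may help calibrate expectations. The broker, topic, checkpoint path, and target table are all placeholders, and the spark-sql-kafka connector is assumed to be available:

```python
# Sketch of a Structured Streaming job: Kafka topic -> Bronze Delta table.
# Broker, topic, checkpoint path, and target table are all placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream_ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "orders")                     # placeholder topic
    .load()
)

# Kafka values arrive as bytes; cast to string and keep the event time.
query = (
    events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
    .writeStream.format("delta")
    .option("checkpointLocation", "/chk/orders")       # placeholder path
    .toTable("bronze.orders_stream")                   # placeholder target
)
```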





