Acuity Analytics

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a Data Engineer contract position focused on building scalable data pipelines on Google Cloud Platform. It requires 7+ years of data engineering experience, strong Python and SQL skills, and hands-on experience building production ETL pipelines.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
March 6, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Texas, United States
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Scala #Data Engineering #Snowflake #Delta Lake #Cloud #Data Processing #Storage #Programming #Data Modeling #Datasets #GCP (Google Cloud Platform) #Data Pipeline #Observability #Data Warehouse #Python #Spark (Apache Spark) #SQL (Structured Query Language)
Role description
Acuity is seeking Data Engineers (Contract) to join the Data Onboarding Engineering team. This role is focused on building and operating robust, scalable data pipelines that ingest and process 30+ TB of data daily, primarily using Python on Google Cloud Platform (GCP). The engineer will collaborate closely with business partners, researchers, and trading teams to onboard high-value datasets that directly power systematic trading and research workflows. The ideal candidate is highly hands-on, production-focused, and comfortable operating in a high-performance, data-intensive environment.

Essential Skills
• 7+ years of professional experience as a Data Engineer or in a similar role
• 7+ years of hands-on experience building ETL pipelines in production environments
• Strong Python programming skills for data processing and pipeline development
• Practical experience with cloud-based data platforms, preferably Google Cloud Platform (GCP)
• Solid understanding of data operations, including ingestion, processing, storage, quality, and lifecycle management
• Strong SQL skills and familiarity with data modeling concepts

Nice-to-Have Skills
• Experience with Snowflake as a cloud data warehouse
• Exposure to Spark or other distributed data processing frameworks
• Familiarity with Lakehouse concepts (Delta Lake or similar formats)
• Experience with event-driven or streaming data pipelines
• Background working with financial, market, or alternative datasets
• Knowledge of data observability, lineage, and governance tooling