FUSTIS LLC

Data Analytics Engineer (Multiple Locations)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Analytics Engineer, offering $60/hr W2 for a hybrid position in Seattle, Dallas, Minneapolis, or Miramar. Requires experience with Scala, Databricks, Redshift, and data pipelines. USC, GC, or H4 EAD candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
October 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Databricks #DataPipelines #DataScience #Scala #DataQuality #Redshift
Role description
Job Role: Data Analytics Engineer
Location: Seattle, WA; Dallas, TX; Minneapolis, MN; or Miramar, FL - Locals Only
Rate: $60/hr W2
Schedule: Hybrid, 1-2 days per week on-site
Eligibility: USC, GC, or H4 EAD only

Job Description:
We are seeking a Senior Analytics Engineer to build and maintain scalable data infrastructure that supports measurement, attribution, and experimentation. This role focuses on enabling causal analysis and performance evaluation through reliable data pipelines and optimized data models. The engineer will work closely with Data Science, Commercial, Digital, and Technology teams to ensure data is structured for decision-making.

Job Responsibilities
• Build and maintain data pipelines for attribution modeling, experimentation, and causal analysis.
• Develop and optimize data models and semantic layers for performance measurement.
• Collaborate with data scientists to operationalize experimental designs and measurement frameworks.
• Ensure data quality, consistency, and accessibility across platforms.
• Implement scalable solutions using Databricks and Redshift.
• Translate business requirements into technical specifications.