

aKUBE
Senior Data Engineer – Commerce Data Pipelines
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer focused on Commerce Data Pipelines, lasting 10 months with a pay rate of up to $92.50/hr. Key skills include SQL, ETL design, data modeling, and experience with Snowflake or Redshift.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
736
🗓️ - Date
December 6, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Santa Monica, CA
🧠 - Skills detailed
#Deployment #Python #PySpark #Data Pipeline #SQL (Structured Query Language) #Airflow #Snowflake #Programming #Redshift #Scala #Agile #Normalization #Data Modeling #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Monitoring #Data Quality #Data Engineering #NiFi (Apache NiFi)
Role description
City: Seattle, WA; Santa Monica, CA; or New York, NY
Onsite/Hybrid/Remote: Hybrid (4 days a week onsite; Fridays remote)
Duration: 10 months
Rate Range: Up to $92.50/hr on W2, depending on experience (no C2C, 1099, or subcontracting)
Work Authorization: GC, USC, and all valid EADs except OPT, CPT, and H1B
Must Have:
• SQL
• ETL design and development
• Data modeling (dimensional and normalization)
• ETL orchestration tools (Airflow or similar; see the sketch after this list)
• Data Quality frameworks
• Performance tuning for SQL and ETL
• Python or PySpark
• Snowflake or Redshift
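For candidates gauging fit, here is a minimal sketch of what the Airflow orchestration plus data quality combination above can look like in practice, using Airflow's TaskFlow API. The DAG name, schedule, sample rows, and validation rule are illustrative assumptions, not details of the client's pipelines:

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def commerce_orders_pipeline():
    @task
    def extract() -> list[dict]:
        # Pull raw order rows from a source system (stubbed here).
        return [{"order_id": 1, "amount": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Apply business rules; drop rows that fail basic validation.
        return [r for r in rows if r["amount"] >= 0]

    @task
    def quality_check(rows: list[dict]) -> list[dict]:
        # Automated data quality gate: fail the run on an empty load.
        if not rows:
            raise ValueError("Data quality check failed: no rows to load")
        return rows

    @task
    def load(rows: list[dict]) -> None:
        # In the real pipeline this would write to Snowflake or Redshift.
        print(f"loaded {len(rows)} rows")

    load(quality_check(transform(extract())))


commerce_orders_pipeline()
```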
Responsibilities:
• Partner with business, analytics, and infrastructure teams to define data and reporting requirements.
• Collect data from internal and external systems and design table structures for scalable data solutions.
• Build, enhance, and maintain ETL pipelines with strong performance and reliability.
• Develop automated Data Quality checks and support ongoing pipeline monitoring (a PySpark sketch follows this list).
• Implement database deployments using tools such as schemachange.
• Conduct SQL and ETL tuning and deliver ad hoc analysis as needed.
• Support Agile ceremonies and collaborate in a fast-paced environment.
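The data quality responsibility above might, for example, take the shape of a small PySpark check suite; the table, column names, and rules below are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Toy orders table standing in for a real staging table.
orders = spark.createDataFrame(
    [(1, "2025-12-01", 19.99), (2, "2025-12-01", None)],
    ["order_id", "order_date", "amount"],
)

# Check 1: the primary key must be unique.
dupes = orders.groupBy("order_id").count().filter(F.col("count") > 1).count()

# Check 2: the amount measure must be non-null.
nulls = orders.filter(F.col("amount").isNull()).count()

failures = {"duplicate_keys": dupes, "null_amounts": nulls}
if any(failures.values()):
    # Raising lets the orchestrator (e.g. Airflow) mark the run as failed.
    raise ValueError(f"Data quality checks failed: {failures}")
```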
Qualifications:
• 3+ years of data engineering experience.
• Strong grounding in data modeling, including dimensional models and normalization (see the star-schema sketch after this list).
• Deep SQL expertise with advanced tuning skills.
• Experience with relational or distributed data systems such as Snowflake or Redshift.
• Familiarity with ETL/orchestration platforms like Airflow or NiFi.
• Programming experience with Python or PySpark.
• Strong analytical reasoning, communication skills, and ability to work cross-functionally.
• Bachelor’s degree required.
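As a rough illustration of the dimensional-modeling expectation, the sketch below joins a fact table to a date dimension on a surrogate key and aggregates a measure; the schemas and sample values are assumptions for demonstration only:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

# Conformed date dimension keyed by a surrogate integer.
dim_date = spark.createDataFrame(
    [(20251201, "2025-12-01", "December")],
    ["date_key", "full_date", "month_name"],
)

# Fact table at order grain, carrying the foreign surrogate key.
fact_orders = spark.createDataFrame(
    [(1, 20251201, 19.99), (2, 20251201, 5.00)],
    ["order_id", "date_key", "amount"],
)

# Classic star-schema query: join the fact to the dimension, then
# aggregate a measure by a dimension attribute.
revenue_by_month = (
    fact_orders.join(dim_date, "date_key")
    .groupBy("month_name")
    .agg(F.sum("amount").alias("revenue"))
)
revenue_by_month.show()
```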