Jobs via Dice

Sr Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer with 13+ years of experience, focused on scalable data solutions and ETL pipelines built with Spark and PySpark. The 12-month contract is remote within the USA (Eastern Time zone hours) and requires strong SQL, AWS, and Databricks expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 5, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#SQL Queries #PySpark #Unix #GIT #Teradata SQL #Consulting #Scala #Artifactory #DevOps #Data Lake #Jenkins #Spark SQL #Shell Scripting #Spark (Apache Spark) #Cloud #Azure #SQL (Structured Query Language) #Delta Lake #IAM (Identity and Access Management) #Lambda (AWS Lambda) #Scripting #AWS (Amazon Web Services) #Linux #SNS (Simple Notification Service) #Teradata #Agile #S3 (Amazon Simple Storage Service) #Databricks #Data Modeling #ETL (Extract, Transform, Load) #Jira #EC2 #Data Warehouse #Data Engineering #Server Administration #SQS (Simple Queue Service)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, GoAhead Solutions, is seeking the following. Apply via Dice today!

Seeking a Sr Data Engineer with at least 13 years of experience. Must have prior experience working with business and IT leaders, and bring a true consulting mindset rather than acting as an order taker.

Duration: 12-month contract
Work location: Remote, USA (Eastern Time zone work hours)

Duties:
• Interact with business and technical stakeholders to gather and understand requirements.
• Design scalable data solutions and document technical designs.
• Develop production-grade, high-performance ETL pipelines using Spark and PySpark (see the sketch after this description).
• Perform data modeling to support business requirements.
• Write optimized SQL queries using Teradata SQL, Hive SQL, and Spark SQL across platforms such as Teradata and Databricks Unity Catalog.
• Implement CI/CD pipelines to deploy code artifacts to platforms like AWS and Databricks.
• Orchestrate Databricks jobs using Databricks Workflows.
• Monitor production jobs, troubleshoot issues, and implement effective solutions.
• Participate in agile ceremonies including sprint planning, grooming, daily stand-ups, demos, and retrospectives.

Experience/Skills:
• Strong hands-on experience with Spark, PySpark, shell scripting, Teradata, and Databricks.
• Proficiency writing complex SQL queries and stored procedures.
• Prior experience implementing a Databricks data lake and data warehouse.
• Familiarity with agile methodologies and DevOps tools such as Git, Jenkins, and Artifactory.
• Experience with Unix/Linux shell scripting (KSH) and basic Unix server administration.
• Knowledge of job scheduling tools such as CA7 Enterprise Scheduler.
• Hands-on experience with AWS services including S3, EC2, SNS, SQS, Lambda, ECS, Glue, IAM, and CloudWatch.
• Expertise in Databricks components such as Delta Lake, Notebooks, Pipelines, cluster management, and cloud integration (Azure/AWS).
• Experience with Jira and/or Confluence.
• Demonstrated creativity, foresight, and sound judgment in planning and delivering technical solutions.
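The duties above centre on PySpark ETL into Databricks/Delta Lake and Spark SQL against Unity Catalog. The following is a minimal sketch of the kind of pipeline described, assuming a Databricks (or otherwise Delta-enabled) Spark environment; the bucket, catalog, schema, table, and column names are placeholders invented for illustration, not details from the posting.

```python
# Minimal PySpark ETL sketch. All paths, table names, and columns are
# hypothetical placeholders; assumes Delta Lake is available (e.g. Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw JSON landed in S3 (placeholder bucket/prefix).
raw = spark.read.json("s3://example-raw-bucket/orders/2025/12/05/")

# Transform: basic typing, de-duplication, and filtering typical of a
# production pipeline.
orders = (
    raw.select(
        F.col("order_id").cast("string"),
        F.col("customer_id").cast("string"),
        F.to_timestamp("order_ts").alias("order_ts"),
        F.col("amount").cast("decimal(18,2)"),
    )
    .dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
)

# Load: write to a Delta table using Unity Catalog's three-level
# catalog.schema.table naming (catalog and schema names are assumptions).
(
    orders.write.format("delta")
    .mode("append")
    .saveAsTable("main.sales.orders_daily")
)

# The same table can then be queried with Spark SQL, e.g. as a quick check:
spark.sql(
    "SELECT DATE(order_ts) AS order_date, COUNT(*) AS order_count "
    "FROM main.sales.orders_daily "
    "GROUP BY DATE(order_ts)"
).show()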