Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Salt Lake City, offering a 12+ month contract. Key requirements include 6+ years of experience with Spark (PySpark), Python, and SQL, plus hands-on Databricks development. A Bachelor's in Computer Science or a related field is required, along with experience in Agile methodologies.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 24, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Salt Lake City, UT
🧠 - Skills detailed
#Kanban #GIT #Data Lake #Data Pipeline #Docker #Agile #Computer Science #Cloud #PySpark #Spark (Apache Spark) #Kubernetes #Migration #Databricks #Python #Airflow #Scrum #Kafka (Apache Kafka) #Code Reviews #SQL (Structured Query Language) #Azure #Data Engineering #Linux #Documentation #GCP (Google Cloud Platform)
Role description

Dice is the leading career destination for tech experts at every stage of their careers. Our client, VLink Inc, is seeking the following. Apply via Dice today!

Position : Senior Data Engineer

Location: Salt Lake City - MUST be onsite

Contract: 12+ Months

Job Description:

Your future duties and responsibilities

How You'll Make An Impact

Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project

Actively migrate use cases from our on-premises Data Lake to Databricks on Google Cloud Platform (see the PySpark sketch after this list)

Collaborate with Product Management and business partners to understand use case requirements and reporting needs

Adhere to internal development best practices/lifecycle (e.g. Testing, Code Reviews, CI/CD, Documentation)

Document and showcase feature designs/workflows

Participate in team meetings and discussions around product development

Stay up to date on the latest industry trends and design patterns
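
To give a concrete flavor of the migration work described above, here is a minimal PySpark sketch of moving a Parquet table from an on-premises lake into a Delta table on Databricks. All paths, table names, and the partition column are hypothetical, not taken from this posting.

```python
# Hypothetical sketch: migrate a Parquet table from an on-premises data lake
# to a managed Delta table on Databricks (GCP). Paths, table names, and the
# partition column are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-migration").getOrCreate()

# Read the legacy table from the on-prem lake (HDFS path is hypothetical).
source_df = spark.read.parquet("hdfs://onprem-lake/warehouse/sales/orders")

# Land it as a managed Delta table in the modernized lakehouse.
(source_df.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("sales.orders"))
```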

What You'll Bring

Required qualifications to be successful in this role

6+ years of development experience with Spark (PySpark), Python, and SQL

Extensive knowledge of building data pipelines

Hands-on experience with Databricks development

Strong experience developing on Linux

Experience with scheduling and orchestration (e.g. Databricks Workflows, Airflow, Prefect, Control-M); see the Airflow sketch after this list

Solid understanding of distributed systems, data structures, and design principles

Comfortable communicating with teams via showcases/demos

Experience with Agile development methodologies (e.g. SAFe, Kanban, Scrum)

Bachelor's degree in Computer Science, Computer Engineering, or a related field
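
As an illustration of the scheduling and orchestration requirement above, here is a minimal Airflow DAG sketch that submits a daily Spark job. The DAG id, schedule, and script path are hypothetical, and the `schedule` argument assumes Airflow 2.4+.

```python
# Hypothetical sketch: a minimal Airflow DAG that submits a daily PySpark job.
# DAG id, schedule, and script path are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit /opt/jobs/ingest_orders.py",
    )
```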

Desired qualifications (Nice to Have):

3+ years of experience with Git

3+ years of experience with CI/CD (e.g. Azure Pipelines)

Experience with streaming technologies, such as Kafka and Spark (see the streaming sketch after this list)

Experience building applications on Docker and Kubernetes

Cloud experience (e.g. Azure, Google Cloud Platform)
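
For the streaming item above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and appends it to a Delta table. The broker address, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Hypothetical sketch: consume a Kafka topic with Spark Structured Streaming
# and append it to a Delta table. Broker, topic, and paths are illustrative;
# requires the spark-sql-kafka connector package.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders-events")
    .load())

# Kafka delivers keys/values as binary; cast to strings before writing.
decoded = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

query = (decoded.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders-events")
    .start("/tmp/delta/orders_events"))
# query.awaitTermination()  # block until the stream stops
```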