New York Technology Partners

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a contract of more than 6 months, working remotely in the US on CST hours. Key skills include Databricks, DBT, AWS, SQL, and Python. Experience mentoring junior engineers is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 21, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#dbt (data build tool) #Scala #Databricks #PySpark #AWS (Amazon Web Services) #Python #SQL (Structured Query Language) #Data Engineering #Data Pipeline #Spark (Apache Spark)
Role description
Title: Sr. Data Engineer
Location: Remote US, CST hours

Must-Have Skills:
• Strong hands-on experience with Databricks.
• Experience with DBT in a production environment.
• Proficiency in AWS data services.
• Extensive SQL experience, including performance tuning.
• Python experience.
• Excellent communication skills and stakeholder engagement experience.
• Proven ability to work independently and collaboratively in a fast-paced environment.

Plus:
• Strong hands-on experience with PySpark.
• Long-term mindset with a desire to grow within the team.
• Willingness to be hands-on and contribute to development work for at least the next two quarters.
• Experience mentoring junior engineers or working in a team with varying experience levels.
• A collaborative, coachable, and team-first attitude.

Day-to-Day:
We are seeking hands-on, technically strong Data Engineers to join our growing Data Engineering & Enablement team. This team is responsible for ingesting, refining, and productizing data from various sources into our Enterprise Data Platform (EDP), enabling business stakeholders to derive meaningful insights and metrics. You’ll work closely with cross-functional teams, including Digital Experience, Advisor Experience, and Finance, helping to build scalable data assets that power key business decisions.

Key Responsibilities:
• Develop and maintain data pipelines using DBT, PySpark, Databricks, and AWS.
• Perform SQL performance tuning and optimize data workflows.
• Collaborate with stakeholders to understand data needs and translate them into technical solutions.
• Become a subject matter expert (SME) in specific data domains (e.g., digital identity, client behavior).
• Contribute to a collaborative, learning-focused team environment.
• Mentor junior engineers and serve as a technical role model.