Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer working remotely (EST/CST) on a 12-month contract at a competitive pay rate. Candidates must have 8–10 years of data engineering experience, strong ETL and coding skills in SQL, Python, and PySpark, and familiarity with Azure and Databricks. Financial services experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
696
🗓️ - Date discovered
September 25, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Engineering #Databricks #Data Analysis #Data Modeling #ETL (Extract, Transform, Load) #Web Development #Azure #Programming #Scala #Microsoft Power BI #PySpark #BI (Business Intelligence) #AI (Artificial Intelligence) #Spark (Apache Spark) #Cloud #Datasets #Tableau #SQL (Structured Query Language) #Python
Role description
MUST BE ON W2 - NO SUBVENDING/THIRD-PARTY RECRUITERS PERMITTED

Senior Data Engineer
Start Date: ASAP
Duration: 12 months (extension and conversion possible, based on performance/business needs)
Location: Remote (EST or CST time zones only)

Summary:
We are seeking an experienced Data Engineer to support the development of a complex data model focused on financial crime analytics. The role involves designing and building robust ETL pipelines, curating high-quality data, and enabling downstream analytical work. The successful candidate will work with cross-functional teams in a highly collaborative environment, with exposure to advanced AI use cases.

Key Responsibilities:
• Develop and maintain complex data models (structured and unstructured data, star schemas, SCD tables, medallion architecture).
• Design, build, and optimize ETL pipelines; perform exploratory data analysis to validate and reconcile data sources (see the illustrative sketch after this listing).
• Collaborate with stakeholders to identify requirements and deliver curated datasets.
• Apply coding expertise to build scalable data solutions (SQL, Python, PySpark).
• Work on the Azure and Databricks cloud platforms.

Must-Have Skills:
• 8–10 years’ experience in data engineering.
• Proficiency in data modeling (OLAP vs. OLTP, medallion architecture).
• Strong ETL pipeline development and validation skills.
• Hands-on coding in SQL, Python, and PySpark.
• Experience with Azure and Databricks.

Nice-to-Have Skills:
• Financial services/banking data experience (accounts, transactions, AML, risk).
• Exposure to LLMs (prompt engineering, training/fine-tuning).
• Programming or web development background.
• BI tools such as Tableau or Power BI.
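For candidates gauging fit, here is a minimal PySpark sketch of the kind of medallion-style ETL step the responsibilities describe: raw (bronze) transaction records are deduplicated, typed, and validated into a curated (silver) table on Databricks. All table, path, and column names (finance.transactions_bronze, txn_id, and so on) are hypothetical illustrations, not details from the posting.

```python
# Hypothetical bronze -> silver medallion ETL step in PySpark.
# Table/column names are illustrative only, not taken from the posting.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Read raw (bronze) transaction records as ingested.
bronze = spark.table("finance.transactions_bronze")

# Keep only the latest record per transaction id, standardize types,
# and drop rows that fail basic validation.
latest = Window.partitionBy("txn_id").orderBy(F.col("ingested_at").desc())
silver = (
    bronze
    .withColumn("_rn", F.row_number().over(latest))
    .filter(F.col("_rn") == 1)
    .drop("_rn")
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("txn_date", F.to_date("txn_ts"))
    .filter(F.col("txn_id").isNotNull() & F.col("amount").isNotNull())
)

# Write the curated (silver) table; Delta is the default format on Databricks.
silver.write.mode("overwrite").saveAsTable("finance.transactions_silver")
```

On a real engagement such as this one, the write would more likely be incremental (a Delta MERGE, or an SCD Type 2 update for the dimension tables the posting mentions) rather than a full overwrite; the sketch only illustrates the bronze-to-silver curation pattern.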