Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract, hybrid in Portland, OR, with a pay rate of $65-75/hour. Requires 5+ years in data engineering, strong SQL, Python/Scala/Java skills, and experience with Azure and Databricks.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$520-600 (based on the stated $65-75/hour rate)
🗓️ - Date discovered
May 18, 2025
🕒 - Project duration
6 months (with potential for extension or conversion to FTE)
🏝️ - Location type
Hybrid
📄 - Contract type
Contract
🔒 - Security clearance
Unknown
📍 - Location detailed
Portland, OR, United States
🧠 - Skills detailed
#Documentation #GCP (Google Cloud Platform) #Java #Azure Databricks #Fivetran #Microsoft Power BI #Azure #Data Pipeline #SQL (Structured Query Language) #Data Replication #Data Processing #Cloud #Data Modeling #Data Science #Python #AWS (Amazon Web Services) #Databricks #Scala #Data Warehouse #Replication #ETL (Extract, Transform, Load) #Azure DevOps #GitHub #BI (Business Intelligence) #Data Engineering #DevOps #Automation
Role description
TITLE: Senior Data Engineer
LOCATION: Hybrid – Portland, OR (1-2 days/week onsite)
PAY: Target pay for this role is $65-75 per hour but may vary based on experience
ENGAGEMENT TYPE: Contract (6 months, with potential for extension or conversion to FTE)

WHAT YOU'LL BE DOING
We're seeking a Senior Data Engineer to help drive a large-scale data transformation project, migrating from an on-prem data warehouse to Azure and modernizing analytics with Databricks and Power BI. This is a hands-on engineering role focused on building, optimizing, and automating data pipelines and infrastructure.
Responsibilities include:
• Designing, building, and optimizing data pipelines for ingestion, transformation, and delivery
• Migrating data workloads to Azure and Databricks
• Collaborating with analysts, data scientists, and engineers on analytics initiatives
• Driving automation, scalability, and process improvements
• Maintaining documentation, requirements, and test plans
• Supporting CI/CD workflows and ensuring best practices are followed

WHO WE'RE LOOKING FOR
• 5+ years of experience in data engineering with cloud data platforms (Azure, Databricks, AWS, GCP)
• Strong SQL and proficiency in Python, Scala, or Java
• Experience with data modeling, warehousing, and large-scale data processing
• Familiarity with Medallion Architecture principles (bronze/silver/gold layers)
• Experience with data replication tools such as Fivetran or HVR is a plus
• Strong analytical, problem-solving, and collaboration skills
• Proficiency with CI/CD tooling (Azure DevOps, GitHub) is highly valued

ABOUT OUR CLIENT
Our client is a leading financial institution committed to enhancing customer experience and operational efficiency through innovative digital transformation projects.

WHY ProFocus
Candidates come first. ProFocus is a six-time winner of Best in Staffing for Talent Satisfaction.
Quality process. We invest the time to understand your background and career goals, and we only introduce you to opportunities that are the right fit.
Access to hiring managers. We have close relationships with some of the most respected local companies, from small businesses to Fortune 500 companies. These relationships give us access to roles that may not be available anywhere else.
Excellent benefits. We offer medical, dental, vision, 401k match, education reimbursement, sick leave, and employer-paid disability and life insurance. Review our detailed benefits here.

Want to learn more? Contact one of our recruiters here.
Want to apply? Email your resume to Resume@ProFocusTechnology.com. Visit our Job Seekers page to learn more and review other opportunities.

ProFocus is an equal opportunity employer. We value diversity in our workplace and encourage all qualified applicants regardless of race, color, age, sex, religion, national origin, physical or mental disability, pregnancy, marital status, veteran or military status, genetic information, sexual orientation, or any other characteristic protected by federal, state, or local laws.