CorSource
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position for an initial 6-month contract; the pay rate is not disclosed. Key skills include data engineering, SQL, Snowflake, and Azure Data Factory. A bachelor's degree or equivalent experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 10, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Unknown
📄 - Contract
Fixed Term
🔒 - Security
Unknown
📍 - Location detailed
Portland, Oregon Metropolitan Area
🧠 - Skills detailed
#SQL (Structured Query Language) #Leadership #Data Pipeline #Snowflake #Python #Documentation #Data Warehouse #Databases #Data Engineering #Version Control #ADF (Azure Data Factory) #Azure Data Factory #Monitoring #ETL (Extract, Transform, Load) #Security #Data Quality #Data Manipulation #Data Modeling #Data Lake #Azure #Data Integration #Datasets #Scala #BI (Business Intelligence)
Role description
This is an initial 6-month contract engagement, with the potential to extend based on business needs and performance.
Who We’re Looking For
You’re a data engineer who takes pride in building clean, reliable data pipelines—and you care just as much about trust in the data as you do about speed. You enjoy working as part of a data team, partnering closely with analytics and application teams, and translating real business needs into scalable, durable solutions. You’re comfortable owning moderate-scope initiatives (or meaningful portions of larger programs), and you bring a steady, accountable approach to quality, security, and delivery.
You’ll do well in this role if you:
• Enjoy solving messy data problems and leaving systems better than you found them
• Communicate clearly with both technical teammates and business stakeholders
• Take ownership of outcomes, not just assigned tasks
• Think holistically about reliability, SLAs, data quality, and security
• Work effectively on your own while collaborating naturally with a broader team
• Care about clean, maintainable, well-documented code and disciplined execution
What You Will Do
As part of an established Data & Analytics team, you’ll help design, build, and maintain the data foundation that supports analytics, reporting, and downstream applications. Your work will focus on integrating new data sources, modeling and transforming data for usability, and ensuring pipelines are stable, secure, and high quality. You’ll collaborate closely with program leadership and technical peers to assess requirements, design solutions, and deliver consistently.
Your responsibilities will include:
• Designing and building scalable, multi-tenant data systems that load and transform large volumes of structured and semi-structured data
• Partnering with Program and Project Managers to assess features, define technical approaches, and deliver end-to-end solutions (design, code, test, deploy)
• Building and maintaining data integration workflows, including data modeling and warehouse/analytics environment support
• Defining and managing SLAs and data quality standards for datasets and processes
• Implementing, testing, deploying, and maintaining secure, stable data pipelines: bringing new data sources into the central warehouse and delivering data to applications as needed (see the load sketch after this list)
• Solving complex data and analytics problems that require adapting standard approaches and developing practical, durable solutions
• Applying industry best practices and validating internal customer needs to ensure solutions align with real business impact
• Performing one-off data manipulation and analysis as needed (cleaning, transforming, merging datasets, reshaping wide/long formats, etc.)
• Implementing and monitoring strong security controls within the data warehouse and analytics environment, accounting for an evolving threat landscape
• Leading and participating in design reviews and producing clear, concise documentation
• Thoroughly testing your own work, coordinating verification testing when others are involved, and reviewing peer solutions when required
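To give a concrete flavor of the pipeline work above, here is a minimal sketch of an idempotent load step into Snowflake using the snowflake-connector-python package. Every object name (account, warehouse, stage, and tables) is a hypothetical placeholder, not a detail of this role's actual environment.

```python
# Minimal sketch: idempotent load of staged files into a Snowflake table.
# Requires snowflake-connector-python; all object names below are
# hypothetical placeholders, not part of this role's real environment.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Land the raw files in a staging table first.
    cur.execute("""
        COPY INTO STAGING.ORDERS_RAW
        FROM @ORDERS_STAGE/daily/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    # MERGE keeps the load idempotent: re-running the job updates
    # existing rows instead of inserting duplicates.
    cur.execute("""
        MERGE INTO ANALYTICS.CORE.ORDERS AS tgt
        USING STAGING.ORDERS_RAW AS src
          ON tgt.order_id = src.order_id
        WHEN MATCHED THEN UPDATE SET
          tgt.status = src.status, tgt.updated_at = src.updated_at
        WHEN NOT MATCHED THEN
          INSERT (order_id, status, updated_at)
          VALUES (src.order_id, src.status, src.updated_at)
    """)
finally:
    conn.close()
```

The MERGE-over-staging pattern is one common way to make a re-runnable load safe, which is the kind of stability and data quality discipline the bullets above describe.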
Qualifications
We’re looking for someone with solid data engineering fundamentals who can contribute quickly, operate independently, and apply sound technical judgment in a collaborative environment.
Key qualifications include:
• Relevant experience in data engineering, analytics engineering, or BI engineering
• Bachelor’s degree in a related field or equivalent practical experience
• Hands-on experience with Snowflake, Azure Data Factory, and data modeling
• Coalesce.io experience preferred but not required
• Strong command of SQL and relational database best practices, including ETL/ELT workflows
• Solid understanding of data warehousing, data lakes, data pipelines, and BI applications
• Ability to interpret moderately complex requirements and apply appropriate design methodologies
• Strong data manipulation skills (ingest, clean, transform, merge datasets, reshape formats); see the reshape sketch after this list
• Comfort learning new tools and troubleshooting independently; able to ramp quickly with minimal support
• Experience using APIs to push and pull data across systems and platforms
• Ability to write clear, well-documented code and manage it in a version control system
• Proficiency with Python, databases, and SQL
• Proven ability to work independently and collaboratively while following established procedures
• Strong listening, communication, and problem-solving skills
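As a small illustration of the API and data manipulation bullets above, the sketch below pulls JSON from an HTTP endpoint with requests and reshapes it from wide to long with pandas. The URL and column names are hypothetical; any real source would differ.

```python
# Minimal sketch: pull records from an HTTP API and reshape wide -> long.
# Requires requests and pandas; the URL and column names are hypothetical.
import pandas as pd
import requests

resp = requests.get("https://api.example.com/v1/daily_metrics", timeout=30)
resp.raise_for_status()
df = pd.DataFrame(resp.json())  # e.g. columns: date, clicks, views, signups

# One column per metric (wide) becomes one row per (date, metric)
# pair (long), which most BI tools consume more easily.
long_df = df.melt(id_vars=["date"], var_name="metric", value_name="value")

# Light cleaning before handoff: drop nulls and enforce a numeric type.
long_df = long_df.dropna(subset=["value"])
long_df["value"] = long_df["value"].astype("float64")
print(long_df.head())
```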