

Sr. Databricks Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Databricks Data Engineer in Portland, OR; the contract length and pay rate are unspecified. It requires 10+ years of experience, retail industry expertise, and proficiency in Python, SQL, Databricks, Snowflake, and ETL tools.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: September 13, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Portland, Oregon Metropolitan Area
Skills detailed: #Cloud #AWS Glue #Azure #RDBMS (Relational Database Management System) #Schema Design #Alteryx #Data Quality #Lambda (AWS Lambda) #NoSQL #NumPy #Apache Airflow #Databricks #SQL (Structured Query Language) #Triggers #IAM (Identity and Access Management) #Data Pipeline #Libraries #Scala #PySpark #Data Engineering #Azure Data Factory #Pandas #Snowflake #SQL Queries #REST (Representational State Transfer) #REST API #Talend #MongoDB #Spark (Apache Spark) #Python #JSON (JavaScript Object Notation) #Programming #EC2 #Athena #DynamoDB #AWS (Amazon Web Services) #Hadoop #Data Processing #Redis #Airflow #Security #S3 (Amazon Simple Storage Service) #BI (Business Intelligence) #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #ML (Machine Learning) #AI (Artificial Intelligence) #Databases #MySQL #Big Data
Role description
Job Title: Sr. Databricks Data Engineer
Location: Portland, OR (Local or Willing to relocate)
Job Description:
We are seeking a highly skilled Databricks Data Engineer with a minimum of 10 years of total experience, including strong expertise in the retail industry. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines and architectures to support advanced analytics and business intelligence initiatives. This role requires proficiency in Python, SQL, cloud platforms, and ETL tools within a retail-focused data ecosystem.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Databricks and Snowflake (see the PySpark sketch after this list).
β’ Work with Python libraries such as Pandas, NumPy, PySpark, PyOdbc, PyMsSQL, Requests, Boto3, SimpleSalesforce, and JSON for efficient data processing.
β’ Optimize and enhance SQL queries, stored procedures, triggers, and schema designs for RDBMS (MSSQL/MySQL) and NoSQL (DynamoDB/MongoDB/Redis) databases.
β’ Develop and manage REST APIs to integrate various data sources and applications.
• Implement AWS cloud solutions using AWS Data Exchange, Athena, CloudFormation, Lambda, S3, the AWS Console, IAM, STS, EC2, and EMR (see the Athena sketch after this list).
• Utilize ETL tools such as Apache Airflow, AWS Glue, Azure Data Factory, Talend, and Alteryx to orchestrate and automate data workflows (see the Airflow DAG sketch after this list).
β’ Work with Hadoop and Hive for big data processing and analysis.
β’ Collaborate with cross-functional teams to understand business needs and develop efficient data solutions that drive decision-making in the retail domain.
β’ Ensure data quality, governance, and security across all data assets and pipelines.
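For illustration, here is a minimal sketch of the kind of Databricks pipeline the first responsibility describes: a PySpark job that rolls raw retail orders up to daily store revenue and writes the result to Snowflake. All paths, table and column names, and connection values are hypothetical placeholders, and the write assumes the Spark-Snowflake connector bundled with the Databricks runtime.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("retail-sales-pipeline").getOrCreate()

# Read raw order events from cloud storage (hypothetical S3 path).
orders = spark.read.json("s3://example-bucket/raw/orders/")

# Aggregate daily revenue per store -- a typical retail BI rollup.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("store_id", "order_date")
    .agg(F.sum("order_total").alias("daily_revenue"))
)

# Write to Snowflake; every connection value below is a placeholder.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "********",
    "sfDatabase": "RETAIL",
    "sfSchema": "ANALYTICS",
    "sfWarehouse": "ETL_WH",
}

(daily_revenue.write
    .format("snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_STORE_REVENUE")
    .mode("overwrite")
    .save())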
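Similarly, a small hedged sketch of the AWS responsibility: using Boto3 to start an Athena query over raw data catalogued in S3. The region, database, bucket, and query are hypothetical placeholders.

import boto3

# Create an Athena client (region is a placeholder).
athena = boto3.client("athena", region_name="us-west-2")

# Start an asynchronous query; Athena writes results to the given S3 location.
response = athena.start_query_execution(
    QueryString="SELECT store_id, SUM(order_total) AS revenue "
                "FROM orders GROUP BY store_id",
    QueryExecutionContext={"Database": "retail_raw"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)

# The returned execution ID can be polled with get_query_execution
# to wait for the query to finish.
print(response["QueryExecutionId"])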
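And a sketch of orchestration with Apache Airflow: a DAG that triggers an existing Databricks job nightly via the DatabricksRunNowOperator from the apache-airflow-providers-databricks package. Airflow 2.4+ syntax is assumed, and the DAG ID, connection ID, and job ID are placeholders.

from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="nightly_retail_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",  # run daily at 02:00
    catchup=False,
) as dag:
    run_pipeline = DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_default",
        job_id=12345,  # placeholder Databricks job ID
    )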
Required Qualifications:
β’ 10+ years of total experience in data engineering and data processing.
β’ 6+ years of hands-on experience in Python programming, specifically for data processing and analytics.
β’ 4+ years of experience working with Databricks and Snowflake.
β’ 4+ years of expertise in SQL development, performance tuning, and RDBMS/NoSQL databases.
β’ 4+ years of experience in designing and managing REST APIs.
β’ 2+ years of working experience in AWS data services.
β’ 2+ years of hands-on experience with ETL tools like Apache Airflow, AWS Glue, Azure Data Factory, Talend, or Alteryx.
• 1+ year of experience with Hadoop and Hive.
β’ Strong understanding of retail industry data needs and best practices.
β’ Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
β’ Experience with real-time data processing and streaming technologies.
β’ Familiarity with machine learning and AI-driven analytics.
β’ Certifications in Databricks, AWS, or Snowflake.
This is an exciting opportunity to work on cutting-edge data engineering solutions in a fast-paced retail environment. If you are passionate about leveraging data to drive business success and innovation, we encourage you to apply!