

Databricks Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer in Portland, OR, with a contract length of "unknown" and a pay rate of "unknown." It requires 10+ years of experience, expertise in retail data, and proficiency in Python, SQL, Databricks, and AWS services.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 10, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Portland, OR
Skills detailed: #MySQL #Airflow #Azure Data Factory #Data Engineering #Schema Design #NoSQL #Redis #Talend #REST (Representational State Transfer) #REST API #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Databases #Data Processing #Scala #Data Quality #Spark (Apache Spark) #JSON (JavaScript Object Notation) #Programming #AWS Glue #BI (Business Intelligence) #EC2 #Snowflake #NumPy #MongoDB #ML (Machine Learning) #Big Data #Cloud #Alteryx #AI (Artificial Intelligence) #Azure #DynamoDB #Python #Libraries #SQL Queries #Data Pipeline #AWS (Amazon Web Services) #PySpark #Apache Airflow #SQL (Structured Query Language) #Databricks #Pandas #Athena #Hadoop #IAM (Identity and Access Management) #S3 (Amazon Simple Storage Service) #Triggers #Security #RDBMS (Relational Database Management System) #ADF (Azure Data Factory)
Role description
• This role needs someone local to Portland, OR, with a strong background in data modelling and extensive experience in Databricks and Hackolade.
Job Description:
We are seeking a highly skilled Databricks Data Engineer with a minimum of 10 years of total experience, including strong expertise in the retail industry. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines and architectures to support advanced analytics and business intelligence initiatives. This role requires proficiency in Python, SQL, cloud platforms, and ETL tools within a retail-focused data ecosystem.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Databricks and Snowflake (see the PySpark sketch after this list).
• Work with Python libraries such as Pandas, NumPy, PySpark, PyOdbc, PyMsSQL, Requests, Boto3, SimpleSalesforce, and JSON for efficient data processing.
• Optimize and enhance SQL queries, stored procedures, triggers, and schema designs for RDBMS (MSSQL/MySQL) and NoSQL (DynamoDB/MongoDB/Redis) databases.
• Develop and manage REST APIs to integrate various data sources and applications.
• Implement AWS cloud solutions using AWS Data Exchange, Athena, CloudFormation, Lambda, S3, the AWS Console, IAM, STS, EC2, and EMR (see the boto3 sketch after this list).
• Utilize ETL tools such as Apache Airflow, AWS Glue, Azure Data Factory, Talend, and Alteryx to orchestrate and automate data workflows (see the Airflow sketch after this list).
• Work with Hadoop and Hive for big data processing and analysis.
• Collaborate with cross-functional teams to understand business needs and develop efficient data solutions that drive decision-making in the retail domain.
• Ensure data quality, governance, and security across all data assets and pipelines.
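To make the first responsibility concrete, here is a minimal PySpark sketch of the kind of Databricks pipeline the role describes. The table names ("raw.sales", "curated.daily_sales") and columns are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch of a Databricks pipeline step; table and column names
# are illustrative assumptions, not from the job posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

# Read raw point-of-sale events and aggregate to one row per store per day.
sales = spark.table("raw.sales")
daily = (
    sales
    .withColumn("sale_date", F.to_date("sale_ts"))
    .groupBy("store_id", "sale_date")
    .agg(
        F.sum("amount").alias("gross_sales"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Write as a Delta table so downstream BI tools can query it.
daily.write.format("delta").mode("overwrite").saveAsTable("curated.daily_sales")
```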
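For the AWS bullet, a minimal boto3 sketch: land a file in S3 and start an Athena query over it. The bucket, database, and paths are hypothetical examples.

```python
# Minimal boto3 sketch; bucket, key, database, and query are
# illustrative placeholders, not details from the posting.
import boto3

s3 = boto3.client("s3")
athena = boto3.client("athena")

# Land an extract in the data lake.
s3.upload_file("daily_sales.csv", "example-retail-lake", "landing/daily_sales.csv")

# Query it with Athena; results are written back to S3.
response = athena.start_query_execution(
    QueryString="SELECT store_id, SUM(amount) AS gross FROM sales GROUP BY store_id",
    QueryExecutionContext={"Database": "retail_db"},
    ResultConfiguration={"OutputLocation": "s3://example-retail-lake/athena-results/"},
)
print(response["QueryExecutionId"])
```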
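And for the orchestration bullet, a minimal Apache Airflow sketch: a daily DAG with an extract task feeding a load task. DAG and task names are illustrative, and the task bodies are stubs.

```python
# Minimal Airflow 2.x DAG sketch; names and task bodies are placeholders.
# Note: `schedule` requires Airflow >= 2.4 (older versions use `schedule_interval`).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def load():
    print("load curated tables")

with DAG(
    dag_id="retail_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load
```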
Required Qualifications:
• 10+ years of total experience in data engineering and data processing.
• 6+ years of hands-on experience in Python programming, specifically for data processing and analytics.
• 4+ years of experience working with Databricks and Snowflake.
• 4+ years of expertise in SQL development, performance tuning, and RDBMS/NoSQL databases.
• 4+ years of experience in designing and managing REST APIs.
• 2+ years of working experience with AWS data services.
• 2+ years of hands-on experience with ETL tools like Apache Airflow, AWS Glue, Azure Data Factory, Talend, or Alteryx.
• 1+ year of experience with Hadoop and Hive.
• Strong understanding of retail industry data needs and best practices.
• Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
• Experience with real-time data processing and streaming technologies.
• Familiarity with machine learning and AI-driven analytics.
• Certifications in Databricks, AWS, or Snowflake.
This is an exciting opportunity to work on cutting-edge data engineering solutions in a fast-paced retail environment. If you are passionate about leveraging data to drive business success and innovation, we encourage you to apply!