

ThoughtStorm
Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer based in the Washington DC-Baltimore Area, at a day rate of $520 USD; the contract length is unspecified. Key skills include expertise in Apache Kafka, Databricks, AWS architecture, and designing data pipelines.
Country
United States
Currency
$ USD
-
Day rate
520
-
Date
April 10, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Washington DC-Baltimore Area
-
Skills detailed
#Data Integration #Databricks #Data Analysis #Data Pipeline #"ETL (Extract, Transform, Load)" #Apache Kafka #Data Engineering #Scala #Kafka (Apache Kafka) #API (Application Programming Interface) #AWS (Amazon Web Services) #Batch #Agile
Role description
The Senior Data Engineer will play a key role in building and scaling a robust data analytics platform using Databricks, supporting a highly visible online product. This individual will work across multiple source systems to transform how data is accessed, managed, and utilized across the organization, enabling data-driven decision-making.
The role operates within an agile software development environment and requires close collaboration with cross-functional teams, including data analysts, product teams, and engineering partners.
Required Technical Skills
Please only submit candidates who meet the following must-have requirements:
• Strong hands-on expertise with Apache Kafka
• Extensive experience with Databricks
• Deep knowledge of AWS architecture
• Proven experience designing and implementing robust data pipelines
• Experience with:
  • Kafka streaming
  • Batch ETL processes
  • API-based data integrations
• Ability to optimize data infrastructure for quality, availability, and scalability






