

Data Engineer (Druid) - REMOTE
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Druid) on a 6+ month remote contract; the pay rate is listed as "TBD." It requires 5+ years of data engineering experience, proficiency in SQL, and hands-on experience with Druid. Familiarity with cloud platforms and big data technologies is beneficial.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 19, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Visualization #Programming #AWS (Amazon Web Services) #Docker #Cloud #Hadoop #Spark (Apache Spark) #Data Engineering #"ETL (Extract, Transform, Load)" #Kubernetes #Storage #Azure #Snowflake #Python #Java #SQL (Structured Query Language) #Data Modeling #Redshift #Big Data #Data Ingestion #CockroachDB #Kafka (Apache Kafka) #Data Aggregation #GCP (Google Cloud Platform) #Migration #Scala
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Two95 International, is seeking the following. Apply via Dice today!
Title: Data Engineer (Druid)
Location: Remote
Contract: 6+ months
Key Responsibilities
Design and Development:
Design and implement data ingestion pipelines to load data into Apache Druid (a minimal ingestion sketch follows this list).
Develop and optimize Druid schemas and data models for efficient querying and performance.
Design and implement data aggregations against event-level data in a distributed database (preferably Druid).
Migrate existing data stores to Druid.
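For context on the first responsibility, here is a minimal sketch of submitting a native batch ingestion spec to Druid from Python. The datasource name, input directory, and column names are hypothetical, and the router endpoint assumes a default local quickstart deployment, not the client's actual environment.

```python
import requests

# Minimal parallel batch ingestion spec. The datasource "events",
# the input directory, and all column names are hypothetical.
ingestion_spec = {
    "type": "index_parallel",
    "spec": {
        "dataSchema": {
            "dataSource": "events",
            "timestampSpec": {"column": "ts", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["user_id", "event_type", "country"]},
            "metricsSpec": [
                {"type": "count", "name": "count"},
                {"type": "doubleSum", "name": "revenue_sum", "fieldName": "revenue"},
            ],
            # Hourly rollup inside daily segments trades raw detail
            # for much cheaper aggregation queries.
            "granularitySpec": {
                "segmentGranularity": "day",
                "queryGranularity": "hour",
                "rollup": True,
            },
        },
        "ioConfig": {
            "type": "index_parallel",
            "inputSource": {"type": "local", "baseDir": "/data/events", "filter": "*.json"},
            "inputFormat": {"type": "json"},
        },
        "tuningConfig": {"type": "index_parallel"},
    },
}

# Submit the task via the router (default quickstart port assumed).
resp = requests.post("http://localhost:8888/druid/indexer/v1/task",
                     json=ingestion_spec, timeout=30)
resp.raise_for_status()
print(resp.json())  # returns the task ID on success
```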
Optimization:
Implement best practices for data ingestion, storage, and querying to improve system performance.
Optimize query performance and resource utilization (a sample query sketch follows this list).
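As an illustration of the querying side, a hedged sketch of an aggregation over the hypothetical "events" datasource above, issued through Druid's SQL API. Filtering on __time lets Druid prune segments, which is typically the first lever for query performance.

```python
import requests

# Hourly revenue rollup over the last day; "events", "event_type",
# and "revenue_sum" are the hypothetical names from the sketch above.
query = """
SELECT
  TIME_FLOOR(__time, 'PT1H') AS hour,
  event_type,
  SUM(revenue_sum) AS revenue,
  SUM("count")     AS events
FROM events
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
GROUP BY 1, 2
ORDER BY revenue DESC
LIMIT 20
"""

# Druid's SQL endpoint accepts a JSON payload with the query string.
resp = requests.post("http://localhost:8888/druid/v2/sql",
                     json={"query": query}, timeout=30)
resp.raise_for_status()
for row in resp.json():
    print(row)
```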
Key Qualifications
• 5+ years of experience in data engineering or analytics on distributed data systems, e.g., Druid, Snowflake, Redshift, CockroachDB, or Pinot (Druid experience preferred).
• Strong understanding of distributed data store architecture, data ingestion methods, scaling mechanisms, and querying capabilities.
• Proficiency in SQL and experience with data modeling and ETL processes.
• Hands-on application coding experience in Java, Python, or other languages.
• A fast learner of new technologies.
• Experience with performance tuning and optimization of data systems.
• Excellent problem-solving skills and the ability to work independently as well as in a team.
• Familiarity with other big data technologies (e.g., Hadoop, Spark, Kafka) is a plus.
Highly beneficial:
• Familiarity with cloud platforms (e.g., AWS, Google Cloud Platform, Azure) and containerization (e.g., Docker, Kubernetes).
• Knowledge of programming languages such as Scala.
• Experience generating reports against distributed data systems with SQL and visualization tools.