

Precog Technologies
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 8+ years of experience to design and maintain data pipelines in Databricks. The hybrid position in New York offers an annual salary of $114,590.43 - $138,001.38 and requires expertise in PySpark, SQL, and data quality tools.
Country: United States
Currency: $ USD
Day rate: $627
Date: March 5, 2026
Duration: More than 6 months
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: New York, NY 10114
Skills detailed: #Data Science #PySpark #Scala #Data Quality #Spark (Apache Spark) #Spark SQL #Delta Lake #Web Scraping #Data Engineering #SQL (Structured Query Language) #AWS (Amazon Web Services) #Data Pipeline #Databases #dbt (data build tool) #Databricks
Role description
We are seeking an experienced Data Engineer to design and maintain scalable data pipelines in Databricks. The role involves end-to-end ownership of data workflows, from ingestion to production, ensuring performance, quality, and reliability.
Responsibilities:
Build and optimize pipelines in Databricks (PySpark, SQL, Delta Lake).
Ingest data from databases, APIs, PDFs, Excel files, flat files, and web-scraped sources.
Implement data quality checks, validation, and testing frameworks.
Deploy, monitor, and optimize pipelines in Databricks/AWS.
Collaborate with data science/analytics teams and document data processes.
Qualifications:
8+ years of Data Engineering experience.
Hands-on expertise in Databricks (PySpark, SQL, Delta Lake).
Strong experience with structured & unstructured data.
Knowledge of data quality/testing tools (Great Expectations, Deequ, dbt tests).
Proven ability to optimize large-scale Spark/Databricks workloads.
Pay: $114,590.43 - $138,001.38 per year
Benefits:
401(k)
Dental insurance
Health insurance
Paid time off
Work Location: Hybrid remote in New York, NY 10114






