

CloudPoint America
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with over 5 years of experience, focusing on Databricks (PySpark, SQL, Delta Lake). It offers a hybrid work location in New York, Dallas, or Chicago, with a contract duration of more than 6 months.
🌎 - Country
United States
💱 - Currency
Unknown
💰 - Day rate
Unknown
🗓️ - Date
October 26, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
New York, NY 11226
🧠 - Skills detailed
#Scala #Data Pipeline #AWS (Amazon Web Services) #PySpark #Data Engineering #Data Science #Databases #dbt (data build tool) #Data Quality #SQL (Structured Query Language) #Delta Lake #Web Scraping #Spark (Apache Spark) #Databricks #Spark SQL
Role description
Title: Data Engineer
Location: New York, Dallas, Chicago (Hybrid)
We are seeking an experienced Data Engineer to design and maintain scalable data pipelines in Databricks. The role involves end-to-end ownership of data workflows, from ingestion to production, ensuring performance, quality, and reliability.
Responsibilities
Build and optimize pipelines in Databricks (PySpark, SQL, Delta Lake).
Ingest data from databases, APIs, PDFs, Excel, flat files, and web scraping.
Implement data quality checks, validation, and testing frameworks.
Deploy, monitor, and optimize pipelines in Databricks/AWS.
Collaborate with data science/analytics teams and document data processes.
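For the data quality responsibility above, a minimal sketch of a row-level validation check in plain Python (a hypothetical helper for illustration only; it is not the Great Expectations, Deequ, or dbt test API the role mentions):

```python
# Hypothetical sketch: basic completeness and range checks on ingested rows.
# Real pipelines would typically use Great Expectations, Deequ, or dbt tests.

def validate_rows(rows, required_fields, non_negative_fields=()):
    """Split rows into (valid, errors) using simple quality rules."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        # Completeness: required fields must be present and non-empty.
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        # Range: numeric fields must be non-negative.
        negative = [f for f in non_negative_fields
                    if isinstance(row.get(f), (int, float)) and row[f] < 0]
        if missing or negative:
            errors.append({"row": i, "missing": missing, "negative": negative})
        else:
            valid.append(row)
    return valid, errors

rows = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},   # fails: missing id
    {"id": 3, "amount": -2.0},     # fails: negative amount
]
valid, errors = validate_rows(rows, required_fields=["id"],
                              non_negative_fields=["amount"])
```

In a Databricks context the same rules would normally be expressed as Spark DataFrame filters or declarative expectations rather than a Python loop.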
Qualifications
5+ years of Data Engineering experience.
Hands-on expertise in Databricks (PySpark, SQL, Delta Lake).
Strong experience with structured & unstructured data.
Knowledge of data quality/testing tools (Great Expectations, Deequ, dbt tests).
Proven ability to optimize large-scale Spark/Databricks workloads.
Email: shraddha.m@datasysamerica.com
Job Types: Full-time, Contract
Work Location: In person