

Open Systems Technologies
Data Engineer - Databricks
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer - Databricks on a remote contract of unspecified duration, offering $100-130/hr on a W2 basis. Candidates must have a Bachelor's degree, 7+ years of enterprise experience, and strong cloud database skills, particularly with Databricks.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$1,040
🗓️ - Date
October 19, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
New York, NY
🧠 - Skills detailed
#AWS (Amazon Web Services) #Server Administration #SQL (Structured Query Language) #Talend #Scripting #Hadoop #Database Management #Databases #Cloud #Data Governance #Presto #BigQuery #Teradata #Redshift #Computer Science #Data Engineering #Spark (Apache Spark) #Data Pipeline #RDS (Amazon Relational Database Service) #MIS Systems (Management Information Systems) #Data Modeling #Unix #Big Data #Security #AWS RDS (Amazon Relational Database Service) #DynamoDB #Data Warehouse #Bash #Agile #Kafka (Apache Kafka) #Informatica #ETL (Extract, Transform, Load) #Databricks #Python
Role description
A financial firm is looking for a Data Engineer - Databricks to join their team. This role is remote.
Pay: $100-130/hr
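Note: the posted day rate of $1,040 corresponds to the top of this range over an assumed 8-hour day ($130 × 8 = $1,040).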
US Citizens or GC Holders Only
No C2C (corp-to-corp) - W2 only
Qualifications:
• A Bachelor's Degree in Computer Science, Management Information Systems, Computer Engineering, or a related field
• 7+ years of experience designing and building large-scale solutions in an enterprise setting
• 3+ years designing and building solutions in the cloud
• Expertise in building and managing cloud databases such as AWS RDS, DynamoDB, DocumentDB, or analogous architectures
• Expertise in building cloud database management systems in Databricks Lakehouse or analogous architectures
• Deep SQL expertise, data modeling, and experience with data governance in relational databases
• Experience with the practical application of data warehousing concepts, methodologies, and frameworks using traditional (Vertica, Teradata, etc.) and current (SparkSQL, Hadoop, Kafka) distributed technologies
• Refined skills in one or more scripting languages (e.g., Python, Bash)
• Embraces data platform thinking; designs and develops data pipelines with security, scale, uptime, and reliability in mind
• Expertise in relational and dimensional data modeling
• UNIX administration and general server administration experience
• Experience leveraging CI/CD pipelines
• Experience with Presto, Hive, SparkSQL, Cassandra, Solr, or other Big Data query and transformation technologies is a plus
• Experience using Spark, Kafka, Hadoop, or similar distributed data technologies is a plus
• Experience with Agile methodologies and the ability to work in an Agile manner is preferred
• Experience with ETL/ELT tools and technologies such as Talend or Informatica is a plus
• Expertise in cloud data warehouses such as Redshift, BigQuery, or analogous architectures is a plus