Open Systems Technologies

Data Engineer - Databricks

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer - Databricks with a contract length of unspecified duration, offering $100-130/hr. Requires a Bachelor's degree, 7+ years in enterprise solutions, and expertise in cloud databases, SQL, and data governance. Remote work only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
1040
-
🗓️ - Date
November 30, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Spark (Apache Spark) #RDS (Amazon Relational Database Service) #Unix #Data Warehouse #Data Pipeline #MIS Systems (Management Information Systems) #Data Engineering #Data Modeling #Server Administration #Informatica #Database Management #Teradata #AWS RDS (Amazon Relational Database Service) #Security #Python #Redshift #ETL (Extract, Transform, Load) #Kafka (Apache Kafka) #Cloud #Data Governance #Databricks #Talend #Agile #Big Data #AWS (Amazon Web Services) #Scripting #Databases #Presto #BigQuery #Computer Science #Hadoop #SQL (Structured Query Language) #DynamoDB #Bash
Role description
A financial firm is looking for a Data Engineer - Databricks to join their team. This role is remote.

Pay: $100-130/hr
US Citizens or GC Holders Only
No C2C - W2 Only

Qualifications:
- A Bachelor's degree in Computer Science, Management Information Systems, Computer Engineering, or a related field
- 7+ years of experience designing and building large-scale solutions in an enterprise setting
- 3+ years designing and building solutions in the cloud
- Expertise in building and managing cloud databases such as AWS RDS, DynamoDB, DocumentDB, or analogous architectures
- Expertise in building cloud database management systems in Databricks Lakehouse or analogous architectures
- Deep SQL expertise, data modeling, and experience with data governance in relational databases
- Experience with the practical application of data warehousing concepts, methodologies, and frameworks using traditional (Vertica, Teradata, etc.) and current (SparkSQL, Hadoop, Kafka) distributed technologies
- Refined skills in one or more scripting languages (e.g., Python, Bash)
- Embraces data platform thinking; designs and develops data pipelines with security, scale, uptime, and reliability in mind
- Expertise in relational and dimensional data modeling
- UNIX admin and general server administration experience
- Experience leveraging CI/CD pipelines
- Experience with Presto, Hive, SparkSQL, Cassandra, Solr, or other Big Data query and transformation technologies is a plus
- Experience using Spark, Kafka, Hadoop, or similar distributed data technologies is a plus
- Experience with Agile methodologies and the ability to work in an Agile manner is preferred
- Experience with ETL/ELT tools and technologies such as Talend or Informatica is a plus
- Expertise in cloud data warehouses such as Redshift, BigQuery, or analogous architectures is a plus