Data Engineer - Databricks

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer - Databricks with an unspecified contract length and a pay rate of $100-130/hr. Remote work is available. Requires a Bachelor's degree, 7+ years building enterprise solutions, and cloud database expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
1040
-
🗓️ - Date discovered
September 5, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Server Administration #Data Pipeline #AWS RDS (Amazon Relational Database Service) #Computer Science #DynamoDB #Teradata #Python #Agile #Database Management #Kafka (Apache Kafka) #Spark (Apache Spark) #Data Engineering #Databases #SQL (Structured Query Language) #Hadoop #Big Data #Cloud #ETL (Extract, Transform, Load) #Bash #Presto #Unix #Talend #Data Warehouse #BigQuery #Redshift #AWS (Amazon Web Services) #Data Governance #Security #Data Modeling #RDS (Amazon Relational Database Service) #Databricks #Scripting #MIS Systems (Management Information Systems) #Informatica
Role description
A financial firm is looking for a Data Engineer - Databricks to join their team. This role is remote.

Pay: $100-130/hr
US Citizens or GC Holders Only
No C2C - W2 Only

Qualifications:
• A Bachelor's degree in Computer Science, Management Information Systems, Computer Engineering, or a related field
• 7+ years of experience designing and building large-scale solutions in an enterprise setting
• 3+ years designing and building solutions in the cloud
• Expertise in building and managing cloud databases such as AWS RDS, DynamoDB, DocumentDB, or analogous architectures
• Expertise in building cloud database management systems in Databricks Lakehouse or analogous architectures
• Deep SQL expertise, data modeling, and experience with data governance in relational databases
• Experience with the practical application of data warehousing concepts, methodologies, and frameworks using traditional (Vertica, Teradata, etc.) and current (SparkSQL, Hadoop, Kafka) distributed technologies
• Refined skills in one or more scripting languages (e.g., Python, bash)
• A data platform mindset: designing and developing data pipelines with security, scale, uptime, and reliability in mind (see the sketch after this list)
• Expertise in relational and dimensional data modeling
• UNIX admin and general server administration experience
• Experience leveraging CI/CD pipelines
• Experience with Presto, Hive, SparkSQL, Cassandra, Solr, or other Big Data query and transformation technologies is a plus
• Experience using Spark, Kafka, Hadoop, or similar distributed data technologies is a plus
• Experience with Agile methodologies and the ability to work in an Agile manner is preferred
• Experience using ETL/ELT tools and technologies such as Talend or Informatica is a plus
• Expertise in cloud data warehouses such as Redshift, BigQuery, or analogous architectures is a plus
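For context, here is a minimal sketch of the kind of Databricks pipeline work the qualifications describe: ingest raw data, transform it with SparkSQL, and write the result as a Delta table. This is illustrative only, not from the posting; the storage path, view name, column names, and target table are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession already exists; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Ingest: read raw JSON events from cloud storage (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/events/")
raw.createOrReplaceTempView("raw_events")

# Transform: aggregate the events with SparkSQL (hypothetical schema).
daily = spark.sql("""
    SELECT user_id,
           DATE(event_ts) AS event_date,
           COUNT(*)       AS event_count
    FROM raw_events
    GROUP BY user_id, DATE(event_ts)
""")

# Load: write a Delta table (Delta is the default table format on Databricks).
(daily.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("analytics.daily_user_events"))
```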