Robert Half

Databricks Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer; the contract length and pay rate are unspecified. Key skills include Databricks, Python, and cloud data architecture. Experience migrating data from RDBMS platforms to the cloud is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 31, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Houston, TX
-
🧠 - Skills detailed
#AWS (Amazon Web Services) #Data Extraction #Computer Science #DevOps #Data Science #Python #Compliance #Leadership #Databricks #Data Integrity #Data Architecture #ETL (Extract, Transform, Load) #Data Pipeline #Data Quality #Data Warehouse #Kubernetes #Security #Migration #Data Engineering #Strategy #Cloud #Data Migration #Docker #Data Governance #Scripting #Data Modeling #RDBMS (Relational Database Management System) #Scala #SQL (Structured Query Language)
Role description
This is a great opportunity to join one of the largest integrated energy and commodity trading companies in the world. We are seeking a highly skilled and experienced Data Architect with strong expertise in Databricks and Python development. The ideal candidate will have a deep understanding of modern data architecture principles and hands-on experience migrating data from traditional RDBMS platforms to the cloud. You will work closely with end users to understand business processes, gather requirements, and design solutions that deliver tangible value.

Key Responsibilities

• Design, develop, and implement robust data architecture solutions on modern cloud data platforms such as Databricks.
• Ensure scalable, reliable, and secure data environments that support advanced analytics and meet business requirements.
• Lead the migration of data from traditional RDBMS platforms to Databricks environments.
• Architect and design scalable data pipelines and infrastructure to support the organization’s data needs.
• Develop and manage ETL processes in Databricks, ensuring efficient data extraction, transformation, and loading (a sketch of such a pipeline appears after the requirements below).
• Optimize ETL workflows for performance and maintain data integrity.
• Ensure seamless data transitions with minimal disruption to operations.
• Monitor and tune data systems for scalability, reliability, and cost-effectiveness.
• Collaborate with cross-functional teams including data engineers, data scientists, analysts, and product managers.
• Define and enforce best practices and standards for data engineering and data warehouse processes.
• Evaluate and implement new technologies and tools to improve the efficiency and reliability of data pipelines.
• Provide technical leadership and mentorship to junior data team members.
• Partner with DevOps and infrastructure teams to deploy and manage data systems in production.
• Ensure compliance with data governance, security, and regulatory standards.
• Align data architecture with enterprise architecture and IT strategy.

Skills and Experience

• BS or MS in Computer Science or a related field.
• Extensive experience with Databricks, including ETL processes and cloud-based data architecture.
• Strong proficiency in Python, including scripting and building data pipelines.
• Experience with cloud platforms such as AWS.
• Strong background in data warehousing, data modeling, and advanced SQL.
• Experience with traditional RDBMS platforms and transitioning data to cloud-native platforms.
• Knowledge of data governance, data quality management, and security principles.
• Familiarity with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes).
• Excellent analytical, problem-solving, and communication skills.
• Ability to work effectively across geographically distributed, cross-functional teams.
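For candidates gauging fit, the minimal PySpark sketch below shows the shape of the RDBMS-to-Databricks work this role describes: a parallel JDBC extract from a relational source, a light transformation, and a load into a Delta table. Every host, database, table, column, and credential name here is a hypothetical placeholder, not something specified by this posting.

```python
# Hypothetical sketch of migrating one RDBMS table into Databricks.
# All connection details, table names, and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided on Databricks clusters

jdbc_url = "jdbc:sqlserver://legacy-db.example.com:1433;databaseName=sales"

# Extract: JDBC read split across executors on a numeric key,
# so the pull from the source database is parallelized.
orders = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.orders")        # placeholder source table
    .option("user", "etl_reader")           # in practice, fetch via dbutils.secrets.get(...)
    .option("password", "<from-secret-scope>")
    .option("partitionColumn", "order_id")  # numeric key assumed to exist
    .option("lowerBound", "1")
    .option("upperBound", "10000000")
    .option("numPartitions", "16")
    .load()
)

# Transform: deduplicate and stamp each row for data-quality auditing.
orders_clean = (
    orders.dropDuplicates(["order_id"])
          .withColumn("ingested_at", F.current_timestamp())
)

# Load: write a Delta table, partitioned for downstream pruning
# ("order_date" is assumed to be a column in the source table).
(
    orders_clean.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("bronze.orders")
)
```

A production migration would layer on incremental loads (for example, a Delta MERGE keyed on a watermark column), row-count and checksum validation against the source, and orchestration through Databricks Jobs; this sketch only fixes the basic extract-transform-load shape the responsibilities above call for.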