KAnand Corporation

Sr Data Modeler (Databricks, PySpark, Delta Lake, Cloud Platform): 15+ Years

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Modeler with 15+ years of experience, focusing on Databricks, PySpark, and Delta Lake. The 12-month contract is based in Houston, Texas, and requires expertise in cloud platforms and data architecture.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 15, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Houston, TX
-
🧠 - Skills detailed
#Data Architecture #Data Management #DevOps #SQL (Structured Query Language) #Data Governance #dbt (data build tool) #Apache Spark #Data Engineering #Delta Lake #Spark (Apache Spark) #GCP (Google Cloud Platform) #ETL (Extract, Transform, Load) #Compliance #Metadata #Data Lineage #Big Data #Physical Data Model #AWS (Amazon Web Services) #Data Modeling #Data Pipeline #Agile #Cloud #Computer Science #PySpark #Azure #Leadership #Databricks
Role description
Position: Sr Data Modeler (Databricks, PySpark, Delta Lake, Cloud Platform), Visa Independent Candidate
Location: Houston, Texas (Day 1 Onsite / Hybrid / Remote)
Duration: Contract, 12 months
Experience: 15+ years
"Visa Independent Candidates are highly encouraged to apply for this position. We are not sponsoring at this point."

Job description for data modeler
• Lead the design and development of enterprise-level conceptual, logical, and physical data models.
• Architect and optimize data pipelines and models using Databricks, Delta Lake, and the Lakehouse architecture (see the sketch after this description).
• Collaborate with data engineers, analysts, and business stakeholders to translate complex requirements into robust data solutions.
• Define and enforce data modeling standards, governance, and best practices.
• Conduct performance tuning and optimization of large-scale data systems.
• Oversee metadata management and ensure data lineage and traceability.
• Provide technical leadership and mentorship to junior data modelers and engineers.
• Stay current with emerging data technologies and recommend strategic improvements.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• 10+ years of experience in data modeling, data architecture, and data engineering.
• Proven expertise in Databricks, Apache Spark, and SQL.
• Strong experience with cloud platforms (Azure, AWS, or GCP).
• Deep understanding of data warehousing, ETL/ELT processes, and big data ecosystems.
• Familiarity with Delta Lake, Unity Catalog, and Lakehouse principles.
• Excellent communication, leadership, and stakeholder management skills.

Preferred Skills:
• Experience with modeling tools such as ER/Studio, PowerDesigner, or dbt.
• Proficiency in PySpark.
• Knowledge of data governance frameworks and compliance standards.
• Agile and DevOps experience in data-centric environments.
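
As an illustration of the Lakehouse-style pipeline and physical modeling work described above, the following is a minimal PySpark and Delta Lake sketch, not part of the posting itself. It assumes a Databricks environment where a SparkSession named spark is predefined; the path, schema, table, and column names (/mnt/raw/orders/, order_id, order_ts, silver.orders, gold.dim_customer) are hypothetical and not taken from the role description.

from pyspark.sql import functions as F

# Assumes a Databricks runtime where `spark` (a SparkSession) is predefined.
# All paths, schemas, tables, and columns below are hypothetical examples.

# Read raw landing data (bronze layer).
raw_orders = spark.read.format("json").load("/mnt/raw/orders/")

# Conform and deduplicate into a cleaner shape (silver layer).
silver_orders = (
    raw_orders
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
)

# Persist as a managed Delta table; Delta Lake provides ACID transactions
# and schema enforcement for downstream models.
spark.sql("CREATE SCHEMA IF NOT EXISTS silver")
(
    silver_orders.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("silver.orders")
)

# A physical-data-model style definition for a dimension table, in Spark SQL.
spark.sql("CREATE SCHEMA IF NOT EXISTS gold")
spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.dim_customer (
        customer_id BIGINT,
        customer_name STRING,
        region STRING,
        effective_date DATE
    )
    USING DELTA
""")

This is only a sketch of the bronze-to-silver and dimensional-modeling pattern implied by the responsibilities; a production pipeline would add incremental loads (for example Delta MERGE), data quality checks, and governance via Unity Catalog.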