KPI Partners

Data Architect

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Architect on a 6-month contract in Plano, TX (Hybrid). Key skills include SQL, PySpark, Databricks, and Azure Cloud. Requires 12+ years in IT, 4+ years in data engineering, and strong ETL and data warehousing knowledge.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 5, 2025
🕒 - Duration
6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
Plano, TX
-
🧠 - Skills detailed
#Azure cloud #PySpark #Snowflake #Consulting #Scala #Spark SQL #Spark (Apache Spark) #Cloud #Azure #SQL (Structured Query Language) #AWS (Amazon Web Services) #Databricks #Data Pipeline #AI (Artificial Intelligence) #Data Modeling #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #Data Engineering #Data Architecture #Datasets
Role description
About KPI Partners
KPI Partners is a five-time Gartner-recognized data, analytics, and AI consulting company. We are leaders in data engineering on Azure, AWS, Google, Snowflake, and Databricks. Founded in 2006, KPI has over 400 consultants and has successfully delivered over 1,000 projects for our clients. We are looking for skilled data engineers who want to work with the best team in data engineering.

Title: Senior Data Architect
Location: Plano, TX (Hybrid)
Job Type: Contract – 6 Months
Key Skills: SQL, PySpark, Databricks, and Azure Cloud
Key Note: Looking for a Data Architect who is hands-on with SQL, PySpark, Databricks, and Azure Cloud.

About the Role:
We are seeking a highly skilled and experienced Senior Data Architect to join our dynamic team at KPI, working on challenging, multi-year data transformation projects. This is an excellent opportunity for a talented data engineer to play a key role in building innovative data solutions using Azure native services and related technologies. If you are passionate about large-scale data systems and enjoy solving complex engineering problems, this role is for you.

Key Responsibilities:
• Data Engineering: Design, develop, and implement data pipelines and solutions using PySpark, SQL, and related technologies.
• Collaboration: Work closely with cross-functional teams to understand business requirements and translate them into robust data solutions.
• Data Warehousing: Design and implement data warehousing solutions, ensuring scalability, performance, and reliability.
• Continuous Learning: Stay up to date with modern technologies and trends in data engineering and apply them to improve our data platform.
• Mentorship: Provide guidance and mentorship to junior data engineers, ensuring best practices in coding, design, and development.

Must-Have Skills & Qualifications:
• 12+ years of overall experience in the IT industry.
• 4+ years of experience in data engineering, with a strong background in building large-scale data solutions.
• 4+ years of hands-on experience developing and implementing data pipelines on the Azure stack (Azure, ADF, Databricks, Functions).
• Proven expertise in SQL for querying, manipulating, and analyzing large datasets.
• Strong knowledge of ETL processes and data warehousing fundamentals.
• Self-motivated and independent, with a "let's get this done" mindset and the ability to thrive in a fast-paced, dynamic environment.

Good-to-Have Skills:
• Databricks Certification is a plus.
• Data Modeling and Azure Architect certifications.