Sesheng

Senior Data Engineer (Purview Specialist)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Purview Specialist) on a long-term remote contract, focused on Azure Purview and data governance. Requires 8+ years in data engineering, 4+ years in Python/Scala, and expertise in big data architecture and T-SQL.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 19, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Texas, United States
🧠 - Skills detailed
#Azure cloud #Synapse #Scala #ML (Machine Learning) #Pytest #DataOps #Data Architecture #Swagger #Data Quality #Azure #Data Catalog #Data Engineering #Metadata #Kafka (Apache Kafka) #MS SQL (Microsoft SQL Server) #Data Governance #Python #Data Pipeline #Cloud #Data Management #Airflow #Programming #ADF (Azure Data Factory) #SQL Server #PySpark #AI (Artificial Intelligence) #DevOps #Spark (Apache Spark) #MySQL #API (Application Programming Interface) #dbt (data build tool) #SQL (Structured Query Language) #Big Data
Role description
Senior Data Engineer (Microsoft Purview Specialist)
Location: Remote (Texas-based candidates preferred)
Contract Duration: Long-term

We are seeking a seasoned Purview developer with a deep background in data engineering to lead the implementation and optimization of our data governance and cataloging strategies. You will be responsible for building robust data pipelines while ensuring our Azure cloud environment adheres to the highest standards of metadata management and data quality.

Must-Have Skill Sets & Experience
To be successful in this role, you must possess the following:
• Azure Purview Expertise: 2+ years of dedicated experience with Microsoft Purview, focusing on data cataloging, governance, and metadata management in Azure cloud or hybrid environments.
• Data Engineering Core: 8+ years of hands-on software/computer engineering experience, with a heavy focus on data-centric projects.
• Advanced Programming: 4+ years of proficiency in Python or Scala, including core programming, PySpark, and experience creating and supporting unit tests (pytest) and user-defined functions (UDFs).
• Big Data Architecture: 4+ years building and optimizing large-scale data pipelines and architectures, with fluency in tools such as ADF, Synapse, Airflow, dbt, or Kafka.
• Database Mastery: 4+ years of expert-level T-SQL experience (MS SQL Server, MySQL, etc.).
• Ops Mindset: Proven experience implementing DevOps and DataOps practices to automate and streamline data delivery.

Bonus Points
• Data Quality: Experience with automated validation frameworks, specifically Great Expectations.
• API Design: Familiarity with Swagger/OpenAPI for documenting and testing RESTful APIs.
• AI/ML Integration: Hands-on experience with Machine Learning, AI, or Generative AI workflows.
• Healthcare Data: Previous experience working with Epic Clarity, Caboodle, or OMOP data models.

We are an Equal Opportunity Employer.
We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.