ValueLabs

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote contract position for a Sr. Data Engineer/Analyst with 5+ years of experience. Key skills include T-SQL, PySpark, Azure Databricks, and Snowflake; familiarity with data warehousing principles and Agile frameworks is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 23, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Azure Databricks #Vault #Data Profiling #Data Engineering #Data Lineage #PySpark #SQL (Structured Query Language) #Scripting #Data Design #Azure #Data Management #ETL (Extract, Transform, Load) #Scrum #SQL Server #Agile #BI (Business Intelligence) #Documentation #Data Architecture #Microsoft Power BI #ERWin #dbt (data build tool) #Python #Shell Scripting #Data Catalog #Data Governance #Data Vault #Metadata #Spark (Apache Spark) #Quality Assurance #Databricks #Snowflake #Data Analysis #Visualization #Cloud #Data Modeling
Role description
Role: Sr. Data Engineer / Analyst
Experience: 5+ years
Work location: Remote from US

Job Summary:
We are seeking a Data Engineer / Analyst to join our dynamic data architecture and analytics team. The ideal candidate will have strong hands-on experience with PySpark, T-SQL, and modern cloud-based data platforms such as Azure Databricks and Snowflake. This role requires end-to-end involvement in data analysis, transformation logic, functional data modeling, and lineage documentation. An understanding of data warehousing principles and Medallion Architecture is essential (see the illustrative sketches after the lists below).

Responsibilities:
• Develop and maintain comprehensive data lineage documentation, SQL logic mapping, and technical design specifications to support seamless handoffs to Data Engineering and Data Governance teams.
• Collaborate with Data Architects and Analysts to ensure data models align with enterprise architecture and business objectives.
• Design, implement, and maintain efficient, reusable, and well-documented code in Python, T-SQL, and shell scripting.
• Partner with Business Intelligence SMEs and stakeholders to gather, define, and standardize functional and business-level data definitions.
• Perform detailed data profiling, validation, and quality assurance across structured and unstructured data sources (see the profiling sketch below).
• Participate in data discovery and functional analysis to inform data design decisions and support timely solution delivery.

Requirements:
• Minimum 5 years of experience in data engineering, analytics, or a related field.
• Proven expertise in T-SQL, PySpark, dbt, and Azure Databricks (including Spark Structured Streaming).
• Hands-on experience with SQL Server, Snowflake, Azure, or Microsoft Fabric.
• Familiarity with data modeling tools such as Erwin Data Modeler.
• Working knowledge of Power BI or equivalent BI/visualization tools.
• Solid understanding of data governance, metadata management, and data lineage principles.
• Experience working within Agile/Scrum delivery frameworks.
• Exposure to enterprise data warehousing methodologies such as Data Vault 2.0 or Kimball.
• Ability to translate complex business needs into clear, actionable technical solutions through structured analysis.

Preferred Qualifications (Nice to have):
• Experience supporting Medallion Architecture-based data platforms.
• Understanding of semantic layer design and data product delivery in modern BI environments.
• Exposure to data cataloging tools and metadata repositories (e.g., Azure Purview).
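For context (an illustration added for candidates, not part of the original posting): Medallion Architecture organizes a lakehouse into bronze (raw), silver (cleansed), and gold (business-ready) layers. Below is a minimal PySpark sketch of the kind of bronze-to-silver step this role involves; the table names, key column, and timestamp column are all hypothetical placeholders.

```python
# Minimal bronze-to-silver sketch in PySpark (Medallion style).
# All table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw records as ingested, with minimal structure imposed.
bronze = spark.read.table("bronze.orders_raw")

# Silver: deduplicated, type-standardized, validated records.
silver = (
    bronze
    .dropDuplicates(["order_id"])                        # hypothetical business key
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize timestamps
    .filter(F.col("order_id").isNotNull())               # drop malformed rows
)

silver.write.mode("overwrite").saveAsTable("silver.orders")
```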
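Similarly, the data profiling responsibility typically amounts to computing per-column null counts, cardinalities, and basic statistics before handing datasets off. A minimal sketch, again with a hypothetical table name:

```python
# Minimal column-profiling sketch in PySpark; "silver.orders" is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling_sketch").getOrCreate()

df = spark.read.table("silver.orders")
total = df.count()

# Report null count and distinct count for every column.
for name in df.columns:
    nulls = df.filter(F.col(name).isNull()).count()
    distinct = df.select(name).distinct().count()
    print(f"{name}: {nulls}/{total} nulls, {distinct} distinct values")
```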