

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 10-12+ years of experience, focusing on Databricks Unity Catalog. It offers a 12-month remote contract at a rate dependent on experience (DOE). Key skills include Python, SQL, Azure, and data pipeline orchestration.
Country: United States
Currency: $ USD
Day rate: Not listed (DOE)
Date discovered: August 1, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed
#Data Lake #Databases #Data Pipeline #SQL (Structured Query Language) #DevOps #ADF (Azure Data Factory) #Data Architecture #Data Governance #Spark (Apache Spark) #PySpark #Data Quality #Python #Data Engineering #Databricks #Scala #Azure #Azure Data Factory #Data Integration #ETL (Extract, Transform, Load) #Azure DevOps #Storage #Strategy #Data Storage #Programming #Delta Lake #NoSQL #Cloud #Spark SQL #Data Extraction
Role description
Position: Sr. Data Engineer with Unity Catalog
Location: Remote
Duration: 12 months
Rate: DOE
U.S. citizens and those who are authorized to work independently in the United States are encouraged to apply. We are unable to sponsor at this time.
Job Description:
Core skills
• Collaborate with cross-functional teams to support data governance using Databricks Unity Catalog
• Coordinate with offshore teams
• 10-12+ years of experience in data engineering or a related field
• Expertise in programming languages such as Python/PySpark, SQL, or Scala
• Experience working in a cloud environment (Azure preferred) with a strong understanding of cloud data architecture
• Hands-on experience with Databricks cloud data platforms (required)
• Experience migrating to Unity Catalog (required; see the sketch after this list)
• Experience with workflow orchestration, e.g., Databricks Jobs or Azure Data Factory pipelines (required)
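For illustration only, here is a minimal PySpark sketch of the kind of Unity Catalog migration and governance work named above. It uses one documented upgrade path (deep-cloning a legacy Hive metastore Delta table into a Unity Catalog table); the catalog, schema, table, and group names are hypothetical assumptions, not details from this posting:

    # Minimal Unity Catalog sketch; runs on a Databricks cluster with UC enabled.
    # All names below (analytics, sales, orders, data_readers) are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Create a governed three-level namespace: catalog -> schema -> table.
    spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
    spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")

    # Upgrade path: deep-clone a legacy Hive metastore Delta table into
    # Unity Catalog so it is governed going forward.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS analytics.sales.orders
        DEEP CLONE hive_metastore.default.orders
    """)

    # Grant read access to a group through Unity Catalog's SQL GRANT interface.
    spark.sql("GRANT SELECT ON TABLE analytics.sales.orders TO `data_readers`")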
Responsibilities
• Design, build, and deploy data extraction, transformation, and loading (ETL) processes and pipelines from sources including databases, APIs, and data files
• Develop and support data pipelines within a cloud data platform such as Databricks
• Build data models that reflect domain expertise, meet current business needs, and remain flexible as strategy evolves
• Monitor and optimize Databricks cluster performance, ensuring cost-effective scaling and resource utilization
• Communicate technical concepts to non-technical audiences, both in writing and verbally
• Demonstrate a strong understanding of coding and programming concepts used to build data pipelines (e.g., data transformation, data quality, data integration)
• Demonstrate a strong understanding of database and storage concepts (data lakes, relational databases, NoSQL, graph, data warehousing)
• Implement and maintain Delta Lake for optimized data storage, ensuring data reliability, performance, and versioning (a minimal sketch follows this list)
• Automate CI/CD pipelines for data workflows using Azure DevOps
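As a hedged illustration of the Delta Lake responsibility above, the following PySpark sketch writes a versioned Delta table, compacts it, and reads back an earlier version. All paths and table names are hypothetical assumptions:

    # Minimal Delta Lake sketch for Databricks; names and paths are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Land raw extracts in a Delta table; every write creates a new table version.
    raw = spark.read.option("header", True).csv("/mnt/raw/orders/")
    raw.write.format("delta").mode("append").saveAsTable("analytics.sales.orders_bronze")

    # Compact small files so reads stay fast and clusters scale cost-effectively.
    spark.sql("OPTIMIZE analytics.sales.orders_bronze")

    # Inspect the table's commit history (operation, version, timestamp, user).
    spark.sql("DESCRIBE HISTORY analytics.sales.orders_bronze").show(truncate=False)

    # Time travel: read the table as of an earlier version for audits or recovery.
    v0 = spark.sql("SELECT * FROM analytics.sales.orders_bronze VERSION AS OF 0")
    v0.show()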