

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with Databricks expertise, offering a long-term remote contract. Candidates must have 10+ years of experience, strong SQL skills, and .NET knowledge. No sponsorship available; familiarity with cloud platforms is preferred.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 6, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Atlanta, GA
Skills detailed: #Agile #ADF (Azure Data Factory) #Scripting #Data Security #SQL (Structured Query Language) #Security #AWS (Amazon Web Services) #Delta Lake #Azure Databricks #Cloud #Code Reviews #C# #Scala #Databricks #.NET #Microservices #Apache Spark #Computer Science #Python #Automation #Git #ETL (Extract, Transform, Load) #Data Engineering #Data Ingestion #REST API #Azure #Synapse #Data Lake #REST (Representational State Transfer) #Data Modeling #Data Science #Spark (Apache Spark) #Compliance #Azure Data Factory #Documentation #GCP (Google Cloud Platform) #Data Pipeline #Data Governance #Version Control
Role description
Data Engineer with Databricks
100% Remote
No sponsorship available
Long-term contract that will likely convert to FTE
Job Summary:
We are seeking a skilled and motivated Data Engineer with hands-on experience in Databricks and foundational knowledge of .NET development. The ideal candidate will play a key role in building and optimizing data pipelines, designing scalable ETL/ELT workflows, and supporting our analytics infrastructure. This role bridges modern data engineering tools with application development, making it ideal for candidates with hybrid skill sets.
Key Responsibilities:
• Design, build, and maintain robust data pipelines using Databricks, Apache Spark, and Delta Lake.
• Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver efficient solutions.
• Optimize and monitor data workflows for performance, scalability, and reliability.
• Integrate structured and unstructured data sources into our data lake and warehouse environments.
• Develop custom components and services using .NET (C#) for data ingestion, APIs, or automation tasks as needed.
• Ensure data security, compliance, and governance standards are met.
• Participate in code reviews, testing, and documentation of engineering solutions.
• Collaborate in agile ceremonies and contribute to sprint planning and estimation.
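The pipeline responsibilities above follow the classic extract-transform-load pattern with idempotent loads (so a re-run does not duplicate data). As a minimal illustrative sketch only, using plain Python with the stdlib sqlite3 module standing in for the warehouse (in this role the equivalent would be Spark DataFrames and a Delta Lake MERGE), it might look like:

```python
import sqlite3

# Toy ETL step: extract raw records, transform them, and load idempotently.
# sqlite3 stands in for the warehouse here; a real Databricks pipeline would
# use Spark DataFrames and a Delta Lake MERGE INTO instead.

raw_records = [
    {"id": 1, "name": " Alice ", "amount": "100.50"},
    {"id": 2, "name": "Bob", "amount": "75.00"},
    {"id": 1, "name": "Alice", "amount": "110.00"},  # late-arriving update
]

def transform(rec):
    # Normalize types and trim whitespace.
    return (rec["id"], rec["name"].strip(), float(rec["amount"]))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")

# Upsert so re-running the pipeline does not duplicate rows.
conn.executemany(
    "INSERT INTO sales (id, name, amount) VALUES (?, ?, ?) "
    "ON CONFLICT(id) DO UPDATE SET name = excluded.name, amount = excluded.amount",
    [transform(r) for r in raw_records],
)

rows = conn.execute("SELECT id, name, amount FROM sales ORDER BY id").fetchall()
print(rows)  # the id=1 row reflects the latest record
```

The upsert is the key design choice: it makes the load step safe to retry, which is the same property a Delta Lake MERGE provides at scale.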
Required Qualifications:
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
• 10+ years of experience as a Data Engineer or in a similar role.
• Proficiency in Databricks, Apache Spark, and Delta Lake.
• Strong SQL skills and familiarity with query optimization techniques.
• Experience with cloud platforms such as Azure, AWS, or GCP (preferably Azure Databricks).
• Working knowledge of .NET / C# for service development or integration tasks.
• Familiarity with version control systems (e.g., Git), CI/CD pipelines, and Agile methodologies.
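"Query optimization techniques" in the qualifications above typically means reading execution plans and adding appropriate indexes. A small self-contained illustration using stdlib sqlite3 (an assumed toy example; the role's actual platform would expose Spark SQL equivalents such as EXPLAIN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, i % 100, float(i)) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# With the index the planner can seek directly to the matching rows.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

print(plan_before[0][-1])  # e.g. "SCAN orders"
print(plan_after[0][-1])   # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
```

The same habit, comparing the plan before and after an index or a predicate rewrite, carries over directly to Spark SQL and warehouse engines.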
Preferred Qualifications:
• Experience with Azure Data Factory, Synapse, or other ETL tools.
• Exposure to REST APIs, microservices, or data-driven application architecture.
• Knowledge of data warehousing, data modeling, and data governance principles.
• Comfortable with scripting languages (e.g., Python, PowerShell) for automation.