ATC

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a hybrid Data Engineer contract based in Georgia; the contract length and pay rate are not disclosed. Candidates should have 2–3 years of experience, strong skills in SQL, Python, and Spark, and knowledge of Microsoft Fabric and Azure Databricks.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 7, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta Metropolitan Area
-
🧠 - Skills detailed
#Spark (Apache Spark) #Dataflow #ETL (Extract, Transform, Load) #DevOps #Azure #SQL (Structured Query Language) #Data Governance #SQL Server #Datasets #Migration #Microsoft Power BI #SSIS (SQL Server Integration Services) #Azure Databricks #Azure DevOps #BI (Business Intelligence) #Data Engineering #Python #Automation #Scala #Data Analysis #Data Quality #Data Pipeline #Databricks #Cloud #Logging #Computer Science #Monitoring
Role description
Job Title: Data Engineer
Location: Georgia (Hybrid Onsite)
Position Type: Contract
Experience Required: 2–3 Years

Position Overview
We are seeking a motivated Data Engineer to support the modernization and transformation of our data estate. The ideal candidate will assist in developing scalable data pipelines, migrating legacy systems, and implementing modern data solutions using Microsoft Fabric, Azure, and related tools. This role offers the opportunity to work with cutting-edge cloud technologies and contribute to building a secure, efficient, and future-ready data environment.

Key Responsibilities
• Assist in designing, developing, and maintaining ETL/ELT data pipelines using Microsoft Fabric and Azure Databricks
• Support the migration and maintenance of SSIS packages from legacy systems to modern platforms
• Implement Medallion Architecture (Bronze, Silver, Gold) to enhance data quality, governance, and lifecycle management (a short illustrative sketch appears at the end of this listing)
• Develop and manage notebooks (Fabric Notebooks, Databricks) for data transformation using Python, SQL, and Spark
• Build curated datasets to support Power BI reporting and analytics
• Collaborate with data analysts, BI developers, and business stakeholders to deliver fit-for-purpose data products
• Apply data governance best practices leveraging Microsoft Purview or Unity Catalog
• Assist with monitoring, logging, and CI/CD automation using Azure DevOps

Technical Skills & Tools
• Microsoft Fabric (Dataflows, Pipelines, Notebooks, OneLake)
• Azure Databricks
• SQL Server / SQL Managed Instances
• Power BI
• SSIS (for migration and ongoing maintenance)
• LangGraph & RAG DB (for advanced data workflows)

Qualifications
Required:
• Bachelor’s degree in Computer Science, Information Systems, or a related field
• 2–3 years of hands-on experience in data engineering or a related technical role
• Strong proficiency in SQL, Python, and Spark
• Working knowledge of LangGraph and RAG DB concepts
• Experience with Microsoft Fabric and Power BI
• Understanding of ETL/ELT pipelines and data warehousing fundamentals

Preferred:
• Exposure to CI/CD automation using Azure DevOps
• Familiarity with data governance tools (Microsoft Purview, Unity Catalog)
• Experience in migrating and maintaining SSIS packages
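For candidates unfamiliar with the Medallion pattern referenced in the responsibilities, the following is a minimal PySpark sketch of a Bronze → Silver → Gold flow of the kind typically run in Fabric or Databricks notebooks. It is illustrative only, not the employer's code: the source path, schema names, and columns (orders, order_id, amount, region) are hypothetical, and the bronze/silver/gold schemas are assumed to already exist.

```python
# Minimal, hypothetical Bronze -> Silver -> Gold flow in PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw orders as-is (hypothetical source path).
bronze = spark.read.json("/landing/orders/*.json")
bronze.write.mode("overwrite").saveAsTable("bronze.orders_raw")

# Silver: cleanse and conform types, drop obvious duplicates.
silver = (
    spark.table("bronze.orders_raw")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.mode("overwrite").saveAsTable("silver.orders")

# Gold: curated aggregate ready for Power BI reporting.
gold = (
    spark.table("silver.orders")
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_sales"),
        F.count("*").alias("order_count"),
    )
)
gold.write.mode("overwrite").saveAsTable("gold.daily_sales")
```

Writing each layer out as its own table is what lets data quality checks, governance tooling such as Purview or Unity Catalog, and downstream Power BI datasets attach to a well-defined stage of the pipeline.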