

Analytics Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an Analytics Engineer in Atlanta, GA, lasting 10+ months, with an unspecified pay rate. It requires 3+ years in data engineering; proficiency in SQL, Azure Databricks, and Microsoft Fabric; and experience with ETL/ELT pipelines and data modeling.
Country: United States
Currency: Unknown
Day rate: -
Date discovered: July 17, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Atlanta, GA
Skills detailed: #Data Quality #Dataflow #Monitoring #Python #Data Pipeline #Classification #Delta Lake #Data Engineering #Compliance #Batch #Azure #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Databricks #Data Management #Metadata #Pandas #SQL (Structured Query Language) #Data Lifecycle #Azure ADLS (Azure Data Lake Storage) #Data Governance #DevOps #Azure DevOps #Scala #Microsoft Power BI #Logging #Datasets #Security #Strategy #ACID (Atomicity, Consistency, Isolation, Durability) #BI (Business Intelligence) #Azure Databricks #Data Modeling #PySpark #ADLS (Azure Data Lake Storage)
Role description
Title: Analytics Engineer
Location: Atlanta, GA
Duration: 10+ Months
Job Description:
The Analytics Engineer will contribute to our modern data estate strategy by developing scalable data solutions using Microsoft Fabric and Azure Databricks.
This role will be instrumental in building resilient data pipelines, transforming raw data into curated datasets, and delivering analytics-ready models that support enterprise-level reporting and decision-making.
- Build and maintain ETL/ELT pipelines using Azure Databricks and Microsoft Fabric.
- Implement medallion architecture (Bronze, Silver, Gold layers) to support data lifecycle and quality.
- Develop real-time and batch ingestion processes from IES Gateway and other source systems.
- Ensure data quality, validation, and transformation logic are consistently applied.
- Use Python, Spark, and SQL in Databricks and Fabric notebooks for data transformation.
- Implement Delta Lake for data versioning, ACID transactions, and schema enforcement.
- Integrate Databricks with other Azure services such as OneLake, ADLS Gen2, and Microsoft Fabric.
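At its simplest, the medallion flow described above is a bronze-to-silver cleaning step. The following pandas sketch is a hypothetical illustration of that transformation logic (column names are made up; in Databricks this would typically run as PySpark against Delta tables):

```python
import pandas as pd

# Bronze layer: raw ingested records, possibly duplicated or invalid
bronze = pd.DataFrame({
    "case_id": [101, 101, 102, 103],
    "status": ["open", "open", "closed", None],
    "amount": ["250.00", "250.00", "99.50", "10.00"],
})

# Silver layer: deduplicated, validated, and typed
silver = (
    bronze
    .drop_duplicates()                 # remove exact duplicate ingests
    .dropna(subset=["status"])         # enforce required fields
    .assign(amount=lambda df: df["amount"].astype(float))  # type enforcement
    .reset_index(drop=True)
)
```

The Gold layer would then aggregate these validated rows into analytics-ready models for reporting.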
- Collaborate with Domain Owners to design dimensional and real-time data models.
- Create analytics-ready datasets for Power BI and other reporting tools.
- Standardize field naming conventions and schema definitions across datasets.
- Apply data classification and tagging based on DECAL's data governance framework.
- Implement row-level security, data masking, and audit logging per compliance requirements.
- Support integration with Microsoft Purview for lineage and metadata management.
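The masking and row-level security duties above amount to filtering rows by entitlement and redacting sensitive fields. A minimal pure-Python sketch, with hypothetical field names (in Databricks these controls are usually expressed as dynamic views or catalog-level policies rather than application code):

```python
def mask_ssn(ssn: str) -> str:
    """Mask all but the last four digits of an SSN-like field."""
    return "***-**-" + ssn[-4:]

def row_level_filter(rows, user_region):
    """Return only the rows the user's region is entitled to see."""
    return [r for r in rows if r["region"] == user_region]

rows = [
    {"region": "GA", "ssn": "123-45-6789"},
    {"region": "FL", "ssn": "987-65-4321"},
]

visible = row_level_filter(rows, "GA")
masked = [{**r, "ssn": mask_ssn(r["ssn"])} for r in visible]
# masked -> [{"region": "GA", "ssn": "***-**-6789"}]
```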
- Apply dimensional modeling and real-time data modeling patterns.
- Partner with BI developers to ensure data models are optimized for Power BI.
- Provide curated datasets that align with reporting requirements and business logic.
- Create BI dashboards and train users.
- Support CI/CD pipelines for data workflows using Azure DevOps.
- Assist in monitoring, logging, and performance tuning of data jobs and clusters.
Skills:
- 3+ years of experience in data engineering or analytics engineering roles.
- Advanced SQL: proficiency in advanced SQL techniques for data transformation, querying, and optimization.
- Azure Databricks (Spark, Delta Lake)
- Microsoft Fabric (Dataflows, Pipelines, OneLake)
- SQL and Python (Pandas, PySpark)
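As a small illustration of the "advanced SQL" bar, here is a window-function query of the kind used to curate an analytics-ready dataset. Table and column names are invented, and SQLite stands in for Databricks/Fabric SQL so the snippet is runnable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payments (provider TEXT, paid_on TEXT, amount REAL);
    INSERT INTO payments VALUES
        ('A', '2025-01-01', 100.0),
        ('A', '2025-02-01', 150.0),
        ('B', '2025-01-15', 200.0);
""")

# Latest payment per provider via ROW_NUMBER() -- a typical
# curation step before loading a Gold-layer table.
rows = conn.execute("""
    SELECT provider, amount
    FROM (
        SELECT provider, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY provider ORDER BY paid_on DESC
               ) AS rn
        FROM payments
    ) AS ranked
    WHERE rn = 1
    ORDER BY provider
""").fetchall()
# rows -> [('A', 150.0), ('B', 200.0)]
```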