InterEx Group

Azure Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer in Atlanta, GA; the contract length and pay rate are unspecified. It requires a Bachelor’s degree, 5+ years with Azure data services, SQL proficiency, and ETL pipeline experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#Cloud #Scripting #Security #Azure cloud #Data Pipeline #Data Processing #Data Accuracy #Azure SQL Database #Azure SQL #ETL (Extract, Transform, Load) #Azure #Big Data #Data Modeling #Microsoft Azure #ADF (Azure Data Factory) #Python #Spark (Apache Spark) #Computer Science #Azure Databricks #Hadoop #SQL (Structured Query Language) #Compliance #Data Governance #Scala #Databricks #Azure Data Factory #Data Engineering #Data Quality #Data Lake
Role description
Microsoft Azure Data Engineer - Atlanta, GA

Key Responsibilities:
• Design and implement highly scalable, high-volume data pipelines and warehouses using Azure Data Factory, Azure Databricks, and other Azure services.
• Develop and maintain data models, structures, and procedures to ensure data accuracy and accessibility.
• Work closely with stakeholders to understand their data processing needs and build systems that provide meaningful insights.
• Optimize data flows and architecture to improve system performance and data quality.
• Ensure compliance with data governance and security policies.
• Provide technical guidance and training to team members and stakeholders on Azure data services and best practices.

The ideal candidate will have:
• Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
• 5+ years of experience as a Data Engineer with a focus on Microsoft Azure cloud services.
• Strong understanding of Azure data services (Azure SQL Database, Azure Data Lake, HDInsight, Stream Analytics).
• Proficiency in SQL and experience with scripting languages such as Python or Scala.
• Experience with data modeling, data warehousing, and building ETL pipelines.
• Familiarity with big data tools and frameworks (e.g., Hadoop, Spark) is a plus.
• Excellent problem-solving skills and the ability to work independently or as part of a team.
• Strong communication skills and the ability to work closely with both technical and non-technical staff.

To apply, please send your resume via this job posting.
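For candidates gauging what "building ETL pipelines" with data-quality checks involves, here is a minimal illustrative sketch in plain Python. It is not part of the posting: SQLite stands in for a warehouse target such as Azure SQL Database, and all table and column names (`orders`, `order_id`, `amount`, `region`) are hypothetical.

```python
import sqlite3

# Minimal extract-transform-load sketch. SQLite stands in for a cloud
# warehouse; the schema and field names below are invented for illustration.

# Extract: raw records as they might arrive from a source system.
raw_rows = [
    {"order_id": 1, "amount": "19.50", "region": " East "},
    {"order_id": 2, "amount": "bad",   "region": "West"},   # fails validation
    {"order_id": 3, "amount": "5.25",  "region": "WEST"},
]

def transform(rows):
    """Validate and normalize rows; drop records failing data-quality checks."""
    clean = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # reject rows with non-numeric amounts
        clean.append((r["order_id"], amount, r["region"].strip().lower()))
    return clean

def load(rows, conn):
    """Load cleaned rows into the target table (INSERT OR REPLACE keeps
    repeated runs idempotent on the primary key)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(raw_rows), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
```

In a production Azure setting the same extract/transform/load shape would typically be expressed as Azure Data Factory activities or Databricks (Spark) jobs rather than single-process Python.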