STAFFXPERT LLC

Data Engineer Level 3

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer Level 3 in Washington, DC, requiring expertise in Azure Data Factory, Databricks, dbt, and Python. The contract length is unspecified, with a competitive pay rate. Candidates should have experience in cloud-based data ingestion and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
March 14, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Washington DC-Baltimore Area
🧠 - Skills detailed
#Metadata #Data Integration #Data Governance #Data Pipeline #Databricks #Cloud #Scala #ETL (Extract, Transform, Load) #Data Modeling #dbt (data build tool) #Data Architecture #Azure Data Factory #Programming #Azure #Data Management #AI (Artificial Intelligence) #ADF (Azure Data Factory) #Data Lake #Data Engineering #Python #Data Ingestion
Role description
Data Engineer Level 3
Onsite – Washington, DC (preferred candidates from Maryland, DC, or Virginia)

Job Summary:
STAFFXPERT LLC is seeking a Data Engineer Level 3 on behalf of our client in Washington, DC. The ideal candidate will design, develop, and optimize data pipelines, manage metadata, and support data ingestion from cloud and data lake sources. This role requires strong expertise in Azure Data Factory, Databricks, dbt, and Python, with a focus on building scalable, efficient data workflows.

Key Responsibilities:
• Develop, maintain, and optimize data pipelines for document ingestion and metadata capture.
• Orchestrate workflows using Azure Data Factory (ADF).
• Use Databricks Unity Catalog for metadata management in data lakes.
• Implement transformations and models using dbt (see the illustrative sketch below).
• Write efficient, maintainable, and scalable Python code.
• Collaborate with cross-functional teams to ensure high-quality data integration and governance.
• Support best practices in data pipeline architecture, performance tuning, and workflow optimization.

Required Qualifications:
• Proven experience with Azure Data Factory (ADF) for workflow orchestration.
• Strong experience with Databricks and managing metadata/document workflows in data lakes.
• Hands-on experience with dbt for data modeling and transformation.
• Advanced Python programming skills.
• Experience with cloud-based data ingestion, preferably including Google Vertex AI.
• Solid understanding of data architecture, pipeline design, and data governance principles.

Preferred Qualifications:
• Experience with enterprise-scale data integration projects.
• Familiarity with the M365 ecosystem or similar cloud platforms.
• Strong analytical and problem-solving skills.
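To give candidates a concrete sense of the dbt-on-Databricks work described above, here is a minimal sketch of a dbt Python model that adds capture metadata to ingested documents. The raw_documents source, its file_name column, and the stg_documents model name are illustrative assumptions, not the client's actual schema; dbt Python models assume dbt-core 1.3+ with the dbt-databricks adapter.

```python
# models/staging/stg_documents.py
# Minimal dbt Python model (illustrative; table and column names are hypothetical).
# On Databricks, dbt passes this function a Spark session, and the model
# returns a Spark DataFrame that dbt materializes.
import pyspark.sql.functions as F


def model(dbt, session):
    # Materialize as a table so downstream models can query the captured metadata.
    dbt.config(materialized="table")

    # Hypothetical upstream model holding raw ingested document records.
    raw_docs = dbt.ref("raw_documents")

    return (
        raw_docs
        .filter(F.col("file_name").isNotNull())
        # Capture simple ingestion metadata alongside each document record.
        .withColumn("ingested_at", F.current_timestamp())
        .withColumn(
            "file_extension",
            F.lower(F.regexp_extract(F.col("file_name"), r"\.([^.]+)$", 1)),
        )
    )
```

A model like this would be run with `dbt run --select stg_documents`; in a setup like the one described, ADF would typically trigger the dbt job as one activity in the broader ingestion workflow.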