Argyll Infotech Enterprise Pvt Ltd

Databricks Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a contract Databricks Engineer role based in Maryland, focused on designing and optimizing data pipelines with Databricks and Apache Spark. Key skills include data ingestion, compliance, and data quality management.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
480
🗓️ - Date
December 2, 2025
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Baltimore, MD
🧠 - Skills detailed
#Agile #ML (Machine Learning) #Scala #Data Layers #JDBC (Java Database Connectivity) #REST (Representational State Transfer) #Databricks #PeopleSoft #Data Integrity #BI (Business Intelligence) #AI (Artificial Intelligence) #Data Management #Anomaly Detection #Spark (Apache Spark) #Data Quality #Observability #Predictive Modeling #Monitoring #ETL (Extract, Transform, Load) #Metadata #Data Ingestion #Security #Compliance #Apache Spark #Delta Lake #Data Pipeline #Data Security #Automation #GDPR (General Data Protection Regulation) #Grafana
Role description
Job Role: Databricks Engineer
Location: Maryland
Client: University of Maryland Global Campus

We are seeking a Databricks Engineer to design, build, and operate a Data & AI platform with a strong foundation in the Medallion Architecture (raw/bronze, curated/silver, and mart/gold layers). This platform will orchestrate complex data workflows and scalable ELT pipelines to integrate data from enterprise systems such as PeopleSoft, D2L, and Salesforce, delivering high-quality, governed data for machine learning, AI/BI, and analytics at scale. You will play a critical role in engineering the infrastructure and workflows that enable seamless data flow across the enterprise, ensure operational excellence, and provide the backbone for strategic decision-making, predictive modeling, and innovation.

Responsibilities

Data & AI Platform Engineering (Databricks-Centric):
• Design, implement, and optimize end-to-end data pipelines on Databricks, following Medallion Architecture principles.
• Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers (see the bronze-to-silver sketch after this list).
• Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation.
• Apply schema evolution and data versioning to support agile data development.

Platform Integration & Data Ingestion:
• Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks.
• Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data.
• Design standardized data ingestion processes with automated error handling, retries, and alerting.

Data Quality, Monitoring, and Governance:
• Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers (see the quality-gate sketch after this list).
• Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures.
• Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement.

Security, Privacy, and Compliance:
• Enforce data security best practices, including row-level security, encryption at rest and in transit, and fine-grained access control via Unity Catalog.
• Design and implement data masking, tokenization, and anonymization for compliance with privacy regulations (e.g., GDPR, FERPA).
• Work with security teams to audit and certify compliance controls.
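
As a rough illustration of the bronze-to-silver step described above, a minimal PySpark/Delta Lake sketch might look like the following. The table names, columns, and cleansing rules are assumptions for illustration only; they are not specified in the posting.

```python
# Hypothetical sketch: promoting raw (bronze) records to a curated (silver) Delta table
# on Databricks. Table and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, a `spark` session already exists

# Read the raw bronze table ingested from a source system (e.g., a PeopleSoft extract).
bronze = spark.read.table("bronze.peoplesoft_enrollments")

# Basic cleansing and conformance: drop duplicates, standardize types,
# and filter out records that fail minimal validity checks.
silver = (
    bronze.dropDuplicates(["enrollment_id"])
          .withColumn("enrolled_at", F.to_timestamp("enrolled_at"))
          .filter(F.col("student_id").isNotNull())
)

# Append to the curated silver Delta table; mergeSchema allows additive
# source schema changes (schema evolution) without breaking the pipeline.
(silver.write
       .format("delta")
       .mode("append")
       .option("mergeSchema", "true")
       .saveAsTable("silver.enrollments"))
```

In practice a job like this would typically run as a task in a Databricks Workflow, with the gold/mart layer built downstream from the silver table.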
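
The data quality checks mentioned above could take many forms; one simple pattern is a validation gate that fails the workflow task before bad data reaches the gold layer. The thresholds, table, and columns below are assumptions, not requirements from the posting.

```python
# Hypothetical sketch of a lightweight data quality gate between silver and gold layers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("silver.enrollments")

total = df.count()
null_students = df.filter(F.col("student_id").isNull()).count()
duplicates = total - df.dropDuplicates(["enrollment_id"]).count()

# Raising an exception fails the Databricks Workflow task, so downstream
# gold/mart jobs and alerting can react instead of consuming bad data.
if total == 0 or null_students / total > 0.01 or duplicates > 0:
    raise ValueError(
        f"Data quality gate failed: rows={total}, "
        f"null_student_ids={null_students}, duplicates={duplicates}"
    )
```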