Senior Data Engineer (Security & Controls) - W2 Only

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Data Engineer (Security & Controls) contract position, remote (EST time zone), with an unlisted pay rate. It requires 10+ years of data engineering experience, 5+ years with Databricks, and expertise in Unity Catalog and security frameworks.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 5, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
Remote
πŸ“„ - Contract type
W2 Contractor
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
United States
🧠 - Skills detailed
#Data Pipeline #GitLab #Python #Metadata #Azure #Terraform #Monitoring #DevOps #GitHub #Compliance #Spark (Apache Spark) #Data Engineering #Cloud #Data Design #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Scala #Data Ingestion #GIT #Documentation #Classification #Delta Lake #Automation #Logging #Data Management #Data Governance #Deployment #Security #Azure DevOps #Databricks #Data Lineage
Role description
Job Title: Senior Data Engineer (Security & Controls)
Position Type: Contract
Location: Remote (EST time zone)
Work Authorization: W2 Independent or H1B transfer only (No C2C, No Visa Sponsorship)
Note: Candidates must be authorized to work in the U.S. without sponsorship. C2C applicants will not be considered.

You will join the Platform Innovation Team at a Fortune 100 company, focused on enabling internal engineering teams with robust, secure, and scalable data platforms.

Your key responsibilities will include:
• Designing and developing reusable frameworks, templates, and standardized notebooks in Databricks for enterprise-wide adoption
• Implementing Unity Catalog best practices for data governance, access control, auditing, and metadata management (see the governance sketch after this description)
• Enforcing security controls and SOX compliance across data platforms using fine-grained permissions and monitoring
• Collaborating with DevOps and platform teams to integrate Databricks workflows into CI/CD pipelines (see the deployment sketch after this description)
• Building and maintaining Lakeflow Connect integrations for automated data ingestion
• Creating documentation, playbooks, and training materials to onboard engineers
• Driving consistency in data engineering practices across multiple business units

Must-have qualifications:
• 10+ years of professional experience as a Data Engineer – proven track record designing, building, and maintaining scalable data platforms
• 5+ years of hands-on experience with Databricks – deep expertise in the Databricks workspace, clusters, job automation, Delta Lake, and Spark optimization
• 2+ years of direct experience with Unity Catalog – implemented data governance policies, managed metastores, and configured data lineage and audit logging
• 2+ years working with Python notebooks in Databricks – developed reusable, production-grade notebooks for ETL/ELT, data validation, and automation
• Strong understanding of security and compliance frameworks – specifically role-based access control (RBAC), SOX compliance, data classification, and secure data sharing practices
• Hands-on experience with Lakeflow Connect – configured and managed data ingestion pipelines from cloud sources into Databricks
• DevOps integration experience – worked with Git-based repositories (e.g., GitHub, GitLab, Azure DevOps) using branching strategies (feature branches, PRs, CI/CD) to manage Databricks code deployments

Preferred Skills:
• Experience in Fortune 100 or large enterprise environments
• Familiarity with data mesh architecture or domain-driven data design
• Exposure to infrastructure-as-code (IaC) tools (Terraform, ARM, etc.) for Databricks deployment
• Knowledge of monitoring and alerting frameworks for data pipelines
• Experience mentoring or enabling engineering teams through enablement frameworks
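
To make the Unity Catalog governance and RBAC responsibilities above more concrete, here is a minimal, illustrative Databricks notebook sketch of fine-grained, group-based grants and an audit-log query. The catalog, schema, table, and group names (finance_catalog, restricted, payments, sox_auditors) are hypothetical, and the audit query assumes the system.access.audit system table is enabled in the workspace; this is a sketch of the general pattern, not a prescribed implementation for this role.

```python
# Illustrative Databricks notebook cell; names below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # `spark` is already defined in Databricks

# Fine-grained, role-based access: grant read access on a governed table
# to an account-level group rather than to individual users.
spark.sql("GRANT USE CATALOG ON CATALOG finance_catalog TO `sox_auditors`")
spark.sql("GRANT USE SCHEMA ON SCHEMA finance_catalog.restricted TO `sox_auditors`")
spark.sql("GRANT SELECT ON TABLE finance_catalog.restricted.payments TO `sox_auditors`")

# Review who currently holds privileges on the table (supports periodic access reviews).
spark.sql("SHOW GRANTS ON TABLE finance_catalog.restricted.payments").show(truncate=False)

# Query recent access events from the Unity Catalog audit system table
# (assumes system.access.audit is available in this workspace).
audit = spark.sql("""
    SELECT event_time, user_identity.email AS user_email, action_name
    FROM system.access.audit
    WHERE event_date >= current_date() - INTERVAL 7 DAYS
    ORDER BY event_time DESC
    LIMIT 50
""")
audit.show(truncate=False)
```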
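
For the CI/CD integration responsibility, a minimal deployment sketch: a Python script that a pipeline step (GitHub Actions, GitLab CI, or Azure DevOps) could run to push a notebook from the repository into a workspace path using the Databricks SDK for Python. The repository path, workspace path, and authentication via DATABRICKS_HOST/DATABRICKS_TOKEN environment variables are assumptions; teams commonly achieve the same step with Databricks Asset Bundles or Terraform instead.

```python
# Illustrative CI/CD deployment step (assumes the databricks-sdk package is installed
# and DATABRICKS_HOST / DATABRICKS_TOKEN are set in the pipeline environment).
# File paths and workspace locations below are hypothetical.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.workspace import ImportFormat, Language


def deploy_notebook(local_path: str, workspace_path: str) -> None:
    """Upload one source-format Python notebook from the repo into the workspace."""
    w = WorkspaceClient()  # reads host and token from environment variables
    with open(local_path, "rb") as f:
        w.workspace.upload(
            workspace_path,
            f,
            format=ImportFormat.SOURCE,
            language=Language.PYTHON,
            overwrite=True,  # idempotent: re-running the pipeline replaces the notebook
        )


if __name__ == "__main__":
    deploy_notebook("notebooks/ingest_orders.py", "/Shared/frameworks/ingest_orders")
```

Run from a protected-branch pipeline, a script like this keeps notebook deployments tied to the Git branching and PR workflow the posting describes, with per-environment workspace paths supplied as pipeline parameters.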