

Senior Data Engineer (Security & Controls) - W2 Only
Featured Role | Apply direct with Data Freelance Hub
This role is a Senior Data Engineer (Security & Controls) contract position, remote (EST time zone), with an unlisted pay rate. It requires 10+ years of data engineering experience, 5+ years with Databricks, and expertise in Unity Catalog and security frameworks.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 5, 2025
Project duration: Unknown
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Data Pipeline #GitLab #Python #Metadata #Azure #Terraform #Monitoring #DevOps #GitHub #Compliance #Spark (Apache Spark) #Data Engineering #Cloud #Data Design #"ETL (Extract, Transform, Load)" #Infrastructure as Code (IaC) #Scala #Data Ingestion #GIT #Documentation #Classification #Delta Lake #Automation #Logging #Data Management #Data Governance #Deployment #Security #Azure DevOps #Databricks #Data Lineage
Role description
Job Title: Senior Data Engineer (Security & Controls)
Position Type: Contract
Location: Remote (EST time zone)
Work Authorization: W2 Independent or H1B transfer only (No C2C, No Visa Sponsorship)
Note: Candidates must be authorized to work in the U.S. without sponsorship. C2C applicants will not be considered.
You will join the Platform Innovation Team at a Fortune 100 company, a team focused on enabling internal engineering teams with robust, secure, and scalable data platforms.
Your key responsibilities will include:
• Designing and developing reusable frameworks, templates, and standardized notebooks in Databricks for enterprise-wide adoption
• Implementing Unity Catalog best practices for data governance, access control, auditing, and metadata management
• Enforcing security controls and SOX compliance across data platforms using fine-grained permissions and monitoring (see the sketch after this list)
• Collaborating with DevOps and platform teams to integrate Databricks workflows into CI/CD pipelines
• Building and maintaining Lakeflow Connect integrations for automated data ingestion
• Creating documentation, playbooks, and training materials to onboard engineers
• Driving consistency in data engineering practices across multiple business units
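To make the Unity Catalog and fine-grained-permissions responsibilities above concrete, here is a minimal sketch (not from the posting) of how grants might be scripted from a Databricks Python notebook. The catalog, schema, table, and group names are placeholders, and the script assumes it runs in a Databricks notebook where `spark` and `display` are predefined.

```python
# Hypothetical illustration: catalog, schema, table, and group names are placeholders.
# Assumes a Databricks notebook, where `spark` and `display` are already available.

GOVERNED_OBJECTS = {
    "finance_prod.reporting.gl_transactions": {
        "sox_readers": ["SELECT"],                  # read-only access for audited reporting
        "sox_data_engineers": ["SELECT", "MODIFY"], # engineers who maintain the pipeline
    },
}

def apply_grants(objects: dict) -> None:
    """Apply least-privilege Unity Catalog grants so access is explicit and auditable."""
    for table, principals in objects.items():
        for group, privileges in principals.items():
            for privilege in privileges:
                spark.sql(f"GRANT {privilege} ON TABLE {table} TO `{group}`")

def show_current_grants(table: str):
    """Surface current grants for periodic SOX access reviews."""
    return spark.sql(f"SHOW GRANTS ON TABLE {table}")

apply_grants(GOVERNED_OBJECTS)
display(show_current_grants("finance_prod.reporting.gl_transactions"))
```

Keeping grants in version-controlled code like this (rather than ad hoc UI changes) is one way such a role could make access reviews and CI/CD-driven governance repeatable.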
Must-have qualifications:
• 10+ years of professional experience as a Data Engineer: proven track record designing, building, and maintaining scalable data platforms.
• 5+ years of hands-on experience with Databricks: deep expertise in the Databricks workspace, clusters, job automation, Delta Lake, and Spark optimization.
• 2+ years of direct experience with Unity Catalog: implemented data governance policies, managed metastores, and configured data lineage and audit logging.
• 2+ years working with Python notebooks in Databricks: developed reusable, production-grade notebooks for ETL/ELT, data validation, and automation (a sketch follows this list).
• Strong understanding of security and compliance frameworks: specifically role-based access control (RBAC), SOX compliance, data classification, and secure data sharing practices.
• Hands-on experience with Lakeflow Connect: configured and managed data ingestion pipelines from cloud sources into Databricks.
• DevOps integration experience: worked with Git-based repositories (e.g., GitHub, GitLab, Azure DevOps) using branching strategies (feature branches, PRs, CI/CD) to manage Databricks code deployments.
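As one illustration of the "reusable, production-grade notebooks" qualification, the following is a hedged sketch of a parameterized ETL notebook with basic validation and a Delta write. The source path, target table, and column names are placeholders, and the code assumes a Databricks notebook where `spark` and `dbutils` are predefined.

```python
# Hypothetical sketch of a reusable, parameterized ETL notebook.
# Source path, target table, and required columns are placeholders.
from pyspark.sql import functions as F

# Widgets make the same notebook reusable across jobs and environments.
dbutils.widgets.text("source_path", "/Volumes/raw/sales/orders/")
dbutils.widgets.text("target_table", "dev_catalog.sales.orders_clean")

source_path = dbutils.widgets.get("source_path")
target_table = dbutils.widgets.get("target_table")

raw = spark.read.format("json").load(source_path)

# Basic validation: fail fast if required columns are missing or the load is empty.
required = {"order_id", "order_ts", "amount"}
missing = required - set(raw.columns)
if missing:
    raise ValueError(f"Missing required columns: {missing}")
if raw.limit(1).count() == 0:
    raise ValueError(f"No input rows found under {source_path}")

clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") >= 0)
)

# Write to a governed Delta table registered in Unity Catalog.
clean.write.format("delta").mode("overwrite").saveAsTable(target_table)
```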
Preferred Skills:
• Experience in Fortune 100 or large enterprise environments
• Familiarity with data mesh architecture or domain-driven data design
• Exposure to infrastructure-as-code (IaC) tools (Terraform, ARM, etc.) for Databricks deployment
• Knowledge of monitoring and alerting frameworks for data pipelines (see the sketch after this list)
• Experience mentoring or enabling engineering teams through enablement frameworks
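For the monitoring-and-alerting preference, a lightweight example is a freshness check on a pipeline's output table. The sketch below is hypothetical: the table name and staleness threshold are placeholders, it assumes a Databricks notebook where `spark` is predefined, and in practice the failure would be routed through the team's alerting framework rather than just raising.

```python
# Hypothetical freshness check for a pipeline output table; table name and SLA are placeholders.
from datetime import datetime, timedelta, timezone
from pyspark.sql import functions as F

TABLE = "dev_catalog.sales.orders_clean"   # placeholder table
MAX_STALENESS = timedelta(hours=6)         # placeholder SLA

latest = (
    spark.table(TABLE)
         .agg(F.max("order_ts").alias("latest_ts"))
         .collect()[0]["latest_ts"]
)

if latest is None or datetime.now(timezone.utc) - latest.replace(tzinfo=timezone.utc) > MAX_STALENESS:
    # In a real setup this would page via the monitoring framework (e.g., a webhook);
    # raising here makes the scheduled job fail visibly instead.
    raise RuntimeError(f"{TABLE} is stale: latest order_ts = {latest}")

print(f"{TABLE} freshness OK: latest order_ts = {latest}")
```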