

Lead PySpark / SQL Engineer - Threat Detection & Databricks
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead PySpark/SQL Engineer focused on Threat Detection & Databricks, offering a contract of "X months" at a pay rate of "$X per hour." It requires 5-10+ years of data engineering experience, along with a background in threat detection and hands-on Terraform expertise.
Country
United States
Currency
$ USD
Day rate
1032
Date discovered
August 3, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
San Francisco, CA
Skills detailed
#Security #Data Engineering #SQL (Structured Query Language) #Spark (Apache Spark) #Data Processing #ETL (Extract, Transform, Load) #Scala #Terraform #Spark SQL #Cloud #PySpark #Data Integrity #Azure #AWS (Amazon Web Services) #Migration #Datasets #Databricks
Role description
Job Summary: We are seeking a skilled PySpark / SQL Engineer to support our Threat Detection team in building and migrating security analytics pipelines using Databricks. This role will focus on a platform migration project, moving detection rules and associated content from a legacy system into Databricks' native detection framework. You will be responsible for creating equivalent PySpark log pipelines, rule configuration files, unit tests, and data validation checks, and deploying these pipelines using Terraform. A strong background in data engineering, particularly with large-scale log analytics, is essential.
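To give a flavor of the detection pipelines described above, here is a minimal, hypothetical rule sketched in plain Python. A real pipeline would express the same logic as a PySpark job over log DataFrames; the field names, event values, and threshold are all illustrative, not the team's actual schema:

```python
from collections import Counter

# Hypothetical log records; in practice these would arrive as a PySpark
# DataFrame read from cloud storage rather than an in-memory list.
logs = [
    {"user": "alice", "event": "login_failed"},
    {"user": "alice", "event": "login_failed"},
    {"user": "alice", "event": "login_failed"},
    {"user": "bob", "event": "login_ok"},
]

def flag_brute_force(records, threshold=3):
    """Flag users whose failed-login count meets the threshold."""
    fails = Counter(r["user"] for r in records if r["event"] == "login_failed")
    return sorted(u for u, n in fails.items() if n >= threshold)

print(flag_brute_force(logs))  # ['alice']
```

In PySpark the same rule would typically be a `filter` on the event column followed by a `groupBy("user").count()` and a threshold filter, packaged with a rule configuration file and unit tests as the summary describes.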
Requirements
• 5-10+ years of experience as a PySpark/SQL Engineer, with a strong focus on data engineering and analytics.
• Prior experience in building threat detection or log analytics pipelines using PySpark, SQL, and Databricks.
• Hands-on experience with Terraform for deploying data infrastructure.
• Proficient in PySpark for large-scale data processing and transformation.
• Familiarity with cloud platforms such as AWS or Azure is preferred.
• Strong analytical skills and attention to detail when working with complex datasets.
• Proven ability to work effectively in collaborative, cross-functional teams.
• Excellent verbal and written communication skills in English.
Responsibilities
• Design and build threat detection pipelines using PySpark, SQL, and Databricks.
• Support the migration of detection rules and content from a legacy platform to Databricks.
• Create and maintain PySpark log pipelines and associated rule configuration files.
• Write unit tests to ensure pipeline accuracy and stability.
• Perform data validation checks to ensure data integrity.
• Deploy pipelines and infrastructure using Terraform.
• Optimize existing data workflows and queries for performance and scalability.
• Collaborate with cross-functional teams to understand data requirements and ensure alignment with detection objectives.
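The data validation responsibility above could look, in spirit, like the following plain-Python sketch. The required field names are hypothetical; in practice this kind of check would run as PySpark column expressions over the migrated datasets:

```python
# Hypothetical validation check: confirm that migrated log records carry
# every field the detection rules need, treating null values as missing.
REQUIRED_FIELDS = ("timestamp", "user", "event")

def find_invalid_records(records, required=REQUIRED_FIELDS):
    """Return indices of records with a missing or null required field."""
    return [
        i for i, rec in enumerate(records)
        if any(rec.get(field) is None for field in required)
    ]

sample = [
    {"timestamp": "2025-08-03T12:00:00Z", "user": "alice", "event": "login_failed"},
    {"timestamp": None, "user": "bob", "event": "login_ok"},  # null timestamp
    {"user": "carol", "event": "login_ok"},                   # missing timestamp
]

print(find_invalid_records(sample))  # [1, 2]
```

A check like this would sit alongside the unit tests listed above, failing the migration step early if legacy content lands in Databricks with an incomplete schema.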