

Yochana
Cybersecurity Data Pipeline Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Cybersecurity Data Pipeline Engineer on a contract basis, working fully remote. It requires 10+ years in cybersecurity, 5+ years with Cribl, Vector, Datadog, or Splunk, and proficiency in JavaScript or Python.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 27, 2026
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: Kansas, United States
Skills detailed: #Data Pipeline #Groovy #Scripting #Anomaly Detection #Snowflake #Cybersecurity #Data Integration #JSON (JavaScript Object Notation) #JavaScript #Datadog #Apache NiFi #Monitoring #Strategy #Normalization #Kafka (Apache Kafka) #Data Governance #Scala #Data Transformations #Security #Observability #Libraries #Logging #Metadata #Documentation #Python #Storage #ETL (Extract, Transform, Load) #NiFi (Apache NiFi) #Compliance #Splunk #XML (eXtensible Markup Language)
Role description
Job Title : Cybersecurity Data Pipeline Engineer
Location : Remote
Duration : Contract
Job Description:
• 10+ years of experience working in cybersecurity
• 5+ years of experience with Cribl, Vector, Datadog, Splunk, or other data pipeline platforms
• 5+ years of experience with JavaScript, Python, or another scripting language
• Lead the architecture, design, and implementation of scalable, modular, and reusable data flow pipelines using Cribl, Apache NiFi, Vector, and other open-source platforms, ensuring consistent ingestion strategies across a complex, multi-source telemetry environment.
• Develop platform-agnostic ingestion frameworks and template-driven architectures to enable reusable ingestion patterns, supporting a variety of input types (e.g., syslog, Kafka, HTTP, Event Hubs, Blob Storage) and output destinations (e.g., Snowflake, Splunk, ADX, Log Analytics, Anvilogic).
• Spearhead the creation and adoption of a schema normalization strategy, leveraging the Open Cybersecurity Schema Framework (OCSF), including field mapping, transformation templates, and schema validation logic, designed to be portable across ingestion platforms (a minimal field-mapping sketch follows this list).
• Design and implement custom data transformations and enrichments using scripting languages such as Groovy, Python, or JavaScript, while enforcing robust governance and security controls (SSL/TLS, client authentication, input validation, logging).
• Ensure full end-to-end traceability and lineage of data across the ingestion, transformation, and storage lifecycle, including metadata tagging, correlation IDs, and change tracking for forensic and audit readiness.
• Collaborate with observability and platform teams to integrate pipeline-level health monitoring, transformation failure logging, and anomaly detection mechanisms.
• Oversee and validate data integration efforts, ensuring high-fidelity delivery into downstream analytics platforms and data stores, with minimal data loss, duplication, or transformation drift.
• Lead technical working sessions to evaluate and recommend best-fit technologies, tools, and practices for managing structured and unstructured security telemetry data at scale.
• Implement data transformation logic including filtering, enrichment, dynamic routing, and format conversions (e.g., JSON to CSV, XML, or Logfmt) to prepare data from 100+ sources for downstream analytics platforms (see the Logfmt sketch after this list).
• Contribute to and maintain a centralized documentation repository, including ingestion patterns, transformation libraries, naming standards, schema definitions, data governance procedures, and platform-specific integration details.
• Coordinate with security, analytics, and platform teams to understand use cases and ensure pipeline logic supports threat detection, compliance, and data analytics requirements.
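
For illustration only, here is a minimal sketch (in Python, one of the scripting languages listed above) of the portable field-mapping and schema-validation step the OCSF normalization bullet describes. The mapping table, field names, and validation rule below are simplified stand-ins, not an authoritative OCSF class definition; an actual pipeline would follow the OCSF event classes and the transform mechanism of whichever platform (Cribl, NiFi, Vector) is in use.

import json

# Hypothetical mapping from a vendor's raw log keys to OCSF-style target
# fields (dotted paths kept as flat strings here for brevity).
FIELD_MAP = {
    "src": "src_endpoint.ip",
    "dst": "dst_endpoint.ip",
    "act": "disposition",
    "ts": "time",
}

# Minimal schema-validation rule: these target fields must be present.
REQUIRED_FIELDS = {"time", "src_endpoint.ip"}

def normalize(raw_event: dict) -> dict:
    """Rename vendor fields to the target schema and validate the result."""
    normalized = {FIELD_MAP.get(k, f"unmapped.{k}"): v for k, v in raw_event.items()}
    missing = REQUIRED_FIELDS - normalized.keys()
    if missing:
        raise ValueError(f"event failed schema validation, missing: {missing}")
    return normalized

if __name__ == "__main__":
    raw = {"src": "10.0.0.5", "dst": "203.0.113.9", "act": "blocked", "ts": 1706313600}
    print(json.dumps(normalize(raw), indent=2))

Because the mapping lives in plain data rather than platform-specific pipeline code, the same table could in principle be reused across Cribl, NiFi, or Vector transforms, which is the portability the bullet asks for.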
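
Similarly, a hedged sketch of one of the format conversions mentioned above (JSON to Logfmt). The quoting rules are deliberately simplified; a production pipeline would use the destination platform's own serializer.

import json

def to_logfmt(event: dict) -> str:
    """Render a flat dict as space-separated key=value pairs, quoting values that contain spaces or '='."""
    parts = []
    for key, value in event.items():
        text = str(value)
        if " " in text or "=" in text:
            text = '"' + text.replace('"', '\\"') + '"'
        parts.append(f"{key}={text}")
    return " ".join(parts)

if __name__ == "__main__":
    raw = '{"level": "warn", "msg": "login failed", "user": "alice", "attempts": 3}'
    print(to_logfmt(json.loads(raw)))
    # prints: level=warn msg="login failed" user=alice attempts=3
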
Thanks & Regards
Rushinga Reddy
Yochana Solutions Inc
248-598-7513 (D) || rushi@yochana.com
248-876-4228 (Fax)






