

Log Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Log Data Engineer with a contract length of "X months" and a pay rate of "$X per hour". Remote work is allowed. Key skills include AWS, data pipeline orchestration, and programming in Java, JavaScript, and Python.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 4, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Baltimore, MD
Skills detailed
#JavaScript #Logging #Documentation #Ansible #Deployment #Java #Data Lake #S3 (Amazon Simple Storage Service) #Scala #Terraform #Prometheus #Lambda (AWS Lambda) #Data Extraction #Grafana #Data Engineering #SQL (Structured Query Language) #Requirements Gathering #IAM (Identity and Access Management) #Storage #Batch #Unix #AWS (Amazon Web Services) #SQL Queries #EC2 #Security #Disaster Recovery #Data Management #GitLab #Linux #Monitoring #Scripting #Programming #Infrastructure as Code (IaC) #Version Control #DataOps #Data Processing #Datasets #Python #Data Pipeline #Data Ingestion #ETL (Extract, Transform, Load)
Role description
Overview Of Role
This role is a mix of infrastructure and data engineering (DataOps), so we are looking for candidates with experience in both infrastructure and data management. The ideal candidate can also work autonomously and prioritize their own work with minimal oversight.
Responsibilities
• Develop data processing pipelines in languages such as Java, JavaScript, and Python on Unix server environments to extract, transform, and load log data
• Implement scalable, fault-tolerant solutions for data ingestion, processing, and storage
• Support systems engineering lifecycle activities for data engineering deployments, including requirements gathering, design, testing, implementation, operations, and documentation
• Automate platform management processes using Ansible or other scripting tools and languages
• Troubleshoot incidents impacting the log data platforms
• Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs
• Develop training and documentation materials
• Support log data platform upgrades, including coordinating upgrade testing with users of the platform
• Gather and process raw data from multiple disparate sources (including writing scripts, calling APIs, and writing SQL queries) into a form suitable for analysis; a minimal sketch follows this list
• Enable batch and real-time analytical processing of log data, leveraging emerging technologies
• Participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems
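The sketch below illustrates the kind of extract/transform/load work described above: reading raw log lines, parsing them into structured records, and landing them in S3. It is a minimal illustration only, not part of the posting; the bucket, prefix, log path, and log format are hypothetical placeholders, and it assumes boto3 is installed with AWS credentials configured in the environment.

```python
"""Minimal ETL sketch for log data: extract raw lines, transform to records, load to S3.

All names below (log path, bucket, prefix, log format) are illustrative placeholders.
"""
import gzip
import json
import re
from datetime import datetime, timezone

import boto3  # assumes AWS credentials/region are configured in the environment

# Hypothetical log format: "<timestamp> <level> <message>"
LOG_PATTERN = re.compile(r"^(?P<ts>\S+) (?P<level>\w+) (?P<message>.*)$")


def extract(path: str):
    """Yield raw lines from a plain or gzipped log file."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", errors="replace") as handle:
        yield from handle


def transform(lines):
    """Parse each line into a structured record; skip lines that do not match."""
    for line in lines:
        match = LOG_PATTERN.match(line.rstrip("\n"))
        if match:
            yield match.groupdict()


def load(records, bucket: str, prefix: str) -> str:
    """Write newline-delimited JSON to S3 and return the object key."""
    key = f"{prefix}/{datetime.now(timezone.utc):%Y/%m/%d/%H%M%S}.jsonl"
    body = "\n".join(json.dumps(record) for record in records).encode("utf-8")
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body)
    return key


if __name__ == "__main__":
    records = transform(extract("/var/log/app/app.log"))  # hypothetical path
    print(load(records, bucket="example-log-lake", prefix="raw/app"))
```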
Experience
General
• Ability to troubleshoot and diagnose complex issues
• Demonstrated experience supporting technical users and conducting requirements analysis
• Able to work independently with minimal guidance and oversight
• Experience with IT Service Management and familiarity with incident and problem management
• Highly skilled in identifying performance bottlenecks, spotting anomalous system behavior, and resolving the root cause of service issues
• Demonstrated ability to work effectively across teams and functions to influence the design, testing, operations, and deployment of highly available software
• Knowledge of standard methodologies for security, performance, and disaster recovery
Required Technical Expertise
• Expertise in AWS and in implementing CI/CD pipelines that support log ingestion
• Expertise with AWS compute environments such as ECS, EKS, EC2, and Lambda
• 3-5 years' experience designing, developing, and deploying data lakes using AWS native services (S3, Firehose, IAM, Terraform)
• Experience with data pipeline orchestration platforms
• Expertise in Ansible/Terraform scripts and Infrastructure as Code is required
• Experience implementing version control and CI/CD practices for data engineering workflows (e.g. GitLab) to ensure reliable and efficient deployments
• Proficiency in distributed Linux environments
• Proficiency in implementing monitoring, logging, and alerting solutions for data infrastructure (e.g. Prometheus, Grafana)
• Experience writing data pipelines that ingest log data from a variety of sources and platforms; a minimal sketch follows this list
• Implementation knowledge of data processing pipelines using languages such as Java, JavaScript, and Python to extract, transform, and load (ETL) data
• Ability to create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets
• Ability to troubleshoot and resolve issues related to data processing, storage, and retrieval
• Experience developing systems for extracting, ingesting, and processing large volumes of data
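As a rough illustration of the log-ingestion side of the AWS data lake stack named above (S3, Firehose), the sketch below batches log records into a Kinesis Data Firehose delivery stream. It is not taken from the posting: the delivery stream name is a hypothetical placeholder, and the code assumes the stream already exists (e.g. provisioned via Terraform) and that boto3 can find AWS credentials and a region in the environment.

```python
"""Sketch of log ingestion into an S3 data lake via Kinesis Data Firehose.

The delivery stream name is a placeholder; the posting does not name one.
"""
import json

import boto3  # assumes AWS credentials/region are configured in the environment

firehose = boto3.client("firehose")


def send_logs(records, stream_name: str = "example-log-stream", batch_size: int = 500):
    """Send log records to Firehose in batches (the API caps a batch at 500 records)."""
    batch = []
    for record in records:
        batch.append({"Data": (json.dumps(record) + "\n").encode("utf-8")})
        if len(batch) == batch_size:
            _flush(batch, stream_name)
            batch = []
    if batch:
        _flush(batch, stream_name)


def _flush(batch, stream_name: str):
    """Submit one batch and report partial failures."""
    response = firehose.put_record_batch(DeliveryStreamName=stream_name, Records=batch)
    failed = response.get("FailedPutCount", 0)
    if failed:
        # A production pipeline would retry only the failed records; kept simple here.
        print(f"{failed} of {len(batch)} records failed; consider retrying them")


if __name__ == "__main__":
    send_logs({"source": "app", "line": i} for i in range(1200))
```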
#Dice