

Hyperconverged Infrastructure (HCI) and Hitachi Content Platform (HCP) Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Hyperconverged Infrastructure (HCI) and Hitachi Content Platform (HCP) Engineer for a 6-month contract, offering $80.00 - $100.00 per hour. Requires 3–7+ years in data engineering, proficiency in ETL, HCI, HCP, Lucene, and scripting. Remote work.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
800
🗓️ - Date discovered
July 25, 2025
🕒 - Project duration
6 months
🏝️ - Location type
Remote
📄 - Contract type
Contract
🔒 - Security clearance
Unknown
📍 - Location detailed
Remote
🧠 - Skills detailed
#AWS S3 (Amazon Simple Storage Service) #Data Engineering #Automation #Storage #Data Pipeline #Cloud #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Azure Blob Storage #Databases #NoSQL #Version Control #Documentation #Spark (Apache Spark) #Computer Science #Scripting #Data Processing #Scala #S3 (Amazon Simple Storage Service) #Monitoring #Compliance #Hadoop #SQL (Structured Query Language) #Big Data #ETL (Extract, Transform, Load) #Data Extraction #Data Integrity #Azure #GIT #Prometheus #Security #Bash
Role description
Overview
We are seeking a skilled Storage/Data Engineer to join our team for a large-scale archiving project. The role focuses on designing, implementing, and managing data workflows using Hyperconverged Infrastructure (HCI), Hitachi Content Platform (HCP), and related technologies. The ideal candidate will have expertise in ETL processes, Lucene query syntax, HCP administration, and PowerShell/Bash scripting to ensure efficient and secure data archiving.
Key Responsibilities
Data Engineering: Design and implement Extract, Transform, Load (ETL) processes to manage large volumes of data for archiving.
HCI Workflow Management: Configure and optimize Hyperconverged Infrastructure workflows, including data connectors, pipelines, indexes, and stages, to ensure scalability and performance.
Hitachi Content Platform (HCP): Administer, maintain, and optimize HCP systems, ensuring data integrity, availability, and compliance with archiving requirements.
Lucene Query Language: Write and optimize Lucene queries for efficient data search and retrieval within HCI environments.
Scripting and Automation: Develop and maintain PowerShell and/or Bash scripts to automate data processing, system administration, and workflow tasks (an illustrative sketch follows this list).
Collaboration: Work with cross-functional teams, including system administrators and storage engineers, to design and implement robust archiving solutions.
Documentation: Document processes, configurations, and scripts for operational continuity and knowledge sharing.
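For illustration, the sketch below shows the kind of Bash automation the scripting responsibility refers to: checksumming staged files and moving them into an archive location. The directory paths and naming are assumptions made for this example only; nothing here calls an actual HCP or HCI API.
```bash
#!/usr/bin/env bash
# Hypothetical archiving helper: the staging and archive paths below are
# placeholders for this example, not values from the actual project.
set -euo pipefail

STAGING_DIR="/data/staging"   # assumed drop-off for files ready to archive
ARCHIVE_DIR="/data/archive"   # assumed local archive target (not an HCP namespace)
MANIFEST="${ARCHIVE_DIR}/manifest-$(date +%Y%m%d).txt"

mkdir -p "$ARCHIVE_DIR"

# Record a SHA-256 checksum for each file before moving it, so integrity can
# be re-verified after the data lands in the archive tier.
find "$STAGING_DIR" -type f -print0 | while IFS= read -r -d '' file; do
    sha256sum "$file" >> "$MANIFEST"
    mv "$file" "$ARCHIVE_DIR/"
done

echo "Manifest written to $MANIFEST"
```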
Required Skills and Qualifications
ETL Expertise: Strong understanding of Extract, Transform, Load processes, with experience in building and optimizing data pipelines.
Hyperconverged Infrastructure (HCI): Knowledge of HCI workflow components (data connectors, pipelines, indexes, stages) and experience optimizing workflows for performance.
Hitachi Content Platform (HCP): Proficiency in HCP architecture, administration, and maintenance, including security and compliance features.
Lucene: Proficiency in Lucene query language syntax for search and retrieval in HCI environments (example syntax follows this list).
Scripting: Hands-on experience with PowerShell and/or Bash scripting for automation and system administration.
Problem-Solving: Strong analytical skills to troubleshoot complex issues in data pipelines, storage systems, or workflows.
Collaboration and Communication: Ability to work effectively in a team and document technical processes clearly.
Experience: 3–7+ years in data engineering, storage systems, or infrastructure management.
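The Lucene requirement refers to standard Lucene query syntax. The fragment below illustrates field terms, boolean grouping, inclusive ranges, and wildcards; the field names (contentType, retentionClass, ingestDate, objectName) are invented for the example and would need to match the real HCI index schema.
```bash
# Illustrative Lucene query strings; field names are assumptions, not fields
# from any real HCI index schema.

# Field term combined with a boolean OR group.
QUERY='contentType:pdf AND retentionClass:(legal OR finance)'

# Inclusive date range and wildcard examples in standard Lucene syntax.
RANGE_QUERY='ingestDate:[2020-01-01 TO 2024-12-31]'
WILDCARD_QUERY='objectName:invoice-2024-*'
printf '%s\n' "$QUERY" "$RANGE_QUERY" "$WILDCARD_QUERY"

# URL-encode a query before handing it to a search API; the real HCI search
# endpoint and parameters should be taken from the product documentation.
ENCODED=$(jq -rn --arg q "$QUERY" '$q | @uri')
echo "$ENCODED"
```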
Preferred Skills
Familiarity with cloud-based object storage systems (e.g., AWS S3, Azure Blob Storage); a brief example follows this list.
Experience with big data technologies (e.g., Hadoop, Spark, Kafka).
Knowledge of SQL/NoSQL databases for data extraction and transformation.
Familiarity with version control systems (e.g., Git) and monitoring tools (e.g., Prometheus, Nagios).
Certifications in Hitachi Vantara, data engineering, or scripting (e.g., AWS Certified Data Analytics, Microsoft PowerShell).
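As a small illustration of the cloud object-storage familiarity listed above, the commands below use the standard AWS CLI; the bucket name, prefixes, and storage classes are placeholders, and comparable tooling (e.g., azcopy) applies to Azure Blob Storage.
```bash
# Placeholder bucket and prefixes; DEEP_ARCHIVE and GLACIER are shown only as
# common choices for long-term archive tiers.
aws s3 cp /data/archive/manifest-20250725.txt \
    s3://example-archive-bucket/manifests/ \
    --storage-class DEEP_ARCHIVE

# Sync an archive directory, uploading only new or changed objects.
aws s3 sync /data/archive/ s3://example-archive-bucket/archive/ \
    --storage-class GLACIER
```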
Education
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Job Type: Contract
Pay: $80.00 - $100.00 per hour
Expected hours: 40 per week
Work Location: Remote