

Charter Global
HVR Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an HVR Data Engineer on a contract basis, requiring 5+ years of HVR experience and proficiency in Kubernetes, Linux, AWS, and Azure. The position is remote with initial office travel, focusing on data pipeline design and data warehousing with Snowflake.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
November 5, 2025
Duration
Unknown
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
Santa Clara County, CA
Skills detailed
#AWS (Amazon Web Services) #Datasets #Snowflake #Databricks #Replication #Data Pipeline #Azure #Data Engineering #Data Integrity #Kubernetes #Storage #Security #Data Replication #Data Science #Linux
Role description
Job Title: HVR Data Engineer
Location: Remote/WFH (first week travel to office)
Duration: Contract
Contract description:
• Design, develop, and maintain data pipelines using High Volume Replicator (HVR) for real-time data replication and integration.
• Collaborate with data scientists and analysts to understand data requirements and deliver high-quality data solutions.
• Optimize and monitor data workflows in Databricks to ensure efficient processing and storage of large datasets.
• Implement and manage data warehousing solutions using Snowflake, ensuring data integrity and security.
• Troubleshoot and resolve data-related issues, providing timely support to stakeholders.
Qualifications:
• Minimum of 5 years of experience working with High Volume Replicator (HVR) for data replication and integration.
• Proficiency in Kubernetes.
• Strong knowledge of Linux.
• Strong knowledge of AWS and Azure.
• Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.





