

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a day rate of $480 USD. It requires expertise in AWS Services, Data Lakehouse, SQL Server, and IAM, with a focus on cloud-based data workflows. Remote work is available.
Country: United States
Currency: $ USD
Day rate: $480
Date discovered: June 10, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Monitoring #SQL (Structured Query Language) #SQL Server #PostgreSQL #Data Lake #Scala #Data Architecture #AWS (Amazon Web Services) #Data Lakehouse #IAM (Identity and Access Management) #Aurora #Data Ingestion #Terraform #Cloud #Vault #Logging #Automation #ETL (Extract, Transform, Load) #Data Engineering #RDS (Amazon Relational Database Service) #Aurora PostgreSQL
Role description
Job Title: Data Engineer
Primary Skills: AWS Services, Data Lakehouse, Lake Formation, SQL Server, IAM
Secondary Skills: EKS, HashiCorp Vault, Terraform, Keycloak
Location: Remote (USA)
Job Description:
We are seeking a highly skilled Data Engineer to support the automation and transformation of data workflows in a cloud-based Data Lakehouse environment. This role focuses on designing, implementing, and maintaining secure, scalable, and efficient data infrastructure using modern AWS services and tools.
Key Responsibilities:
• Automate the creation, transformation, and integration of raw and enriched data models in the Data Lakehouse.
• Implement automated ETL pipelines for seamless data ingestion, transformation, and consumption.
• Design and implement Lake Formation integration across AWS Data Platform accounts (a minimal permission-grant sketch follows this list).
• Integrate Lake Formation with Keycloak for authentication and access control.
• Build an integrated IAM solution, including EKS service accounts, for secure identity and access management.
• Design and integrate HashiCorp Vault with AWS services (e.g., Secrets Manager, Certificate Manager) and with Keycloak.
• Develop secure integrations between HashiCorp Vault and AWS data services such as RDS Aurora PostgreSQL, RDS MSSQL, and other RDS solutions (see the Vault credential sketch after this list).
• Provision MSSQL servers on AWS using Terraform and deploy read replicas.
• Implement monitoring and logging solutions for MSSQL servers on AWS.
• Transition existing pipelines to the new MSSQL server infrastructure.
• Collaborate with business application owners to review existing data architecture, pipelines, and consumption patterns.
• Design and document the target architecture for pipelines, processing, and analytics.
• Identify opportunities for optimization, consolidation, and improved performance.
• Work closely with the data team to decompose business logic and support data transformation initiatives.
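To make the Lake Formation and Vault responsibilities above concrete, here are two minimal Python sketches. They are illustrative only: the account IDs, role ARNs, database names, Vault paths, and hostnames are hypothetical placeholders, not details taken from this posting.

The first sketch grants a consumer principal SELECT access to a Glue catalog table through Lake Formation, the kind of cross-account permission wiring this role describes:

import boto3

# Lake Formation client in the data-platform account (the region is an assumption).
lf = boto3.client("lakeformation", region_name="us-east-1")

# Grant SELECT on a single catalog table to a consumer role.
# The role ARN, database name, and table name below are hypothetical.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/analytics-consumer"},
    Resource={"Table": {"DatabaseName": "enriched", "Name": "orders"}},
    Permissions=["SELECT"],
)

The second sketch fetches short-lived database credentials from HashiCorp Vault's database secrets engine and opens a connection to an RDS Aurora PostgreSQL instance. The Vault mount point, role name, and endpoint are assumptions:

import os
import hvac
import psycopg2

# Authenticate to Vault; VAULT_ADDR and VAULT_TOKEN come from the environment.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Read dynamic credentials from a database secrets engine role
# (the mount point "database" and role "aurora-readonly" are hypothetical).
creds = client.read("database/creds/aurora-readonly")["data"]

# Connect to Aurora PostgreSQL with the short-lived credentials.
conn = psycopg2.connect(
    host="cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="analytics",
    user=creds["username"],
    password=creds["password"],
)
cur = conn.cursor()
cur.execute("SELECT 1")  # trivial connectivity check
print(cur.fetchone())
conn.close()

Because the credentials are lease-based, pipelines built this way avoid long-lived database passwords; Vault revokes the account when the lease expires.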