

Eliassen Group
Full Stack Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Full Stack Data Engineer with an unspecified contract length, paying $64.00 to $70.00/hr. Work is hybrid in Westlake, TX. Key skills include Java, SQL, AWS, ETL, and CI/CD automation. AWS certification preferred.
Country
United States
Currency
$ USD
-
Day rate
560
-
Date
April 14, 2026
-
Duration
Unknown
-
Location
Hybrid
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Westlake, TX
-
Skills detailed
#Data Integration #API (Application Programming Interface) #Deployment #ETL (Extract, Transform, Load) #Databases #Jenkins #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Data Engineering #Cloud #Docker #Java #JUnit #SQL (Structured Query Language) #Observability #Debugging #Unit Testing #Automation #DevOps #Data Processing #Python #Spring Boot #Monitoring #Computer Science #Scala #Kubernetes
Role description
Description
Hybrid Onsite Every Other Week in Westlake, TX
This role focuses on designing, developing, and maintaining backend data services and RESTful APIs that power enterprise-scale platforms. The engineer will build secure, scalable services, enable data integration, and support cloud-based deployments using AWS. The position blends data engineering and API development with emphasis on performance tuning, CI/CD automation, reliability, and production support. The work advances data-driven capabilities such as personalization and analytics for our client.
Due to client requirements, applicants must be willing and able to work on a W2 basis. For our W2 consultants, we offer a strong benefits package that includes medical, dental, and vision coverage, a 401(k) with company matching, and life insurance.
Rate: $64.00 to $70.00/hr (W2)
Responsibilities
• Design, develop, and maintain RESTful APIs for enterprise platforms.
• Build and enhance backend services using Java frameworks such as Spring Boot.
• Develop, debug, and tune complex SQL and PL/SQL for data processing and integration.
• Design and support ETL pipelines, including Python-based data processing.
• Implement secure, scalable, and cloud-native solutions on AWS.
• Develop and maintain CI/CD pipelines and automation using tools such as Jenkins, Docker, and Kubernetes.
• Create and maintain unit and automated tests using frameworks such as JUnit and Cucumber.
• Monitor applications and troubleshoot issues across development, testing, and production environments.
• Perform performance tuning and contribute to system reliability and observability.
• Collaborate in a DevOps environment and support data-driven capabilities including personalization and analytics.
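The ETL responsibility above can be sketched in a few lines of Python. This is a minimal illustration, not part of the role description: the record fields and the transformation rule are hypothetical, and a real pipeline would read from and write to actual data stores (databases, S3) rather than in-memory lists.

```python
# Minimal extract-transform-load sketch. All field names are
# hypothetical; lists stand in for real sources and sinks.

def extract(rows):
    """Extract: keep only well-formed records (here, those with an 'id')."""
    return [r for r in rows if r.get("id") is not None]

def transform(rows):
    """Transform: normalize name fields and derive a full_name column."""
    out = []
    for r in rows:
        first = r.get("first", "").strip().title()
        last = r.get("last", "").strip().title()
        out.append({"id": r["id"], "full_name": f"{first} {last}".strip()})
    return out

def load(rows, sink):
    """Load: append records to the target sink; return the count loaded."""
    sink.extend(rows)
    return len(rows)

source = [
    {"id": 1, "first": " ada ", "last": "lovelace"},
    {"id": None, "first": "bad", "last": "row"},  # dropped by extract
    {"id": 2, "first": "alan", "last": " turing"},
]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
```

In production the same extract/transform/load stages would typically be orchestrated, validated, and monitored, which is where the CI/CD and observability responsibilities above come in.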
Experience Requirements
• Bachelor's degree with 5+ years of experience, or Master's degree with 3+ years of experience, in computer science or a related field.
• Strong experience designing and developing RESTful APIs.
• Hands-on backend service development with Java frameworks such as Spring Boot.
• Solid experience with SQL and relational databases.
• Experience developing, debugging, and tuning complex SQL and PL/SQL.
• Data engineering experience supporting ETL pipelines.
• Proficiency with Python for ETL and data processing.
• Experience with AWS cloud development, deployment, and application management.
• Familiarity with CI/CD pipelines and automation tools such as Jenkins, Docker, and Kubernetes.
• Strong experience with unit testing and test automation frameworks such as JUnit and Cucumber.
• Experience with validation, monitoring, and issue resolution across environments, including production.
• Effective written and verbal communication skills.
• Ability to work effectively in fast-paced, ambiguous environments.
• Experience with Kafka (preferred).
• Experience in large federated data environments (preferred).
Education Requirements
• Bachelor's or Master's degree in computer science or a related field.
• AWS certification (preferred).






