Eliassen Group

Data Engineer

⭐ - Featured Role
This role is a Data Engineer position for a 6-month contract in Durham, NC (hybrid), offering $60.00 to $65.00/hr on a W2 basis. Key skills include AWS, Snowflake, Python, SQL, and ETL development. A Bachelor's or Master's degree in a technology-related field is required; AWS certification is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date
May 6, 2026
🕒 - Duration
6 months
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Durham, NC
🧠 - Skills detailed
#Data Quality #Data Lake #Agile #ADaM (Analysis Data Model) #BI (Business Intelligence) #Jenkins #Docker #Ansible #Azure #Snowflake #Kanban #Computer Science #Databases #Data Mart #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #Data Profiling #Datasets #SQL (Structured Query Language) #Deployment #Data Modeling #Oracle #DevOps #Python #AWS (Amazon Web Services) #Scrum #Data Extraction #Data Analysis #Informatica #Maven #Cloud
Role description
Description

Hybrid: onsite 5 days every other week in Durham, NC.

Our client seeks a Data Engineer to build and scale data solutions on AWS with Snowflake. The role focuses on data analysis, data modeling, and ETL development using Python and SQL to support an enterprise data lake and downstream data marts. You will collaborate in an Agile team to modernize on-premises pipelines into cloud-native architectures and enable reliable customer data platforms.

Due to client requirements, applicants must be willing and able to work on a W2 basis. For our W2 consultants, we offer a great benefits package that includes medical, dental, and vision benefits, a 401(k) with company matching, and life insurance.

Rate: $60.00 to $65.00/hr (W2)
Job Number: JN-032026-105976

Responsibilities
• Design, build, and optimize data pipelines and ETL workflows for a Snowflake data lake on AWS (see the first sketch after this description).
• Analyze source systems, perform data profiling, and develop robust data models and data marts.
• Develop Python and SQL solutions to migrate and transform on-premises datasets into cloud-native architectures.
• Collaborate within an Agile team to deliver iterative data capabilities and ensure data quality and reliability.
• Contribute to dashboarding and business intelligence enablement as needed.
• Support DevOps practices for CI/CD and containerized deployments where applicable.

Experience Requirements
• Extensive experience with relational databases such as Oracle or Snowflake.
• Proficiency in Python for data engineering and ETL development.
• Strong SQL skills for data extraction, transformation, and optimization.
• Experience in data warehousing, dimensional data modeling, and creation of data marts (see the second sketch below).
• Hands-on experience building data applications on a major cloud platform, preferably AWS.
• Experience with ETL technologies such as Informatica or SnapLogic.
• Experience with business intelligence and dashboards (preferred).
• Experience with DevOps, CI/CD, and related tooling such as Maven, Jenkins, Stash, Ansible, or Docker (preferred).
• Experience with Agile methodologies such as Kanban or Scrum (preferred).
• Effective verbal and written communication skills.

Education Requirements
• Bachelor's or Master's degree in a technology-related field such as Engineering or Computer Science.
• AWS certification (preferred).
• Azure certification (preferred).
• Google Cloud certification (preferred).
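To make the pipeline work above concrete, here is a minimal sketch of the extract-transform-load loop the posting describes: read a source extract, clean it with pandas, and land it in a Snowflake table. It is illustrative only, not part of the posting; the file name raw_orders.csv, the connection parameters, and the warehouse/database/schema/table names are all hypothetical, and it assumes the pandas and snowflake-connector-python (with pandas extras) packages.

```python
# Illustrative sketch only -- not part of the job posting.
# Assumes: pandas, snowflake-connector-python[pandas], and placeholder
# credentials in environment variables. All object names are hypothetical.
import os

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Extract: read a hypothetical on-premises export.
df = pd.read_csv("raw_orders.csv")

# Transform: light cleanup and column conformance in pandas.
df["ORDER_DATE"] = pd.to_datetime(df["order_date"]).dt.date
df["AMOUNT_USD"] = df["amount"].astype(float).round(2)
df = (
    df.rename(columns={"order_id": "ORDER_ID"})
      .loc[:, ["ORDER_ID", "ORDER_DATE", "AMOUNT_USD"]]
      .dropna()
)

# Load: stage the frame and bulk-copy it into a Snowflake table.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],    # placeholder
    user=os.environ["SNOWFLAKE_USER"],          # placeholder
    password=os.environ["SNOWFLAKE_PASSWORD"],  # placeholder
    warehouse="ETL_WH",    # hypothetical
    database="DATA_LAKE",  # hypothetical
    schema="RAW",          # hypothetical
)
try:
    write_pandas(conn, df, table_name="ORDERS", auto_create_table=True)
finally:
    conn.close()
```

Under the hood, write_pandas stages the frame as Parquet files and runs a bulk COPY INTO, which scales far better than row-by-row inserts; that pattern is typical of the lake-loading work described here.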
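The "dimensional data modeling" and "data marts" requirements refer to star-schema structures layered on top of the lake. The second sketch below is again illustrative only: the MART schema and every table and column name are hypothetical, and it assumes a connection opened as in the previous sketch.

```python
# Illustrative sketch only -- hypothetical MART schema and object names.
# A one-dimension, one-fact star schema of the kind the posting's
# "data marts" requirement refers to, issued through snowflake-connector-python.

DDL_STATEMENTS = [
    # Dimension: one row per customer, surrogate key for fact joins.
    """
    CREATE TABLE IF NOT EXISTS MART.DIM_CUSTOMER (
        CUSTOMER_SK   NUMBER AUTOINCREMENT PRIMARY KEY,
        CUSTOMER_ID   VARCHAR NOT NULL,
        CUSTOMER_NAME VARCHAR,
        REGION        VARCHAR
    )
    """,
    # Fact: one row per order, keyed to the dimension.
    """
    CREATE TABLE IF NOT EXISTS MART.FCT_ORDERS (
        ORDER_ID    VARCHAR NOT NULL,
        CUSTOMER_SK NUMBER REFERENCES MART.DIM_CUSTOMER (CUSTOMER_SK),
        ORDER_DATE  DATE,
        AMOUNT_USD  NUMBER(12, 2)
    )
    """,
]

def build_mart(conn) -> None:
    """Run the mart DDL against an open snowflake.connector connection."""
    with conn.cursor() as cur:
        for stmt in DDL_STATEMENTS:
            cur.execute(stmt)
```

Note that Snowflake parses but does not enforce PRIMARY KEY and REFERENCES constraints; they serve here as documentation of the star-schema join paths.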