

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length and hourly pay rate are not specified. Key skills required include Python, SQL, AWS (S3, Redshift, Athena, Glue Jobs), DBT, Informatica, and PySpark. Experience in the Insurance domain is essential.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 16, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: California, United States
Skills detailed: #Informatica #Snowflake #Scala #Data Modeling #Redshift #S3 (Amazon Simple Storage Service) #Data Processing #dbt (data build tool) #Spark (Apache Spark) #Spark SQL #SQL (Structured Query Language) #Data Engineering #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Athena #PySpark #Data Pipeline #Python
Role description
Responsibilities:
• Design, develop, and optimize scalable data pipelines and workflows.
• Work with structured and semi-structured data using Python, PySpark, SQL, and AWS services (Athena, Redshift, Glue Jobs, S3); a minimal PySpark sketch follows this list.
• Integrate and transform data for reporting and analytics using DBT (Data Build Tool).
• Develop ETL/ELT pipelines leveraging Informatica and Snowflake for high-performance data processing.
• Collaborate with stakeholders in the Insurance domain to understand data needs and translate them into technical solutions.
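As a rough illustration of the pipeline work described above, the sketch below reads semi-structured JSON from S3 with PySpark, applies a few basic transformations, and writes partitioned Parquet back to S3 for querying via Athena or Glue. The bucket names, paths, and column names (claim_id, claim_amount, claim_date) are placeholders, not details from the posting.

from pyspark.sql import SparkSession, functions as F

# Spin up a Spark session; on AWS this would typically run on Glue or EMR.
spark = SparkSession.builder.appName("claims_pipeline").getOrCreate()

# Read semi-structured JSON from S3 (bucket and prefix are placeholders).
claims = spark.read.json("s3://example-raw-bucket/claims/2025/07/")

# Basic cleanup: cast amounts, parse dates, drop rows missing key fields.
cleaned = (
    claims
    .withColumn("claim_amount", F.col("claim_amount").cast("double"))
    .withColumn("claim_date", F.to_date("claim_date"))
    .dropna(subset=["claim_id", "claim_amount"])
)

# Write partitioned Parquet back to S3 so Athena/Glue can query it directly.
(
    cleaned
    .write
    .mode("overwrite")
    .partitionBy("claim_date")
    .parquet("s3://example-curated-bucket/claims/")
)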
Requirements:
• Strong experience in Python and SQL for data transformation.
• Hands-on experience with AWS: S3, Redshift, Athena, Glue Jobs.
• Experience with DBT (Data Build Tool) for data transformation and modeling.
• PySpark development for large-scale data processing.
• Informatica for enterprise-level ETL workflows.
• Working knowledge of Snowflake: data modeling, SQL optimization, and integration (see the Snowflake sketch after this list).
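For the Snowflake requirement, a minimal sketch of querying Snowflake from Python is shown below, using the snowflake-connector-python package. The account, credentials, warehouse, database, and table/column names (claims, policy_id, claim_amount) are placeholders for illustration, not values from the posting.

import snowflake.connector

# Connection parameters are placeholders; in practice these come from a
# secrets manager or environment variables, not hard-coded values.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="INSURANCE_DB",
    schema="CURATED",
)

try:
    cur = conn.cursor()
    # Example aggregation: total claim amount per policy.
    cur.execute(
        """
        SELECT policy_id, SUM(claim_amount) AS total_claims
        FROM claims
        GROUP BY policy_id
        ORDER BY total_claims DESC
        LIMIT 10
        """
    )
    for policy_id, total_claims in cur.fetchall():
        print(policy_id, total_claims)
finally:
    conn.close()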