

Oreva Technologies, Inc.
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract of more than 6 months, with an unspecified pay rate. It requires AWS, Python, and SQL skills, plus experience in data integration and data modeling. Work is hybrid in Ridgefield, CT.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: November 12, 2025
Duration: More than 6 months
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Ridgefield, CT
Skills detailed: #Big Data #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Lambda (AWS Lambda) #Data Engineering #Databases #Airflow #AWS (Amazon Web Services) #Data Documentation #dbt (data build tool) #Snowflake #Agile #Data Management #AWS S3 (Amazon Simple Storage Service) #Storage #Data Processing #Metadata #Spark (Apache Spark) #Data Science #Data Lake #Athena #Security #MySQL #MongoDB #PostgreSQL #Data Quality #Data Pipeline #Cloud #Data Architecture #Data Warehouse #Data Storage #DevOps #Redshift #Scala #Data Governance #Data Integration #Python #Hadoop #S3 (Amazon Simple Storage Service) #AWS Glue #ADF (Azure Data Factory) #NoSQL #Data Modeling #Documentation
Role description
Job Title: Data Engineer (Hybrid – Ridgefield, CT) - 1760
Location: On-site 2-3 days per week in Ridgefield, CT
Relocation Offered: Considered
Employment Type: Full-time
2 positions open.
Must have AWS, Python, and SQL.
Excellent communication skills.
Must be able to commute to the Ridgefield, CT office 2-3 days a week. Candidates relocating to the area should have a strong reason for the move (e.g., family ties).
Interview process:
1. Recruiter screen
2. Technical call
3. Onsite with approximately 4-5 people, in separate interviews
Overview
We are looking for a hands-on Data Engineer to design, build, and maintain scalable data platforms and pipelines in a modern cloud environment. You will play a key role in shaping our data architecture, optimizing data flow, and ensuring data quality and availability across the organization.
This role offers the opportunity to contribute directly to meaningful work that supports the development and delivery of life-changing products. You will collaborate with global teams and be part of a culture that values impact, growth, balance, and well-being.
What You'll Do
• Design, build, and optimize data pipelines and ETL/ELT workflows to support analytics and reporting.
• Partner with architects and engineering teams to define and evolve our cloud-based data architecture, including data lakes, data warehouses, and streaming data platforms.
• Work closely with data scientists, analysts, and business partners to understand requirements and deliver reliable, reusable data solutions.
• Develop and maintain scalable data storage solutions (e.g., AWS S3, Redshift, Snowflake) with a focus on performance, reliability, and security.
• Implement data quality checks, validation processes, and metadata documentation.
• Monitor, troubleshoot, and improve pipeline performance and workflow efficiency.
• Stay current on industry trends and recommend new technologies and approaches.
Qualifications
Data Engineer (Mid-Level)
• Strong understanding of data integration, data modeling, and the SDLC.
• Experience working on project teams and delivering within Agile environments.
• Hands-on experience with AWS data services (e.g., Glue, Lambda, Athena, Step Functions, Lake Formation).
• Associate degree + 8 years' experience, Bachelor's + 4 years, or Master's + 2 years; or Associate degree + 4 years' experience, Bachelor's + 2 years, or Master's + 1 year.
• Expert-level proficiency in at least one major cloud platform (AWS preferred).
• Advanced SQL and strong understanding of data warehousing and data modeling (Kimball/star schema).
• Experience with big data processing (e.g., Spark, Hadoop, Flink) is a plus.
• Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
• Familiarity with CI/CD pipelines and DevOps principles.
• Proficiency in Python and SQL (required).
Desired Skills
• Experience with ETL/ELT tools (e.g., Airflow, dbt, AWS Glue, ADF).
• Understanding of data governance and metadata management.
• Experience with Snowflake.
• AWS certification is a plus.
• Strong problem-solving skills and ability to troubleshoot pipeline performance issues.