

Oreva Technologies, Inc.
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 12-month contract with eligibility for full-time conversion at 6 months. The position is remote, with a strong preference for candidates based in Atlanta, Dallas, or Miami. Key skills include Python, PySpark, SQL, and Snowflake; experience with ETL pipelines and CI/CD is required. Start date is ASAP.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 20, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Datasets #Programming #Spark (Apache Spark) #SQL Queries #"ETL (Extract, Transform, Load)" #API (Application Programming Interface) #Data Processing #Data Pipeline #Data Engineering #Scala #Snowflake #Snowpark #SQL (Structured Query Language) #PySpark #DevOps #Python
Role description
Data Engineer
Location: Remote (Strong preference for candidates based in Atlanta, Dallas, or Miami)
Start Date: ASAP
Duration: 12 Months, with eligibility for full-time conversion at 6 months
What You’ll Do
• Partner with software engineers, business stakeholders, and subject matter experts to translate requirements into scalable data solutions.
• Develop, implement, and deploy ETL pipelines and workflows (an illustrative sketch follows this list).
• Preprocess and analyze large datasets to uncover meaningful insights.
• Validate, refine, and optimize data models for performance and reliability.
• Monitor and maintain data pipelines in production, identifying improvements and refining workflows.
• Document development processes, workflows, and best practices to support team knowledge sharing.
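To give a concrete sense of the ETL pipeline work described above, here is a minimal, non-authoritative PySpark sketch: it reads a raw dataset, applies a simple cleansing and aggregation step, and writes a curated output. The source path, column names, and output location are hypothetical placeholders, not details supplied by this posting.

```python
# Minimal PySpark ETL sketch; paths, columns, and table layout are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: load raw order events (hypothetical source location)
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: basic cleansing plus a daily revenue aggregate
daily_revenue = (
    raw.filter(F.col("status") == "completed")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(
           F.sum("amount").alias("total_revenue"),
           F.countDistinct("order_id").alias("order_count"),
       )
)

# Load: write the curated dataset for downstream consumers
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
```

In practice, a pipeline like this would be parameterized, scheduled, and monitored rather than run as a one-off script, which is the production-maintenance work the role calls out.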
Technical Skills
• Strong programming proficiency in Python, PySpark, and SQL.
• Ability to craft and optimize complex SQL queries and stored procedures.
• Experience developing and maintaining scalable, high-performing data models.
• Hands-on expertise with Snowflake, including Snowpark for data processing (see the sketch after this list).
• Exposure to API integrations to support data workflows.
• Experience implementing CI/CD pipelines through DevOps platforms.
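As a rough illustration of the Snowpark data-processing skill listed above, the sketch below connects to Snowflake, filters and aggregates a source table inside the warehouse, and persists the result as a new table. The connection parameters, table names, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal Snowpark (Python) sketch; credentials and object names are hypothetical.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

# Open a Snowpark session against the target Snowflake account
session = Session.builder.configs(connection_parameters).create()

# Read a source table, aggregate inside Snowflake, and persist the result
orders = session.table("RAW.ORDERS")
revenue_by_region = (
    orders.filter(col("STATUS") == "COMPLETED")
          .group_by("REGION")
          .agg(sum_(col("AMOUNT")).alias("TOTAL_REVENUE"))
)
revenue_by_region.write.save_as_table("ANALYTICS.REVENUE_BY_REGION", mode="overwrite")
```

The point of Snowpark here is that the transformation is pushed down and executed in Snowflake itself, so the Python process only orchestrates the query rather than moving data out of the warehouse.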