

Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5–10 years of experience, focusing on Python, SQL, and Snowflake. It offers a 7-month remote contract at a competitive pay rate, emphasizing ETL pipelines and data quality improvement.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 13, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Fixed Term
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Airflow #API (Application Programming Interface) #Data Access #Data Engineering #AI (Artificial Intelligence) #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Linux #Unix #Apache Airflow #GitHub #Datasets #Python #PySpark #Data Quality #ChatGPT #Spark (Apache Spark) #Snowflake
Role description
🚀 Principal Software Engineer – Data Engineering | 100% Remote
We’re on the hunt for a data engineering powerhouse who’s ready to take data to the next level.
This isn’t just about writing code; it’s about making data usable, reliable, and powerful across a global organization. You’ll build pipelines, clean datasets, migrate us from Postgres to Snowflake, and help drive data quality at scale.
📅 Contract Length: 7 Months with possible extension.
📍 Location: 100% Remote
💻 What You’ll Do:
• Craft & optimize Python scripts for moving and transforming data
• Migrate systems from Postgres to Snowflake
• Build and maintain ETL pipelines using Apache Airflow (see the sketch after this list)
• Tackle data quality issues head-on and improve reporting accuracy
• Work with APIs, advanced SQL, and (if you’re feeling fancy) PySpark
• Leverage AI tools like GitHub Copilot, Gemini, and ChatGPT to innovate faster
• Collaborate with business teams to make data accessible & actionable
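For illustration only (not part of the client's posting): a minimal sketch of the kind of Airflow pipeline this role would own, copying one table from Postgres into Snowflake. The connection IDs, table names, and schedule are hypothetical placeholders, and it assumes the Airflow Postgres and Snowflake provider packages are installed.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook
from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook


def copy_orders():
    # Extract rows from the source Postgres table
    # ("postgres_default" and the orders table are placeholder names).
    src = PostgresHook(postgres_conn_id="postgres_default")
    rows = src.get_records(
        "SELECT id, customer_id, total, created_at FROM orders"
    )

    # Load them into Snowflake ("snowflake_default" is a placeholder connection ID).
    dest = SnowflakeHook(snowflake_conn_id="snowflake_default")
    dest.insert_rows(
        table="ANALYTICS.ORDERS",
        rows=rows,
        target_fields=["ID", "CUSTOMER_ID", "TOTAL", "CREATED_AT"],
    )


with DAG(
    dag_id="orders_postgres_to_snowflake",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # "schedule" requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="copy_orders", python_callable=copy_orders)
```

In a real Postgres-to-Snowflake migration the load would typically be incremental and wrapped in data-quality checks rather than a full-table copy, but the shape of the DAG stays the same.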
🔑 What You Bring:
• 5–10 years in data engineering / integration
• Expert in Python + deep database experience
• Strong SQL and API integration skills
• Comfortable in Linux/Unix environments
• Experience with Snowflake is a big plus
• Clear communicator & independent problem solver
🌍 Why Join?
• Fully remote work from anywhere
• Global impact on how the client uses and trusts its data
• Blend of new development (90%) + problem-solving support (10%)
• One-and-done interview process
If you thrive at the intersection of data, development, and problem-solving, and you love making messy data clean, this role is calling your name.