Agile Resources, Inc.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5–8+ years of experience, focusing on Python, ETL, and Postgres. It offers an indefinite remote contract at up to $75/hour W-2 or $83/hour 1099 and requires U.S. citizenship or a Green Card.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
November 8, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W-2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Strategy #Data Governance #Database Design #Data Lake #Data Quality #Python #Scala #Data Strategy #Integration Testing #JSON (JavaScript Object Notation) #Logstash #Linux #Automation #Apache Airflow #Documentation #Storage #Airflow #Data Integration #Programming #Security #BI (Business Intelligence) #Data Manipulation #Data Pipeline #Data Engineering #ETL (Extract, Transform, Load)
Role description
Location/Remote: 100% Remote (U.S. only; preference for EST/CST time zones; must work EST hours)
Employment Type: Indefinite W-2/1099 contract (some people have worked here for 7+ years)
Compensation: Up to $75/hour W-2 or $83/hour 1099
Work Authorization: U.S. Citizen or Green Card Holder only (due to work involving sensitive material)
Benefits: Medical, dental, vision, LTD/STD, HSA/FSA, term life, and supplemental health insurance (e.g., Aflac) for all employees and their families, if desired; 401(k)

Job Summary
We are seeking a Data Engineer to design, implement, and support the data pipelines, platforms, and integrations that power business analytics and operational systems. This role focuses on ensuring data availability, accessibility, and performance across multiple environments. The ideal candidate is a highly skilled engineer with deep experience in Python, ETL processes, and Postgres, capable of transforming raw data into meaningful insights and supporting long-term data strategy. This position offers full ownership of data workflows and the opportunity to collaborate directly with technical leaders, product managers, and analysts to shape a secure, scalable data ecosystem.

Responsibilities
• Design, build, and maintain data pipelines, data platforms, and automated integrations supporting business intelligence and analytics needs.
• Develop, optimize, and monitor ETL processes for moving data between operational and analytical systems.
• Model and structure data for scalability, security, and performance.
• Collaborate with product managers, analysts, and IT to gather requirements and translate them into effective data engineering solutions.
• Ensure the accuracy, reliability, and timeliness of data across environments.
• Perform unit and integration testing to validate the quality of deliverables.
• Diagnose and resolve data-related production issues with urgency and precision.
• Document systems, workflows, and data flows to ensure maintainability.
• Monitor and improve database health, data quality, and system performance.
• Participate in proof-of-concept and pilot initiatives for emerging tools and platforms.
• Communicate project progress, risks, and technical decisions clearly to stakeholders.
• Mentor team members and contribute to shared data engineering standards and best practices.

Required Skills & Experience
• 5–8+ years of professional experience in data engineering or related roles.
• Strong programming proficiency in Python, including data manipulation and automation.
• Hands-on experience building ETL pipelines for data lakes or warehouses.
• Deep understanding of Postgres and relational database design, optimization, and performance tuning.
• Proficiency in SQL, APIs, and data integration between systems.
• Familiarity with structured and unstructured data formats (CSV, JSON, Excel, etc.).
• Practical experience with Linux/Windows environments and an understanding of how infrastructure components (memory, storage, VM/host) interact.
• Excellent communication and documentation skills.
• Proven ability to work independently and deliver high-quality, on-time solutions in a fast-paced environment.

Preferred Qualifications
• Experience with Microsoft Fabric or Elastic (ELK) for analytics and data search.
• Familiarity with Apache Airflow, Logstash, or similar pipeline orchestration tools.
• Understanding of data governance, quality, and security best practices.
• Strong analytical mindset and problem-solving skills with attention to detail.