

Data Engineer ($55/hr)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer ($55/hr) with 8–12 years of experience, focusing on Python and AWS. Contract length is unspecified. Responsibilities include ETL development, data integration, and performance tuning; strong problem-solving and collaboration skills are required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
440 ($55/hr × 8 hours)
🗓️ - Date discovered
August 12, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Irvine, CA
🧠 - Skills detailed
#Python #Athena #Data Lake #Leadership #Compliance #Apache NiFi #Data Governance #Talend #Data Warehouse #Libraries #Spark (Apache Spark) #Data Architecture #Data Integration #Informatica #NumPy #Data Analysis #Data Integrity #Pandas #Data Processing #Documentation #AWS (Amazon Web Services) #Data Engineering #Redshift #Lambda (AWS Lambda) #Data Science #ETL (Extract, Transform, Load) #Scala #Data Pipeline #NiFi (Apache NiFi) #AWS Glue #PySpark #Cloud #S3 (Amazon Simple Storage Service)
Role description
We are seeking an experienced and highly skilled Data Engineer with a strong background in Python and AWS to design, develop, and optimize data pipelines and integration processes. This role requires both deep technical expertise and the ability to collaborate effectively across teams to deliver high-quality, scalable data solutions.
Key Responsibilities
• ETL Design & Development: Architect, build, and maintain robust ETL pipelines to efficiently extract, transform, and load data from diverse sources into data warehouses and data lakes (see the sketch after this list).
• Data Integration: Integrate structured and unstructured data using modern tools and frameworks such as AWS Glue, Informatica, Talend, Apache NiFi, or similar technologies.
• Optimization & Performance Tuning: Continuously enhance ETL processes for maximum performance, scalability, and reliability while reducing processing time and resource consumption.
• Python Development: Utilize Python (including libraries such as pandas, NumPy, and PySpark) to build scalable, reusable, and high-performance data processing scripts and workflows.
• Cloud Platform Expertise: Design and implement cloud-based data solutions using AWS services (e.g., AWS Glue, S3, Lambda, Redshift, Athena).
• Collaboration: Partner with data analysts, data scientists, and business stakeholders to ensure solutions align with organizational objectives.
• Technical Leadership: Mentor junior engineers, share best practices, and foster a collaborative and innovative engineering culture.
• Documentation & Compliance: Maintain clear documentation for all data processes and ensure adherence to data governance, regulatory, and compliance requirements.
• Troubleshooting: Proactively diagnose and resolve complex data-related issues to maintain data integrity and availability.
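To make the responsibilities above concrete, the sketch below shows a minimal PySpark ETL job of the kind described: extract raw CSV from an S3 landing zone, transform and clean it, and load partitioned Parquet into the data lake. All bucket names, paths, and column names here are hypothetical illustrations, not details from this posting.

```python
# Minimal PySpark ETL sketch. Bucket names, paths, and columns are
# hypothetical placeholders, not details from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files from an S3 landing zone (hypothetical path).
raw = spark.read.option("header", True).csv("s3://example-landing/orders/")

# Transform: cast types, drop malformed rows, derive a partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write columnar Parquet to the data lake, partitioned by date,
# where it can be queried by Athena or loaded into Redshift.
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/curated/orders/"
)

spark.stop()
```

Partitioning the output by date is one common way to keep downstream Athena or Redshift Spectrum queries scanning only the data they need.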
Qualifications
• Experience: 8–12 years of professional experience in ETL development, data integration, and data engineering.
• Technical Skills:
  • Proficiency in Python, including pandas, NumPy, and PySpark.
  • Strong knowledge of AWS data services and cloud-based data architectures.
  • Hands-on experience with ETL tools (e.g., AWS Glue, Talend, Informatica, Apache NiFi).
• Additional Skills:
  • Strong problem-solving abilities and attention to detail.
  • Excellent communication and collaboration skills.
  • Experience with performance tuning and optimization of large-scale data pipelines (a short pandas sketch follows this list).
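As a small illustration of the pandas/NumPy proficiency and performance-tuning experience asked for above, the sketch below processes a file too large for memory in fixed-size chunks, keeping each transform vectorized. The file name, columns, and chunk size are hypothetical, chosen only to show the pattern.

```python
# Chunked pandas processing sketch: bound memory use on a large file
# while keeping transforms vectorized. File name and columns are hypothetical.
import numpy as np
import pandas as pd

totals = []
for chunk in pd.read_csv("events.csv", chunksize=100_000):
    # Vectorized transform: log-scale a skewed metric without a Python loop.
    chunk["log_value"] = np.log1p(chunk["value"])
    # Aggregate within the chunk so only small partial results are retained.
    totals.append(chunk.groupby("category")["log_value"].sum())

# Combine the per-chunk partial aggregates into one final result.
result = pd.concat(totals).groupby(level=0).sum()
print(result.head())
```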