

Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 4+ years of experience building data pipelines using Hadoop, Hive, PySpark, and Python. The contract length is unspecified; the work focuses on AWS S3, ETL automation, and data visualization with Power BI.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
464
-
🗓️ - Date discovered
August 12, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Chandler, AZ
-
🧠 - Skills detailed
#Deployment #Python #AWS S3 (Amazon Simple Storage Service) #Storage #Shell Scripting #Data Modeling #Scripting #Batch #Unix #Spark (Apache Spark) #Data Integration #Automation #Data Mart #AWS (Amazon Web Services) #BI (Business Intelligence) #Data Engineering #Hadoop #Visualization #ETL (Extract, Transform, Load) #Scala #Big Data #Data Pipeline #Dremio #Database Design #Security #MySQL #PySpark #Cloud #S3 (Amazon Simple Storage Service)
Role description
Job Description:
We are seeking a Data Engineer with a minimum of 4 years of hands-on experience in building scalable and efficient data pipelines using modern big data technologies and cloud platforms. This individual will work on end-to-end data engineering tasks—from data modeling and pipeline development to orchestration and automation—supporting key analytics and business intelligence functions.
Key Responsibilities:
• Design, model, and build robust data pipelines using big-data technologies such as Hadoop, Hive, PySpark, and Python
• Integrate and manage data in AWS S3 – focusing on object storage, security, and data service integrations
• Apply business logic to raw data to transform it into consumable formats for downstream systems and analytics
• Automate pipeline processes and ETL workflows using Spark, Python, and Hive
• Manage and schedule batch jobs using Autosys
• Build and maintain data models and data marts with sound database design principles (MySQL or equivalent)
• Develop dashboards and reports using Power BI and Dremio
• Perform scripting and automation tasks using UNIX/Shell scripting
• Participate in CI/CD practices to streamline development and deployment
• Troubleshoot and resolve data-related issues proactively
• Collaborate with cross-functional teams to gather requirements and deliver data solutions
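To give candidates a feel for the "apply business logic to raw data" and ETL-automation responsibilities above, here is a miniature, hedged sketch. It uses only the Python standard library (on the job this logic would live in a PySpark job reading from S3); the field names and aggregation rule are invented for illustration only:

```python
import csv
import io
import json

# Hypothetical raw input: order records as they might land in S3.
# Column names are invented examples, not from the job posting.
RAW_CSV = """order_id,amount,region
1001,250.00,AZ
1002,75.50,CA
1003,310.25,AZ
"""

def transform(raw_csv: str) -> list[dict]:
    """Apply simple 'business logic': aggregate order totals per region,
    emitting JSON-ready records in a consumable shape for a dashboard."""
    rows = csv.DictReader(io.StringIO(raw_csv))
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return [{"region": r, "total_amount": round(t, 2)} for r, t in sorted(totals.items())]

print(json.dumps(transform(RAW_CSV)))
# → [{"region": "AZ", "total_amount": 560.25}, {"region": "CA", "total_amount": 75.5}]
```

In production the same shape of transform would typically be expressed with PySpark DataFrame operations (`groupBy`/`agg`) over S3-backed data and scheduled via Autosys, per the responsibilities listed.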
Required Qualifications:
• Minimum 4 years of professional experience in data engineering roles
• Strong hands-on experience with:
• Hadoop, Hive, PySpark, Python
• AWS S3 – storage, security, and data integration
• Autosys – job scheduling and orchestration
• Power BI, Dremio – reporting and visualization
• Unix/Shell scripting, CI/CD pipelines
• Solid understanding of data modeling and database design
• Proven ability to work independently and take ownership of deliverables
EEO: “Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of – Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”