

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 4+ years of experience building data pipelines using Hadoop, Hive, PySpark, and Python. Contract length and pay rate are unspecified in the posting. Remote work is allowed. Key skills include AWS S3, Autosys, PowerBI, and Unix scripting.
Country: United States
Currency: $ USD
Day rate: 456
Date discovered: July 24, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Chandler, AZ
Skills detailed: #Visualization #Database Design #Automation #S3 (Amazon Simple Storage Service) #Hadoop #MySQL #Data Engineering #Deployment #Data Mart #Dremio #Unix #Python #Cloud #Scripting #Big Data #AWS (Amazon Web Services) #Spark (Apache Spark) #Data Integration #BI (Business Intelligence) #Scala #Batch #Storage #PySpark #AWS S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Data Modeling #Data Pipeline #Security #Shell Scripting
Role description
Job Description:
We are seeking a Data Engineer with a minimum of 4 years of hands-on experience building scalable and efficient data pipelines using modern big data technologies and cloud platforms. This individual will work on end-to-end data engineering tasks, from data modeling and pipeline development to orchestration and automation, supporting key analytics and business intelligence functions.
Key Responsibilities:
• Design, model, and build robust data pipelines using big data technologies such as Hadoop, Hive, PySpark, and Python (a minimal sketch follows this list)
• Integrate and manage data in AWS S3, focusing on object storage, security, and data service integrations
• Apply business logic to raw data to transform it into consumable formats for downstream systems and analytics
• Automate pipeline processes and ETL workflows using Spark, Python, and Hive
• Manage and schedule batch jobs using Autosys
• Build and maintain data models and data marts with sound database design principles (MySQL or equivalent)
• Develop dashboards and reports using PowerBI and Dremio
• Perform scripting and automation tasks using Unix/shell scripting
• Participate in CI/CD practices to streamline development and deployment
• Troubleshoot and resolve data-related issues proactively
• Collaborate with cross-functional teams to gather requirements and deliver data solutions
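To make the pipeline work above concrete, here is a minimal PySpark sketch of the pattern these responsibilities describe: read raw data from S3, apply business logic, and publish a consumable table to a Hive data mart. All bucket, path, column, and table names (raw-sales-bucket, sales_mart.daily_sales, and so on) are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Hive support lets the job write managed tables registered in the metastore.
spark = (
    SparkSession.builder
    .appName("daily-sales-pipeline")
    .enableHiveSupport()
    .getOrCreate()
)

# Read raw objects from S3 (s3a:// is the usual Hadoop/Spark connector scheme).
raw = spark.read.parquet("s3a://raw-sales-bucket/sales/2025/07/")

# Apply business logic: keep completed orders and derive a net-revenue column.
curated = (
    raw.filter(F.col("order_status") == "COMPLETED")
       .withColumn("net_revenue", F.col("gross_amount") - F.col("discount_amount"))
)

# Land the consumable output in a Hive data mart for downstream analytics.
curated.write.mode("overwrite").saveAsTable("sales_mart.daily_sales")
```

In practice, a job like this would be packaged and submitted via spark-submit, triggered on a batch schedule by Autosys, and the resulting table exposed to BI tools such as PowerBI or Dremio.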
Required Qualifications:
• Minimum 4 years of professional experience in data engineering roles
• Strong hands-on experience with:
  • Hadoop, Hive, PySpark, Python
  • AWS S3: storage, security, and data integration
  • Autosys: job scheduling and orchestration
  • PowerBI, Dremio: reporting and visualization
  • Unix/shell scripting, CI/CD pipelines
• Solid understanding of data modeling and database design
• Proven ability to work independently and take ownership of deliverables
EEO: "Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans."