Mondo

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of $520 per day. Candidates must have 3+ years in ETL development, strong SQL, Python, and Linux skills, and the ability to obtain a security clearance.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date
November 5, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Yes
📍 - Location detailed
United States
🧠 - Skills detailed
#Big Data #SQL (Structured Query Language) #Cloudera #EDW (Enterprise Data Warehouse) #Data Architecture #Scripting #Airflow #ETL (Extract, Transform, Load) #Python #Version Control #Linux #Data Quality #Apache Spark #Spark (Apache Spark) #Data Pipeline #Complex Queries #Security #Documentation #Impala #Computer Science #Data Engineering #Data Warehouse #Cloud #Data Framework #Data Modeling
Role description
We're seeking a skilled Data Engineer to support and enhance a growing Enterprise Data Warehouse (EDW) environment. This role focuses on modernizing ETL frameworks, optimizing large-scale data pipelines, and providing hands-on production support within a collaborative team setting. The ideal candidate will have strong technical proficiency in ETL, SQL, Python, and Linux, along with a passion for data architecture and performance optimization.

Key Responsibilities
• Design, develop, and modernize ETL pipelines using Apache Spark and Iceberg (a minimal sketch follows this description)
• Maintain, optimize, and migrate legacy ETL jobs
• Integrate mainframe data using Precisely Connect
• Participate in an on-call rotation for production support and troubleshooting
• Collaborate closely with cross-functional teams, including data engineers, analysts, and administrators
• Ensure data quality, consistency, and performance across the data warehouse
• Follow SDLC best practices, version control standards, and documentation protocols

Requirements
Must-Have Qualifications
• 3+ years of experience in ETL development or data engineering
• Strong proficiency in SQL (complex queries, performance tuning, data modeling)
• Hands-on experience with Python and/or Linux scripting in Spark-based environments
• Proven production support experience in enterprise data ecosystems
• Demonstrated understanding of data architecture and optimization principles
• Ability to learn new tools quickly and adapt to evolving data frameworks
• Ability to obtain a security clearance

Nice-to-Have Skills
• Experience with Cloudera (Hive/Impala), Apache Spark, Iceberg, or Airflow
• Familiarity with Precisely Connect for Big Data
• Background in financial services or student lending data environments
• Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience)

Why This Role
This is an excellent opportunity to join a team that's actively modernizing its data environment, leveraging next-generation frameworks and large-scale data platforms. You'll have the chance to shape a new data warehouse architecture while maintaining the stability of existing pipelines.
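To make the pipeline work above concrete, here is a minimal sketch of the kind of Spark + Iceberg ETL job this role would build and support. It is illustrative only: the catalog, database, table, and column names (edw, staging.loan_transactions, transaction_id, and so on) are hypothetical assumptions, not details from the posting, and the Iceberg runtime jar is assumed to be on the Spark classpath (e.g., supplied via --packages).

```python
# Minimal sketch, assuming PySpark 3.x with the Iceberg runtime available.
# All names below (edw catalog, staging/finance tables, columns) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("edw-etl-sketch")
    # Iceberg session extensions and a Hive-backed catalog; these values are
    # environment-specific assumptions, not details from the posting.
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.edw", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.edw.type", "hive")
    .getOrCreate()
)

# Extract: read a raw staging table (hypothetical name).
raw = spark.read.table("staging.loan_transactions")

# Transform: stand-in cleansing rules; a real job would encode business logic.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("transaction_date").isNotNull())
)

# Load: append into an Iceberg table in the EDW catalog (table assumed to
# already exist; use .createOrReplace() on a first run instead).
cleaned.writeTo("edw.finance.loan_transactions").append()
```

In practice, a job like this would be scheduled and monitored by an orchestrator such as Airflow (listed above as a nice-to-have), with data quality checks and version-controlled deployment wrapped around it.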