Data Engineer ($60/hr)

⭐ - Featured Role | Apply direct with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
480
🗓️ - Date discovered
September 9, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#SQL (Structured Query Language) #Database Design #Hadoop #XML (eXtensible Markup Language) #Data Processing #Spark (Apache Spark) #Business Objects #Scala #Data Modeling #Apache Spark #Python #R #Data Governance #SAS #Datasets #JavaScript #SPSS (Statistical Package for the Social Sciences) #Tableau #Data Pipeline #Programming #CouchDB #BO (Business Objects) #Data Mapping #ETL (Extract, Transform, Load) #Qlik #Visualization #SQL Queries #Data Mining #Data Engineering
Role description
We are seeking a highly skilled Data Engineer to join our team and play a critical role in designing, building, and maintaining robust data pipelines and systems. The ideal candidate will have hands-on expertise in Spark, Scala, and SQL, as well as a strong understanding of data modeling, data mining, and database design. The role requires a mix of technical depth, problem-solving ability, and analytical thinking to uncover insights from complex data sets and enable better decision-making across the organization.

Key Responsibilities
• Design, develop, and maintain scalable data pipelines and systems (see the sketch after this description for an illustration).
• Use automated tools to extract data from primary and secondary sources.
• Reverse engineer SQL queries and Scala code to understand existing functionality.
• Clean and validate data by removing corrupted records, fixing coding errors, and resolving quality issues.
• Perform in-depth analysis to assess the quality, meaning, and integrity of data.
• Filter and refine data by reviewing reports and performance indicators to identify and correct problems.
• Use statistical and analytical tools to identify, analyze, and interpret patterns and trends in complex data sets.
• Develop and maintain database designs, data models, and techniques for data mining and segmentation.
• Collaborate with programmers, engineers, and business stakeholders to identify process improvement opportunities and propose system modifications.
• Prepare clear, insightful reports and dashboards for management, outlining trends, patterns, and predictions.
• Contribute to data governance strategies and ensure data consistency across systems.
• Support ongoing epics and projects by providing data mapping, query support, and technical expertise.

Must-Have Skills
• Strong knowledge of Spark, Scala, and SQL.
• Ability to reverse engineer SQL queries and Scala code.
• Experience with database design, data modeling, and data mining techniques.
• Strong analytical skills to identify, interpret, and visualize patterns and trends in large datasets.

Good-to-Have Skills
• Hands-on experience with data processing platforms such as Hadoop, CouchDB, and Apache Spark.
• Experience with statistical tools and programming languages such as Python, R, SAS, SPSS, and Excel.
• Familiarity with reporting tools and frameworks (Business Objects, ETL frameworks, JavaScript, XML).
• Proficiency with data visualization platforms such as Tableau, Qlik, or similar tools.
• Knowledge of how to create and apply algorithms to large datasets for problem-solving.
• Exposure to data governance practices and large-scale data mapping (e.g., ACRS, MARSHA).
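The listing itself includes no code, but as a rough illustration of the pipeline and data-cleaning responsibilities above, here is a minimal Spark/Scala sketch. The input and output paths, the column names (event_id, event_ts, status), and the UNKNWN-to-UNKNOWN recode are assumptions made for the example, not details taken from the role.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Minimal sketch of a batch pipeline: read raw events, drop corrupted rows,
// fix a known coding error, deduplicate, and write a curated dataset that
// downstream SQL reports can query. All names and paths are assumptions.
object CleanEventsPipeline {

  // Basic cleaning: remove rows missing required fields, normalise a
  // miscoded status value, drop duplicate events, and derive a date column.
  def clean(raw: DataFrame): DataFrame =
    raw
      .filter(col("event_id").isNotNull && col("event_ts").isNotNull)
      .withColumn("status",
        when(col("status") === "UNKNWN", lit("UNKNOWN")).otherwise(col("status")))
      .dropDuplicates("event_id")
      .withColumn("event_date", to_date(col("event_ts")))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clean-events").getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/events/")          // assumed input location

    val cleaned = clean(raw)

    // Expose the result to SQL so analysts and dashboards can query it.
    cleaned.createOrReplaceTempView("events_clean")
    spark.sql("SELECT status, COUNT(*) AS n FROM events_clean GROUP BY status").show()

    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/")  // assumed output location

    spark.stop()
  }
}
```

In practice the cleaning rules would be driven by the quality issues surfaced during analysis, and the temporary view is where reverse-engineered SQL queries and reporting logic would plug in.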