

Matlen Silver
Sr Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Key skills include 10-12 years of experience with Oracle SQL, PySpark development, and large-scale data projects in financial services. A Bachelor's or Master's degree in Computer Science or a related field is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 20, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Migration #Data Engineering #Leadership #Storage #Big Data #Scala #Data Integrity #Programming #YARN (Yet Another Resource Negotiator) #SQL (Structured Query Language) #Security #ETL (Extract, Transform, Load) #Datasets #Oracle #PySpark #Data Storage #Data Analysis #HDFS (Hadoop Distributed File System) #Forecasting #Data Pipeline #Database Design #Hadoop #Data Architecture #Spark (Apache Spark) #Computer Science
Role description
Enterprise Finance Technology's GWD Technology team is designing and implementing innovative solutions to improve forecasting processes and support operational excellence. This role works closely with technical leads and Product Owners to deliver and support data, analytics, and reporting functionality on the GWD platform. The right candidate will be responsible for hands-on application development supporting the current and target processes across the SDLC: design, build, test, deploy, and support.
Required Skills and Experience We Are Looking For:
Technical Expertise:
10-12 years of experience in Oracle SQL programming and development.
Deep understanding of PySpark development principles, including RDDs, DataFrames, transformations, and actions, with the ability to write optimized, scalable Spark jobs.
Strong grasp of object-oriented programming concepts and how they apply within PySpark for modular, reusable, and maintainable code.
Experience designing and implementing data pipelines using PySpark and related big data tools, ensuring efficient ingestion, processing, and delivery of large datasets (a minimal pipeline sketch follows this list).
Hands-on knowledge of Hadoop ecosystem components (HDFS, YARN) and Hive for distributed data storage and query optimization.
Experience in deep data analysis, triaging, identifying data gaps, patterns, and behaviors to support enhancements and user requests.
Experience working with large data sets (~100 million records) and optimizing performance for scalability.
Ensure data integrity, security, and performance in modernized database environments.
Ability to troubleshoot and optimize Spark jobs for performance, including partitioning strategies, caching, and understanding of Spark execution plans (see the tuning sketch after this list).
Ability to lead the migration of legacy database modules to a big data stack.
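To illustrate the kind of PySpark pipeline work the bullets above describe, here is a minimal sketch of an ingest-transform-write job. It is illustrative only; the paths, schema, and column names (trades, status, trade_ts, notional, desk) are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gwd_forecast_ingest").getOrCreate()

# Ingest a large dataset; DataFrame transformations stay lazy until an action runs.
trades = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/data/raw/trades/")  # hypothetical source path
)

# Transformations: filter settled trades, derive a date column, aggregate by desk.
daily_totals = (
    trades
    .filter(F.col("status") == "SETTLED")
    .withColumn("trade_date", F.to_date("trade_ts"))
    .groupBy("trade_date", "desk")
    .agg(F.sum("notional").alias("total_notional"))
)

# Action: write partitioned Parquet for downstream reporting and forecasting.
(daily_totals.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("/data/curated/daily_totals/"))  # hypothetical target path
```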
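The troubleshooting bullet mentions partitioning, caching, and execution plans. A minimal tuning sketch of that workflow, assuming the daily_totals DataFrame from the example above and an arbitrary partition count of 200:

```python
# Inspect the physical plan to spot wide shuffles and skew before tuning.
daily_totals.explain(True)

# Repartition on the aggregation key to balance shuffle partitions, and cache
# the result when it is reused by multiple downstream actions.
balanced = daily_totals.repartition(200, "trade_date").cache()
balanced.count()  # an action that materializes the cache
```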
Soft Skills:
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Leadership and team collaboration skills.
Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Extensive experience (minimum 8-10 years) in data architecture, database design, or data engineering roles, with a focus on big data.
Experience working with large-scale data projects, preferably within the financial services industry.






