

Sr. Database Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Database Engineer in Metro Park, NJ, for 12 months at a competitive pay rate. It requires 10+ years of experience, expertise in Spark, Python, and Scala, and advanced database skills, including ETL processes and cloud platforms.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 13, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: New Jersey, United States
Skills detailed: #Data Engineering #Spark (Apache Spark) #Delta Lake #Programming #AWS (Amazon Web Services) #Database Performance #Data Processing #Oracle #DevOps #Scala #Apache Spark #Database Systems #Data Ingestion #MongoDB #Big Data #Security #Databases #Database Schema #Python #Indexing #Data Pipeline #Azure #SQL (Structured Query Language) #MySQL #Data Science #ETL (Extract, Transform, Load) #Hadoop #Database Design #Distributed Computing #Data Integrity #Schema Design #Complex Queries #Replication #NoSQL #Data Modeling #GCP (Google Cloud Platform) #Kafka (Apache Kafka) #PostgreSQL #Cloud
Role description
Job Title: Sr. Database Engineer
Location: Metro Park, NJ (Onsite/Hybrid)
Duration: 12 Months
Experience Required: 10+ years
Job Description:
Mandatory Skills: Spark, Python, Scala, and advanced database skills
Job Summary:
We are seeking a highly skilled Senior Database Engineer with expertise in big data processing, advanced database design, and programming in Python and Scala.
The ideal candidate will have a strong background in Apache Spark, distributed computing, and modern database technologies, and will play a key role in building scalable, high-performance data platforms and pipelines that drive business insights and decisions.
Key Responsibilities:
Design, implement, and maintain robust and scalable data pipelines using Apache Spark and Scala/Python.
Develop and optimize complex queries, stored procedures, and data transformation logic for both structured and semi-structured data.
Architect and manage relational (e.g., PostgreSQL, Oracle, MySQL) and NoSQL (e.g., MongoDB, Cassandra) database systems.
Ensure data integrity, performance tuning, and security across database environments.
Collaborate with data scientists, data engineers, and product teams to integrate data into analytical and operational systems.
Implement data modeling best practices and contribute to the database schema design process.
Troubleshoot and resolve database-related issues, ensuring high availability and reliability.
Participate in the evaluation of new technologies and make recommendations for future growth and scalability.
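The pipeline responsibilities above follow the classic extract-transform-load pattern. As a minimal, Spark-free sketch of that pattern in plain Python (illustrative only; the data and table names are hypothetical, and the role itself calls for Apache Spark at production scale):

```python
import csv
import io
import sqlite3

# Hypothetical source data; a real pipeline would read from files, Kafka, etc.
raw = "id,amount\n1,10.5\n2,3.25\n3,7.0\n"

# Extract: parse rows out of the raw source.
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and derive an integer cents column.
records = [(int(r["id"]), round(float(r["amount"]) * 100)) for r in rows]

# Load: write into a relational target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, cents INTEGER)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", records)

total = conn.execute("SELECT SUM(cents) FROM payments").fetchone()[0]
print(total)  # 2075
```

In Spark the same three stages map onto reading a DataFrame, applying transformations, and writing to a sink; the structure, not the library, is the point here.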
Mandatory Skills & Experience:
5+ years of experience as a Database Engineer, Data Engineer, or similar role.
Strong expertise in Apache Spark for large-scale data processing.
Proficiency in Python and Scala for building data-driven applications and tools.
Advanced knowledge of SQL, data modeling, indexing, and query optimization.
Hands-on experience with distributed databases and big data ecosystems.
Experience in working with ETL/ELT processes and building data ingestion pipelines.
Deep understanding of database performance tuning, backup & recovery, and replication strategies.
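As a small illustration of the indexing and query-optimization knowledge listed above, the following sketch shows how adding an index changes a query's plan from a full table scan to an index search. SQLite is used here only because it needs no setup; the table and index names are hypothetical, and the role's actual systems are PostgreSQL, Oracle, MySQL, and NoSQL stores:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, kind) VALUES (?, ?)",
    [(i % 100, "click") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Without an index, the planner must scan every row in the table.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before)  # plan detail mentions a SCAN of events

# With an index on the filter column, the planner can search instead.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(after)  # plan detail mentions a SEARCH using idx_events_user
```

The same scan-versus-search distinction shows up in `EXPLAIN` output on PostgreSQL, Oracle, and MySQL, where it drives the tuning work this role describes.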
Preferred Qualifications:
Experience with cloud platforms like AWS, GCP, or Azure.
Familiarity with Delta Lake, Hadoop, or Kafka.
Knowledge of CI/CD pipelines, data versioning, or DevOps for data.