EnIn Systems

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in New Jersey, offering a contract of unspecified length at a $400 day rate. It requires 12+ years of experience; strong skills in Python, SQL, AWS, Azure, and big data technologies; and relevant cloud certifications.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
400
-
πŸ—“οΈ - Date
March 21, 2026
πŸ•’ - Duration
Unknown
-
🏝️ - Location
Unknown
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
New Jersey, United States
-
🧠 - Skills detailed
#Apache Spark #Deployment #Datasets #Kubernetes #Data Governance #Security #ADF (Azure Data Factory) #S3 (Amazon Simple Storage Service) #Computer Science #Airflow #Data Processing #Data Quality #Data Architecture #Data Analysis #Data Science #ETL (Extract, Transform, Load) #Dataflow #Scala #Compliance #AWS (Amazon Web Services) #Snowflake #Azure #GCP (Google Cloud Platform) #Hadoop #Redshift #Spark (Apache Spark) #Data Pipeline #Agile #Data Warehouse #Data Engineering #Data Management #Programming #Big Data #Python #Metadata #Batch #Automation #NoSQL #SQL (Structured Query Language) #Java #Leadership #Version Control #Data Lake #ML (Machine Learning) #AWS S3 (Amazon Simple Storage Service) #BigQuery #PostgreSQL #Git #Data Modeling #MySQL #Databricks #Lambda (AWS Lambda) #Docker #Synapse #Azure Data Factory #Cloud #Database Design #Databases #Kafka (Apache Kafka) #DevOps
Role description
Senior Data Engineer

Experience: 12+ years
Location: NJ

Role Overview:
We are looking for an experienced Senior Data Engineer to design, build, and optimize scalable data pipelines and data architecture. The ideal candidate will have strong expertise in data processing, cloud platforms, and modern data engineering tools, enabling data-driven decision-making across the organization.

Key Responsibilities:
• Design, develop, and maintain robust, scalable data pipelines (batch and real-time).
• Build and optimize data architectures, data lakes, and data warehouses.
• Collaborate with data analysts, data scientists, and business stakeholders to deliver high-quality datasets.
• Ensure data quality, integrity, and security across systems.
• Develop ETL/ELT processes for large-scale data processing.
• Optimize performance of data workflows and queries.
• Implement data governance, metadata management, and best practices.
• Mentor junior data engineers and review code for quality and efficiency.
• Work with DevOps teams on CI/CD and deployment automation.

Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 5+ years of experience in data engineering or related roles.
• Strong programming skills in Python, Java, or Scala.
• Hands-on experience with SQL and database design.
• Experience with big data technologies such as Apache Spark, Hadoop, or Kafka.
• Expertise in ETL tools and frameworks.
• Strong experience with cloud platforms such as:
  ◦ AWS (S3, Redshift, Glue, Lambda)
  ◦ Azure (Data Factory, Synapse, Databricks)
  ◦ Google Cloud (BigQuery, Dataflow)
• Familiarity with data modeling techniques (star/snowflake schema).
• Experience with orchestration tools such as Airflow.
• Knowledge of containerization tools such as Docker and Kubernetes.

Preferred Qualifications:
• Experience with real-time data streaming.
• Knowledge of data governance and compliance frameworks.
• Exposure to machine learning pipelines.
• Certifications in cloud platforms (AWS/Azure/GCP).
Soft Skills:
• Strong problem-solving and analytical thinking.
• Excellent communication and stakeholder management skills.
• Ability to work in agile environments.
• Leadership and mentoring capabilities.

Key Tools & Technologies:
• Programming: Python, Scala, Java
• Databases: PostgreSQL, MySQL, NoSQL
• Big Data: Spark, Hadoop, Kafka
• Cloud: AWS, Azure, GCP
• Orchestration: Airflow
• Version Control: Git