

Iris Software Inc.
Senior Big Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Big Data Engineer on a long-term contract in Whippany, NJ (Hybrid). Key skills include Python, SQL, and big data technologies (Spark, Hadoop). Experience with cloud data platforms is required; exposure to data governance is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 20, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Whippany, NJ
-
🧠 - Skills detailed
#Docker #Data Modeling #Big Data #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Hadoop #Version Control #Data Engineering #Datasets #Kubernetes #Data Pipeline #Compliance #Data Quality #Git #ML (Machine Learning) #AWS (Amazon Web Services) #Scala #Security #Data Science #Data Processing #Azure #SQL (Structured Query Language) #Kafka (Apache Kafka) #Cloud #Airflow #Python #Deployment #Data Governance #Data Transformations
Role description
Greetings!
We are looking for a Big Data Engineer to design, build, and maintain scalable data solutions. This role focuses on developing reliable data pipelines and platforms that support analytics, reporting, and data-driven decision making. The ideal candidate has strong hands-on experience with Python and SQL and is comfortable working with large, complex datasets.
Position: Sr. Big Data Engineer
Location: Whippany, NJ (Hybrid)
Contract: Long-term contract
Client: One of the largest financial services clients.
Responsibilities
• Design, develop, and maintain large-scale data pipelines and data platforms
• Build efficient ETL and ELT processes using Python and SQL (a minimal sketch follows this list)
• Optimize data models, queries, and workflows for performance and reliability
• Work with structured and unstructured data from multiple sources
• Collaborate with data scientists, analysts, and software engineers to support analytics and machine learning use cases
• Ensure data quality, consistency, and availability across systems
• Monitor and troubleshoot data pipelines in production environments
• Document data processes, models, and best practices
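For candidates gauging the expected level, here is a minimal sketch of the kind of batch ETL pipeline these responsibilities describe, written in Python with pandas. The file name transactions.csv, the column names, and the SQLite target are illustrative assumptions, not details from the posting.

```python
import sqlite3

import pandas as pd


def extract(path: str) -> pd.DataFrame:
    # Extract: read raw transactions from a CSV source (hypothetical file).
    return pd.read_csv(path, parse_dates=["transaction_date"])


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop incomplete records, then aggregate per account per day.
    df = df.dropna(subset=["account_id", "amount"])
    return (
        df.groupby(["account_id", df["transaction_date"].dt.date])
          .agg(total_amount=("amount", "sum"), txn_count=("amount", "count"))
          .reset_index()
    )


def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    # Load: write the aggregate to a reporting table (SQLite stands in
    # for the warehouse in this sketch).
    df.to_sql("daily_summary", conn, if_exists="replace", index=False)


if __name__ == "__main__":
    frame = transform(extract("transactions.csv"))
    with sqlite3.connect("reporting.db") as conn:
        load(frame, conn)
```

In a production pipeline the same extract/transform/load split would typically run on Spark against the warehouse rather than pandas and SQLite, but the structure carries over.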
Required Qualifications
• Strong experience in Python for data processing and pipeline development
• Advanced SQL skills, including query optimization and complex data transformations (illustrated in the sketch after this list)
• Experience working with big data technologies such as Spark, Hadoop, or similar frameworks
• Solid understanding of data modeling, warehousing, and lakehouse concepts
• Experience with cloud data platforms (AWS, Azure, or Google Cloud)
• Familiarity with version control systems such as Git
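As a rough illustration of the "advanced SQL" and Spark bullets above, this is a hedged sketch of a window-function deduplication run through PySpark's SQL interface. The sample rows and column names are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-window-example").getOrCreate()

# Invented sample data: account balance events, possibly several per account.
events = spark.createDataFrame(
    [
        ("a1", "2025-01-01", 100.0),
        ("a1", "2025-01-02", 120.0),
        ("a2", "2025-01-01", 80.0),
    ],
    ["account_id", "event_date", "balance"],
)
events.createOrReplaceTempView("events")

# Keep only the most recent event per account via ROW_NUMBER() -- a typical
# "complex data transformation" in warehouse and lakehouse work.
latest = spark.sql("""
    SELECT account_id, event_date, balance
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY account_id
                   ORDER BY event_date DESC
               ) AS rn
        FROM events
    ) ranked
    WHERE rn = 1
""")
latest.show()
```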
Preferred Qualifications
• Experience with workflow orchestration tools such as Airflow or similar (see the DAG sketch after this list)
• Knowledge of streaming technologies such as Kafka or equivalent
• Experience with containerization and deployment tools (Docker, Kubernetes)
• Exposure to data governance, security, and compliance best practices
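For the orchestration bullet, a minimal Airflow DAG sketch follows (Airflow 2.4+ style schedule argument). The task bodies are placeholders and every identifier is an assumption for illustration, not a detail of the role.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw data from the source system.
    ...


def transform():
    # Placeholder: clean and aggregate the extracted data.
    ...


def load():
    # Placeholder: publish results to the warehouse.
    ...


with DAG(
    dag_id="daily_etl",  # illustrative name, not from the posting
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the stages in order: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```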
Best Regards,






