

Contract Role: Data Engineer with Strong Matillion ETL Experience at Secaucus, NJ (Remote)
Featured Role | Apply directly with Data Freelance Hub
This role is a long-term contract for a Data Engineer with strong ETL experience in Matillion, Snowflake, and Python, based remotely in Secaucus, NJ. Key responsibilities include data pipeline development, integration, warehousing, processing, and governance.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 3, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Secaucus, NJ
Skills detailed
#Data Management #Data Privacy #Datasets #Security #Data Science #Data Pipeline #Batch #Python #Data Security #Data Engineering #Matillion #Automation #Spark (Apache Spark) #Data Warehouse #ETL (Extract, Transform, Load) #Hadoop #Data Quality #Snowflake #Data Governance #Data Integrity #Databases #Monitoring #Data Integration #Data Processing #Scala #Compliance #Data Lake
Role description
Data Engineer
Secaucus, NJ (Remote)
Long Term Contract
Must Have: Matillion ETL, Snowflake, Python
Job Details:
β’ Data Pipeline Development: Design, construct, test, and maintain highly scalable data management systems. Develop and implement architectures that support the extraction, transformation, and loading (ETL) of data from various sources.
β’ Data Integration: Integrate structured and unstructured data from multiple data sources into a unified data system, ensuring data quality and consistency.
β’ Data Warehousing: Build and maintain data warehouses and data lakes to store and retrieve vast amounts of data efficiently. Optimize the performance of databases and queries to meet business needs.
• Data Processing: Implement data processing frameworks (e.g., Hadoop, Spark) to process large datasets in real time or in batches.
β’ Automation and Monitoring: Automate manual processes, optimize data delivery, and develop data monitoring systems to ensure data integrity and accuracy.
β’ Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data needs and provide technical solutions that meet business requirements.
β’ Data Governance: Ensure data governance policies are followed, including data security, data privacy, and compliance with regulations.
β’ Performance Tuning: Optimize the performance of ETL processes, databases, and data pipelines to handle large volumes of data and reduce processing times.
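The pipeline-development and data-quality responsibilities above follow the standard extract/transform/load pattern. As a minimal sketch only (the table, field names, and quality rule are illustrative, and an in-memory SQLite table stands in for a warehouse such as Snowflake), a Python version of that flow might look like:

```python
import sqlite3

# Hypothetical raw source records; in practice these would come from an
# upstream system or API, not a hard-coded list.
RAW_ORDERS = [
    {"order_id": "1", "amount": "19.99", "region": "nj"},
    {"order_id": "2", "amount": "5.00", "region": "NY"},
    {"order_id": "3", "amount": "", "region": "nj"},  # fails the quality rule below
]

def extract(records):
    """Extract step: yield raw records from the source."""
    yield from records

def transform(records):
    """Transform step: cast types, normalize values, drop bad rows."""
    for rec in records:
        if not rec["amount"]:
            continue  # illustrative data-quality rule: skip rows missing an amount
        yield (int(rec["order_id"]), float(rec["amount"]), rec["region"].upper())

def load(rows, conn):
    """Load step: write cleaned rows into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(records, conn):
    """Chain the three stages; generators keep memory use flat for large batches."""
    load(transform(extract(records)), conn)

conn = sqlite3.connect(":memory:")
run_pipeline(RAW_ORDERS, conn)
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])
```

The generator chaining shown here is one way to keep a batch pipeline streaming rather than materializing intermediate datasets; a production Matillion/Snowflake setup would express the same stages as orchestration and transformation components instead.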