PROCAL TECHNOLOGIES

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "$X per hour." Key skills required include Python, AWS, ETL tools, and NoSQL databases. Experience in data architecture and pipeline design is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
February 26, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Engineering #Data Ingestion #SQS (Simple Queue Service) #Storage #NumPy #Databases #Cloud #Data Storage #Pandas #Python #Data Pipeline #MongoDB #Data Architecture #NoSQL #Kafka (Apache Kafka) #Libraries #DynamoDB #AWS (Amazon Web Services) #Data Access #Scala #SQL Server #SNS (Simple Notification Service) #ETL (Extract, Transform, Load) #Datasets
Role description
Detailed JD:
• Understand the complete requirement, create the architecture, and keep all stakeholders updated
• Create POCs
• Manage delivery/releases to the customer
• Develop services to enable data ingestion from, and synchronization with, a system that exposes the required data access mechanisms, ensuring near-real-time updates
• Ingest data from multiple sources using Python and other ETL tools
• Design and implement an event-driven architecture using AWS EventBridge, Kafka, or SNS/SQS for real-time data streaming
• Design, implement, and maintain scalable data pipelines that integrate both on-prem and AWS cloud environments
• Develop efficient Python scripts and applications using libraries such as pandas and NumPy to handle and process large datasets
• Work with various NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB) to support high-performance data storage and retrieval
• Develop and deploy applications in a cloud-native architecture, leveraging modern cloud technologies for scalability and resilience
• Continuously monitor data workflows and systems, troubleshoot issues, and optimize performance for reliability and scalability
• Transition the existing pipeline to Microsoft SQL Server
• Collaborate with the business application owner on the existing data architecture, including data ingestion, data pipelines, business logic, data consumption patterns, and analytics requirements
• Design and document the target data architecture, pipelines, processing, and analytics architecture
• Identify opportunities for optimization and consolidation
• Collaborate with the data team on decomposition of business logic and data transformation patterns
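To give candidates a feel for the pandas/ETL work the bullets above describe, here is a minimal, hypothetical sketch of a "transform" step: normalizing raw event records into a typed DataFrame and aggregating them. All names (`clean_events`, the `ts`/`source`/`value` fields) are illustrative assumptions, not part of the actual role or any Procal system.

```python
# Illustrative ETL "transform" step in the pandas style the posting mentions.
# Field names and the clean_events function are hypothetical examples.
import pandas as pd


def clean_events(records):
    """Normalize raw event dicts and aggregate values per source.

    - parses timestamps into a proper datetime dtype
    - normalizes the 'source' key (lowercase, stripped)
    - drops rows with a missing 'value'
    - sums 'value' per source
    """
    df = pd.DataFrame(records)
    df["ts"] = pd.to_datetime(df["ts"], utc=True)          # enforce timestamp type
    df["source"] = df["source"].str.lower().str.strip()    # normalize join key
    df = df.dropna(subset=["value"])                       # discard incomplete rows
    return df.groupby("source", as_index=False)["value"].sum()


if __name__ == "__main__":
    raw = [
        {"ts": "2026-02-26T10:00:00Z", "source": " Kafka ", "value": 3},
        {"ts": "2026-02-26T10:01:00Z", "source": "kafka", "value": 2},
        {"ts": "2026-02-26T10:02:00Z", "source": "SQS", "value": None},
    ]
    print(clean_events(raw))
```

In a real pipeline the input would come from an SQS consumer or Kafka topic rather than an in-memory list, and the result would land in a downstream store (e.g., DynamoDB or SQL Server) instead of stdout.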