Big Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer with a 12-month contract in Malvern, PA (Hybrid). Key skills include AWS (Glue, Lambda, S3), ETL processes, and data transformation. Requires 8+ years of experience in data engineering and compliance.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 28, 2025
πŸ•’ - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Malvern, PA
-
🧠 - Skills detailed
#PySpark #Python #Version Control #AWS Glue #Anomaly Detection #"ETL (Extract, Transform, Load)" #Java #Spark (Apache Spark) #Pandas #Lambda (AWS Lambda) #Compliance #GitHub #Data Governance #Golang #S3 (Amazon Simple Storage Service) #Data Engineering #Data Mapping #BitBucket #Documentation #Cloud #Data Quality #StepFunctions #AWS (Amazon Web Services) #Big Data #Data Integration
Role description
Hello there,

My name is Himanshu Sharma, and I serve as the Recruitment Lead at Kanak-IT INC. I am reaching out to share an excellent career opportunity for the role of Big Data Engineer with our esteemed client. If you are interested, please share your updated resume at Himanshu01@kanakits.com.

Job Description
Position: Big Data Engineer
Location: Malvern, PA (Hybrid)
Duration: Initial 12 months + options to extend; long-term contract role

Responsibilities
• Collaborate on the Smartstream cash reconciliation platform, known as TLM, to support data integration and transformation workflows.
• Design and implement ETL processes using tools such as AWS Glue, Spark, or Python to transform raw data into structured formats compatible with Smartstream's ingestion requirements.
• Conduct data mapping exercises to align legacy data structures with modernized attributes stored in S3 buckets, ensuring consistency and accuracy across systems.
• Ensure data quality and lineage through validation checks, anomaly detection, and documentation of data assumptions.
• Integrate data sources into cloud platforms such as AWS, leveraging services like S3, Glue, and Lambda.
• Work closely with the product owner and the business to understand data requirements and translate them into technical specifications.
• Implement data governance principles, including version control, access management, and compliance with Vanguard and GIFS standards.

Qualifications (minimum 8+ years of experience)
• AWS skills: Glue, Lambda, S3, KMS, Step Functions
• General skills: data transformation tools such as Pandas and PySpark; CloudFormation; Cloudblocks; Python; Bitbucket/GitHub
• Optional: Coder, Java, Golang, COBOL/JCL
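To give candidates a feel for the kind of work the responsibilities describe (mapping legacy attributes to modernized ones, validation checks, anomaly detection), here is a minimal sketch in pandas. The column names, the legacy-to-modern mapping, and the 3-sigma anomaly threshold are all hypothetical illustrations, not taken from the actual TLM platform or Vanguard/GIFS standards.

```python
# Minimal ETL sketch: rename legacy columns, enforce types, validate,
# and flag anomalous amounts. All names and thresholds are hypothetical.
import pandas as pd

# Hypothetical mapping of legacy column names to modernized attributes.
LEGACY_TO_MODERN = {
    "ACCT_NO": "account_id",
    "TXN_AMT": "amount",
    "TXN_DT": "transaction_date",
}

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Map legacy columns, coerce types, drop invalid rows, flag anomalies."""
    df = raw.rename(columns=LEGACY_TO_MODERN)
    # Type coercion: unparseable values become NaN/NaT rather than raising.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df["transaction_date"] = pd.to_datetime(df["transaction_date"], errors="coerce")
    # Validation check: required attributes must survive coercion.
    df = df.dropna(subset=["account_id", "amount", "transaction_date"])
    # Simple anomaly flag: amounts beyond 3 standard deviations of the mean.
    mean, std = df["amount"].mean(), df["amount"].std()
    df["is_anomaly"] = (df["amount"] - mean).abs() > 3 * std
    return df
```

In a Glue job the same logic would typically run on a Spark DataFrame read from S3 instead of an in-memory pandas frame, but the mapping/validation/flagging pattern is the same.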