Motion Recruitment

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 12-month hybrid contract based in Iselin, NJ. It requires 5+ years in data engineering, proficiency in Python, PySpark, and SQL, and experience in regulated environments. GCP experience and a financial crimes background are a plus.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
March 24, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Iselin, NJ
🧠 - Skills detailed
#Snowflake #BigQuery #Data Modeling #DevOps #Python #Datasets #Scala #Agile #Data Pipeline #Batch #ETL (Extract, Transform, Load) #Data Security #Migration #Apache Spark #Security #Data Processing #Data Transformations #Kafka (Apache Kafka) #Data Quality #GCP (Google Cloud Platform) #Data Engineering #SQL (Structured Query Language) #Airflow #PySpark #Databricks #Spark (Apache Spark)
Role description
Job title: Data Engineer / Data Platform Engineer
Duration: 12 months, with a strong possibility of extension or conversion to full time
Location: Hybrid, Iselin, New Jersey, United States (local W-2 candidates only)

Key Responsibilities
• Build and maintain batch and/or streaming data pipelines
• Develop scalable data transformations using Python and PySpark
• Optimize large-scale data processing leveraging the Apache Spark architecture
• Collaborate with business and technical teams to define data models and datasets
• Ingest and process data from multiple sources (transactional, case management, reference data)
• Implement data quality checks and ensure audit readiness
• Contribute to modernization initiatives (legacy to in-house platform migration)
• Document pipelines, workflows, and operational processes
• Work in an Agile environment supporting sprint-based delivery

Required Skills
• 5+ years of experience in data engineering / ETL / data platforms
• Strong hands-on experience with Python, PySpark (Apache Spark), and SQL
• Experience with large-scale data processing and performance tuning
• Solid understanding of data modeling, lineage, and governance
• Experience working in regulated environments
• Strong communication and stakeholder management skills

Nice to Have
• Experience with GCP (Dataproc, BigQuery, GCS)
• Familiarity with Airflow, Kafka, Databricks, Snowflake, or BigQuery
• Experience with CI/CD pipelines and DevOps practices
• Knowledge of data security (PII handling, encryption, access control)
• Background in financial crimes (AML, fraud, KYC, etc.)
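To make the core skill set concrete, below is a minimal, illustrative PySpark sketch of the kind of batch transformation and data quality check the responsibilities describe. It is not taken from the employer's codebase: all dataset, column, and storage-path names (transactions, account_id, amount, the s3:// paths) are hypothetical placeholders.

```python
# Illustrative sketch only: a small PySpark batch job that aggregates
# a hypothetical transactional dataset and enforces a data quality check.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Ingest a hypothetical transactional dataset (path is a placeholder)
txns = spark.read.parquet("s3://bucket/transactions/")

# Transformation: daily totals and transaction counts per account
daily = (
    txns
    .withColumn("txn_date", F.to_date("event_ts"))
    .groupBy("account_id", "txn_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("txn_count"),
    )
)

# Data quality check: fail fast if required keys are null
null_keys = daily.filter(F.col("account_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"Data quality check failed: {null_keys} rows with null account_id")

# Write partitioned output (path is a placeholder)
daily.write.mode("overwrite").partitionBy("txn_date").parquet("s3://bucket/daily_totals/")
```

Failing fast on null keys keeps bad records out of downstream tables and leaves a trace in job logs, which is the kind of practice the "data quality checks and audit readiness" responsibility points to.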