

Hallmark Global Solutions Ltd
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with strong Databricks and PySpark experience, focused on migrating Ab Initio workflows to scalable PySpark pipelines. Long-term, on-site in Wilmington, DE; pay rate unknown. Requires ETL/ELT development and Snowflake expertise.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
April 28, 2026
Duration
Unknown
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Wilmington, DE
-
Skills detailed
#Data Processing #UAT (User Acceptance Testing) #Deployment #Data Lake #Data Engineering #Data Pipeline #Cloud #Data Quality #Databricks #Data Governance #Batch #Migration #Data Lineage #.Net #Snowflake #PySpark #ETL (Extract, Transform, Load) #Ab Initio #Spark (Apache Spark) #Scala
Role description
Hiring: Databricks Engineer (PySpark & Data Lake)
Wilmington, DE (In-Person Interview) | 5 Days Onsite | Long-Term Project
We're looking for a strong Data Engineer to drive the modernization of legacy ETL systems by migrating Ab Initio workflows into scalable PySpark pipelines on Databricks.
This is a great opportunity to work on large-scale data transformation, building cloud-native architectures while ensuring high performance, data quality, and reliability.
Key Responsibilities
• Migrate legacy ETL workflows (Ab Initio → PySpark on Databricks)
• Design & develop scalable, modular data pipelines
• Build ETL/ELT pipelines integrating Snowflake & other data sources
• Support batch & near real-time data processing
• Create data lineage & optimize workflows
• Perform testing, validation & UAT support
• Drive deployment, migration & system cutover strategies
What We're Looking For
• Strong experience with Databricks & PySpark
• Hands-on in ETL/ELT pipeline development
• Experience with Snowflake / Data Lake architectures
• Background in legacy ETL migration (Ab Initio preferred)
• Strong understanding of data governance, optimization & performance
If you or someone in your network is a fit, feel free to reach out or share your resume at salomon@hgtechinc.net
