

Skysoft Inc.
Data Engineer (Only Local to Boston, MA)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Boston, MA, requiring 12+ years of experience and expertise in SQL, Snowflake, DataStage, and API development using Java/Python. The contract is hybrid, focusing on data ingestion, ETL pipelines, and cloud data architectures.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
576
-
🗓️ - Date
October 22, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Boston, MA
-
🧠 - Skills detailed
#Data Ingestion #Infrastructure as Code (IaC) #Cloud #Automation #Integration Testing #Jenkins #SQL Server #Security #Data Quality #Data Pipeline #Azure #Linux #MySQL #Documentation #DataStage #SQL (Structured Query Language) #Vault #API (Application Programming Interface) #Python #Java #Data Governance #Snowflake #Jira #Migration #Data Security #Data Engineering #AWS (Amazon Web Services) #Oracle #Data Architecture #Clustering #Aurora #ETL (Extract, Transform, Load)
Role description
Job Title: Data Engineer — OneView Datamart & API Enablement
Location: Boston, MA (Hybrid – Local Candidates Only)
Experience: 12+ years (required)
Work authorization: USC, H1B, or H4 EAD only
Role Summary
Design and implement data ingestion and transformation pipelines across multiple data platforms and tools (DataStage, Snowflake, SQL Server, Aurora, MySQL), and build Java/Python-backed Data APIs for downstream consumers. The role combines hands-on data engineering with API integration work in support of the OneView Datamart and broader enterprise data initiatives.
Key Responsibilities
• Design and develop ELT/ETL pipelines (using DataStage and Python) into Snowflake, Aurora, or SQL Server, enforcing data contracts and data quality checks; a minimal Python sketch follows this list.
• Engineer dimensional or data-vault-style data models in OneView, optimizing for Snowflake cost and performance.
• Build and operate Data API services using Java or Python, including pagination, filtering, caching, and authentication; see the API sketch after this list.
• Implement CI/CD pipelines using Jenkins, conduct unit/integration testing, and apply infrastructure-as-code (IaC) where applicable.
• Execute migration waves, perform dual-run reconciliations, and support cutover/decommission activities with proper documentation and validation.
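For illustration, here is a minimal Python sketch of the quality-gated load described in the first bullet. It assumes the snowflake-connector-python package; the connection placeholders, the STG_ORDERS/ORDERS tables, and their columns are hypothetical stand-ins, not the actual OneView schema.
```python
# Minimal sketch of a quality-gated load into Snowflake.
# Assumes snowflake-connector-python; all table/column names are hypothetical.
import snowflake.connector

# Each gate is a query that must return 0 for the load to proceed.
DQ_CHECKS = {
    "no_null_keys": "SELECT COUNT(*) FROM STG_ORDERS WHERE ORDER_ID IS NULL",
    "no_duplicate_keys": (
        "SELECT COUNT(*) FROM (SELECT ORDER_ID FROM STG_ORDERS "
        "GROUP BY ORDER_ID HAVING COUNT(*) > 1)"
    ),
}

def run_quality_gates(cur):
    """Fail fast if the staging data violates its contract."""
    for name, sql in DQ_CHECKS.items():
        violations = cur.execute(sql).fetchone()[0]
        if violations:
            raise ValueError(f"quality gate '{name}' failed: {violations} rows")

def load_to_target(cur):
    """Idempotent upsert from staging into the target table."""
    cur.execute(
        "MERGE INTO ORDERS t USING STG_ORDERS s ON t.ORDER_ID = s.ORDER_ID "
        "WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT "
        "WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT) "
        "VALUES (s.ORDER_ID, s.AMOUNT)"
    )

if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",
        warehouse="<warehouse>", database="<db>", schema="<schema>",
    )
    try:
        cur = conn.cursor()
        run_quality_gates(cur)  # enforce the data contract first
        load_to_target(cur)     # then merge into the target
    finally:
        conn.close()
```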
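Similarly, a compact sketch of the paginated, filterable Data API mentioned in the third bullet, using FastAPI as one plausible Python stack (the posting does not name a framework). The /orders endpoint and its in-memory rows stand in for a real Snowflake/Aurora-backed query.
```python
# Sketch of a read API with offset pagination and an optional filter.
# FastAPI is an assumption; the data source is an in-memory stand-in.
from fastapi import FastAPI, Query

app = FastAPI()

# Stand-in for a database query against Snowflake/Aurora.
ROWS = [
    {"id": i, "region": "US" if i % 2 else "EU", "amount": i * 10.0}
    for i in range(1, 101)
]

@app.get("/orders")
def list_orders(
    region: str | None = Query(default=None, description="optional equality filter"),
    limit: int = Query(default=20, ge=1, le=100),
    offset: int = Query(default=0, ge=0),
):
    """Return one page of matching rows plus paging metadata."""
    rows = [r for r in ROWS if region is None or r["region"] == region]
    return {
        "total": len(rows),
        "limit": limit,
        "offset": offset,
        "items": rows[offset : offset + limit],
    }
```
Run locally with `uvicorn app:app` and page through results via `/orders?region=US&limit=10&offset=20`; caching and authentication would sit in front of this handler.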
Required Skills & Experience
• Strong SQL development skills across SQL Server, Aurora (MySQL/Postgres-compatible), MySQL, and Oracle (SQL and PL/SQL).
• Expertise in Snowflake performance optimization (clustering, virtual warehouses) and DataStage ETL processes; a short clustering sketch follows this list.
• Hands-on experience developing and integrating API services using Java and Python.
• Experience with JIRA, Jenkins, and Red Hat Linux.
• 10+ years of experience in data engineering, data pipelines, and large-scale data platform development.
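As a concrete example of the Snowflake tuning called out above, the sketch below sets a clustering key and inspects its effect. The ORDERS table and key columns are hypothetical; connection placeholders must be filled in.
```python
# Illustrative only: choosing and checking a Snowflake clustering key.
# Table and column names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>"
)
cur = conn.cursor()

# Cluster a large fact table on its most common filter columns so that
# micro-partition pruning can skip irrelevant data.
cur.execute("ALTER TABLE ORDERS CLUSTER BY (ORDER_DATE, REGION)")

# SYSTEM$CLUSTERING_INFORMATION reports average depth/overlap, which is
# how you judge whether the chosen key actually improves pruning.
cur.execute(
    "SELECT SYSTEM$CLUSTERING_INFORMATION('ORDERS', '(ORDER_DATE, REGION)')"
)
print(cur.fetchone()[0])
conn.close()
```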
Preferred Skills
• Experience with data modelling, data quality automation, and data governance frameworks.
• Knowledge of cloud data architectures (AWS, Azure) and data security best practices.
• Financial or investment domain experience is a plus.