SOLTECH

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a long-term contract-to-hire basis, offering a hybrid schedule in Stamford, CT. Key skills include SQL, ETL/ELT pipeline development, data visualization, and familiarity with crypto markets.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
April 24, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Stamford, CT
-
🧠 - Skills detailed
#Data Pipeline #Data Quality #AI (Artificial Intelligence) #Data Engineering #ETL (Extract, Transform, Load) #Scala #SQL (Structured Query Language) #Leadership #Monitoring #Visualization #API (Application Programming Interface) #Documentation #Databases #Automation #Semantic Models #Data Warehouse #Data Ingestion #CRM (Customer Relationship Management) #Datasets
Role description
Our client is hiring a Data Analytics Engineer to build and scale the data foundation that powers strategic decision-making across the organization. This role blends data engineering, analytics, and automation, enabling teams to access clean, reliable, and actionable data. You’ll work cross-functionally with stakeholders across investing, portfolio management, operations, and leadership to transform business questions into scalable data products, including dashboards, pipelines, and standardized metrics. This is an opportunity to make a measurable impact in a fast-paced, data-driven environment while leveraging modern tools and AI to enhance efficiency and insight generation.

Hybrid schedule in Stamford, CT office. Long-term contract-to-hire.

What You’ll Do:
• Establish and standardize key metrics, data definitions, and reporting logic to create a trusted, unified data source
• Identify data gaps and recommend improvements to enhance upstream data quality and processes
• Build and maintain scalable data pipelines to ingest, clean, transform, and validate data from multiple sources (CRM systems, portfolio data, operational tools, third-party datasets, etc.)
• Design and manage analytical layers, including curated datasets and semantic models, to enable self-service analytics
• Develop and operate ETL/ELT workflows, including API integrations, scheduling, monitoring, and data quality checks
• Create dashboards and visualizations that translate complex data into clear, actionable insights
• Automate recurring reporting and data workflows; apply AI tools to accelerate analysis and improve data operations while ensuring accuracy and control
• Collaborate with cross-functional teams to define requirements and deliver scalable data solutions
• Clearly communicate data logic, assumptions, and insights to both technical and non-technical audiences

What You Bring:
• Experience working with data warehouses/lakehouses and building curated datasets for analytics
• Strong SQL skills and expertise with relational databases, including end-to-end troubleshooting
• Proven experience building and maintaining ETL/ELT pipelines, including orchestration and data validation
• Hands-on experience with API integrations and data ingestion frameworks
• Ability to design impactful dashboards and data visualizations that support decision-making
• A proactive, ownership-driven mindset with strong attention to detail
• Comfort working in a fast-paced, evolving environment
• Familiarity with crypto markets and on-chain analytics (protocol metrics, wallet/contract activity, etc.)
• Experience building scalable semantic layers and documentation
• Practical experience using AI tools to enhance analytics and automation, with a strong focus on validation and accuracy

If you’re passionate about turning data into insight and building systems that drive real business impact, this is a great opportunity to join a forward-thinking team.
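Purely as an illustration of the kind of ETL/ELT and data-quality work this role describes (not code from the client's actual stack), a minimal sketch in Python might look like the following. The API endpoint, field names, and SQLite target are all hypothetical stand-ins.

```python
import sqlite3

import requests

# Hypothetical source endpoint; a real pipeline would pull from the CRM,
# portfolio, or third-party APIs used by the client.
API_URL = "https://api.example.com/portfolio/positions"


def extract(url: str) -> list[dict]:
    """Pull raw JSON records from a source API."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def validate(records: list[dict]) -> list[dict]:
    """Basic data-quality checks: required fields present, quantities non-negative."""
    return [
        r for r in records
        if r.get("asset") and isinstance(r.get("quantity"), (int, float)) and r["quantity"] >= 0
    ]


def load(records: list[dict], db_path: str = "warehouse.db") -> None:
    """Write validated records to a curated table (SQLite stands in for the warehouse)."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS positions (asset TEXT, quantity REAL)")
        conn.executemany(
            "INSERT INTO positions (asset, quantity) VALUES (?, ?)",
            [(r["asset"], r["quantity"]) for r in records],
        )


if __name__ == "__main__":
    load(validate(extract(API_URL)))
```

In practice the same extract/validate/load shape would run under an orchestrator with scheduling, monitoring, and alerting around each step.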