Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 8+ years of experience, focusing on data infrastructure and ETL pipelines. Contract length is unspecified, with a day rate of $1,000 USD. Remote or NYC on-site work is available. Key skills include SQL, Python, and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
1000
-
πŸ—“οΈ - Date discovered
August 15, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Data Access #Storage #Agile #PostgreSQL #Databases #Database Schema #Snowflake #ML (Machine Learning) #BI (Business Intelligence) #AI (Artificial Intelligence) #BigQuery #Kubernetes #MySQL #Data Lake #Datasets #Data Modeling #Monitoring #Data Pipeline #Airflow #Docker #AWS (Amazon Web Services) #GIT #Python #Cloud #Data Integration #HTML (Hypertext Markup Language) #Luigi #Compliance #Data Science #Data Analysis #Data Security #Data Warehouse #Scripting #Scala #Security #Terraform #Data Engineering #Data Quality #SQL (Structured Query Language) #dbt (data build tool) #ETL (Extract, Transform, Load) #Observability #Database Performance #Redshift #Azure
Role description
```
About the Company
Client is seeking a Senior Data Engineer with deep expertise in data operations, infrastructure for analytics, and database performance optimization to join our Investment Technology team. This role plays a critical part in building and maintaining the data backbone that powers internal applications, sustainability reporting, investment analytics, and decision-making platforms across Apollo's asset classes.

About the Role
You will be responsible for architecting and operating scalable, reliable data infrastructure that supports both operational and analytical workloads. This includes managing ETL pipelines, optimizing storage and access patterns, and supporting structured data modeling for use by analysts, data scientists, and application developers.

Responsibilities
• Design, build, and maintain robust data infrastructure to support analytics, reporting, and sustainability workflows
• Own the architecture and administration of relational and cloud-native databases (e.g., PostgreSQL, Snowflake, Redshift, MySQL)
• Build and manage ETL/ELT pipelines for ingesting and transforming data across systems and third-party sources
• Optimize database schemas, indexes, partitioning strategies, and storage for both query performance and cost-efficiency
• Enable and support analytics platforms by providing clean, well-documented, queryable datasets for downstream use (BI, dashboards, AI/ML)
• Implement and monitor data quality, governance, and access control policies
• Collaborate with data analysts, ESG specialists, and application developers to streamline data access and analytical readiness
• Automate and scale platform operations using infrastructure-as-code, containerization, and cloud-native orchestration tools
• Establish and manage data observability, pipeline monitoring, and alerting for reliability and integrity
• Support ESG data integration for internal and external sustainability disclosures, investment analysis, and regulatory reporting

Qualifications
• 8+ years (or 5+ with exceptional expertise) of experience in data engineering, data platform infrastructure, or analytics-focused engineering
• Strong command of SQL and database optimization techniques
• Experience with data warehouse systems (e.g., Snowflake, Redshift, BigQuery) and OLAP/OLTP hybrid design
• Proficiency with Python or a similar language for scripting, transformation, and data operations
• Familiarity with data pipeline frameworks (e.g., Airflow, dbt, Luigi) and orchestration practices
• Hands-on experience with cloud platforms (e.g., AWS, Azure) and cloud-native data services
• Experience building data platforms for analytics, BI, or machine learning use cases
• Knowledge of CI/CD, Git, and Agile methodologies applied to data workflows
• Solid understanding of data security, access control, and compliance

Preferred Skills
• Experience working with ESG/sustainability datasets or regulatory data (e.g., MSCI, CDP, SFDR)
• Knowledge of investment data models, portfolio risk data, or financial instrument structures
• Exposure to data versioning, columnar formats (Parquet, ORC), or data lake architectures
• Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation) and containerization (e.g., Docker, Kubernetes)
• Experience with data observability, monitoring pipelines, and alerting frameworks

Pay range and compensation package
Remote or NYC on-site (US-based preferred)

Equal Opportunity Statement
Client is committed to diversity and inclusivity.
```