Tential Solutions

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a contract of more than 6 months; the pay rate is undisclosed. Work is remote. Key skills include AWS, Python, CI/CD practices, and Infrastructure as Code (IaC) with Terraform.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 4, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Fort Mill, SC
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Infrastructure as Code (IaC) #Jira #Airflow #Data Pipeline #Batch #Cloud #Python #Data Layers #Data Engineering #Anomaly Detection #Data Governance #GitHub #Data Catalog #AWS (Amazon Web Services) #Terraform #Data Quality #Data Processing #ML (Machine Learning)
Role description
The position is part of a data-focused team working on the “T-Bar” (Transaction Books and Records) product, a critical data store for positions, transactions, and tax lots. The team is migrating from a legacy data center to a cloud-native AWS environment, aiming to modernize its infrastructure and processes by next summer.

Senior Engineer Must-Have Requirements
• Batch and Streaming Data Processing: Ability to handle intraday use cases with trading partners, including micro-batches (see the first sketch after these lists).
• CI/CD and Developer Discipline: Proficiency in code-commit practices, including tying commits to Jira tickets for traceability and cherry-picking (see the second sketch after these lists).
• AWS Familiarity: Experience working within AWS environments, particularly with data pipelines and foundational data layers.
• Python: Strong Python skills for data processing and logic conversion (e.g., rewriting stored procedures from legacy systems).
• Infrastructure as Code (IaC): Experience with Terraform for deploying and managing infrastructure, particularly for carving out separate AWS accounts.

Nice-to-Have Skills
• Orchestration Tools: Familiarity with Airflow, Dagster, or similar tools for centralized orchestration (the current setup uses Step Functions, EventBridge, and an in-house eventing system).
• AI/ML Integration: Exposure to AI tools such as GitHub Copilot or Cursor for code development, unit-test generation, or data-quality checks (e.g., anomaly detection; see the third sketch after these lists).
• Data Governance: Experience with data catalogs, producing and consuming data assets, or tools like AWS DataZone for governance and contract-based testing.
• Event-Driven Architecture: Understanding of event-driven systems, as the team aims to move toward this model in the future.
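To make the first must-have concrete, here is a minimal Python sketch of micro-batch processing: streaming records are buffered and flushed downstream when either a size limit is hit or a time window elapses. The record shape, limits, and print-based sink are illustrative assumptions, not details from the posting.

```python
"""Micro-batch sketch: buffer streaming records, flush on size or age."""
import time
from typing import Callable

class MicroBatcher:
    def __init__(self, sink: Callable[[list], None],
                 max_size: int = 500, max_age_s: float = 5.0):
        self.sink = sink              # downstream write (warehouse, queue, ...)
        self.max_size = max_size      # flush once this many records accumulate
        self.max_age_s = max_age_s    # ...or once the oldest buffered record is this old
        self._buf: list = []
        self._first_ts = 0.0

    def add(self, record: dict) -> None:
        if not self._buf:
            self._first_ts = time.monotonic()
        self._buf.append(record)
        if (len(self._buf) >= self.max_size
                or time.monotonic() - self._first_ts >= self.max_age_s):
            self.flush()

    def flush(self) -> None:
        if self._buf:
            self.sink(self._buf)
            self._buf = []

if __name__ == "__main__":
    batcher = MicroBatcher(sink=lambda b: print(f"flushed {len(b)} records"),
                           max_size=3)
    for i in range(7):                # stand-in for an intraday transaction stream
        batcher.add({"txn_id": i})
    batcher.flush()                   # drain the tail on shutdown
```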
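For the commit-traceability point, a minimal sketch of a Git commit-msg hook that rejects messages lacking a Jira ticket key. The TBAR project key is a hypothetical example; the script would be installed as .git/hooks/commit-msg.

```python
#!/usr/bin/env python3
"""commit-msg hook sketch: require a Jira key for traceability."""
import re
import sys

JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")   # e.g. TBAR-123 (hypothetical key)

def main() -> int:
    msg_file = sys.argv[1]                  # git passes the commit-message file path
    with open(msg_file, encoding="utf-8") as f:
        message = f.read()
    if JIRA_KEY.search(message):
        return 0
    sys.stderr.write("commit rejected: message must reference a Jira ticket (e.g. TBAR-123)\n")
    return 1

if __name__ == "__main__":
    sys.exit(main())
```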
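And for the anomaly-detection idea under data-quality checks, a minimal sketch that flags a batch whose row count deviates sharply from recent history using a simple z-score rule; the counts and threshold are illustrative assumptions, not production values.

```python
"""Data-quality sketch: z-score anomaly check on batch row counts."""
import statistics

def is_anomalous(history: list[int], todays_count: int,
                 z_threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return todays_count != mean         # flat history: any change is anomalous
    return abs(todays_count - mean) / stdev > z_threshold

if __name__ == "__main__":
    recent = [10_120, 9_980, 10_305, 10_050, 9_870]   # prior batches' row counts
    print(is_anomalous(recent, 10_150))     # False: within normal variation
    print(is_anomalous(recent, 2_400))      # True: likely a broken upstream feed
```

#DICE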