Insight Global

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a contract-to-hire basis, paying $65-$75/hr, remote. Requires 7+ years in Data Engineering, proficiency in AWS, Python, SQL, and experience with ETL/ELT processes and HIPAA compliance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
October 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Storage #Scala #Security #Compliance #Elasticsearch #Disaster Recovery #RDS (Amazon Relational Database Service) #Storage #Deployment #Business Analysis #ML (Machine Learning) #Datasets #Data Pipeline #Data Quality #Data Engineering #Data Processing #Microservices #SQL (Structured Query Language) #Airflow #ETL (Extract, Transform, Load) #Amazon RDS (Amazon Relational Database Service) #Databases #AWS (Amazon Web Services) #Indexing #Data Lifecycle #Python #Data Privacy #Redshift
Role description
Job Description: Our Healthcare Insurance Client is building a new 4-person Sr. Data Engineering team to support growth in 2 major portfolios within their Flagship Secure Member Portal. This team will help handle interrupt work and self-service work. These positions are remote (work from home) and will be contract-to-hire. As a Senior Data Engineer, you'll be an integral part of our team, helping to design, build, and maintain the infrastructure and systems that support the effective collection, storage, and processing of large volumes of data. In this role, you'll develop robust data pipelines that ingest, transform, and load (ETL/ELT) data from a variety of sources into our digital platforms. Your dedication to ensuring data quality, integrity, and security throughout the data lifecycle will be key, as will your efforts to optimize our data storage solutions for performance and scalability.

Key Responsibilities:
• Help design and implement scalable data pipelines and architectures that support low-latency, high-performance data processing.
• Support and enhance existing data processing platforms deployed in AWS, ensuring optimal performance and scalability.
• Collaborate closely with software engineers, product managers, and business analysts to understand data requirements and deliver effective solutions.
• Contribute to the development of best practices and standards for data engineering and machine learning, promoting efficiency and quality across projects.
• Implement backup, disaster recovery, and failover procedures for mission-critical data systems using AWS services such as Amazon RDS.
• Help manage and scale Elasticsearch clusters for efficient indexing and retrieval of large datasets for microservices.
• Ingest data from various sources, such as flat files, streaming systems, and RESTful APIs.
• Ensure compliance with data privacy regulations, such as HIPAA, to protect sensitive information.
Required Skills and Experience:
• 7+ years of experience in a Data Engineering role, with a proven track record of successfully delivering complex, large-scale, multi-team data projects.
• 5+ years of proficiency working with diverse data infrastructures, including relational databases (e.g., SQL) and column stores (e.g., Redshift).
• 3+ years of experience building scalable data pipelines using scheduling tools like Airflow.
• 5+ years of proven ability to document processes, architecture designs, deployment pipelines, development methods, and troubleshooting techniques.
• 5+ years of proficiency in Python, with a focus on scaling data pipelines.

Compensation: $65/hr - $75/hr (W2 ONLY)