

Covenant Consulting
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer, offering a 6+ month remote contract with an open pay rate. Key skills include Apache Airflow, Python, AWS, and data warehousing. Experience in healthcare data solutions is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
November 21, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Kansas City Metropolitan Area
-
🧠 - Skills detailed
#Data Engineering #Data Analysis #Python #GCP (Google Cloud Platform) #Consulting #ETL (Extract, Transform, Load) #Snowflake #Redshift #Time Series #Airflow #Scripting #Java #AWS (Amazon Web Services) #Big Data #SageMaker #Data Warehouse #Data Governance #Data Quality #Databricks #Data Processing #Documentation #Visualization #Data Access #Tableau #Data Architecture #Data Design #Cloud #Agile #API (Application Programming Interface) #Microsoft Power BI #Azure #Data Integration #Security #Amazon Redshift #ML (Machine Learning) #Compliance #NoSQL #DMP (Data Management Platform) #Apache Airflow #SQL (Structured Query Language) #Scala #Schema Design #Automation #Databases #Grafana #Data Modeling #BI (Business Intelligence) #Data Pipeline #Datasets
Role description
NO 3rd PARTIES / NO C2C PLEASE
Sr. Data Engineer (remote)
Terms: 6+ month contract/consulting engagement (option to extend contract or convert to full time status)
Salary/Rate: Open
Location: Remote (core team is in Kansas City)
Summary:
Seeking a talented Sr. Data Engineer who will be responsible for designing and implementing scalable data pipelines for our organization; this is an ideal role for someone who is passionate about data and technology. The position requires a deep understanding of data architectures, data warehousing, databases, and analytics tools, with the goal of creating an efficient system for collecting, processing, analyzing, and visualizing large amounts of data from various sources.
Goals for this role are to ensure the accuracy of data, promote data quality, transform data into more useful formats, and democratize data so that fellow employees can readily work with it.
Our client is a cloud-based healthcare solutions provider created to simplify and enhance administrative processes by utilizing cutting-edge technologies. Combining over a decade of expertise in both the healthcare and technology industries, they offer the first fully Automated Workflow Optimization solution, drastically reducing processing time and complexity while minimizing the turnaround time for stop-loss quoting, contracting, and accounting. In short, they are committed to developing smarter automation for your existing workflow.
Responsibilities:
• Build data systems: Design and construct data products and services, including databases, data pipelines, and data models. Build out new API integrations to support continuing increases in data volume and complexity. Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from various data sources using AWS and SQL technologies (a minimal pipeline sketch follows this list).
• Collect and store data: Acquire datasets that meet business needs, and ensure data is stored securely. Assemble large, complex sets of data that meet non-functional and functional business requirements.
• Data Operations / Testing & Maintenance: Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it. Write unit/integration tests, contribute to the engineering wiki, and document work. Perform the data analysis required to troubleshoot data-related issues and assist in their resolution. Design data integrations and a data quality framework. Test and maintain data systems and infrastructure.
• Collaborate with others: Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Work with stakeholders, including data, design, product, and executive teams, and assist them with data-related technical issues.
• Ensure compliance: Ensure compliance with data governance, security policies, regulatory agencies, and partner requirements.
• Select technology: Choose the appropriate technology for the company's needs, such as SQL and NoSQL databases and cloud-based warehouse technologies. Define company data assets (data models) and the processes used to populate them.
• Write code for required customization.
• Develop tools: Create new data validation methods and data analysis tools. Build analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition.
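For illustration only, here is a minimal sketch of the kind of ETL pipeline described above, assuming a recent Apache Airflow 2.x release and its TaskFlow API. The DAG name, sample data, warehouse target, and validation rule are hypothetical placeholders, not details from this posting.

```python
# Minimal daily ETL DAG sketch (hypothetical names and data).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def claims_etl():
    @task
    def extract() -> list[dict]:
        # In practice this would pull from an API, S3, or a SQL source.
        return [{"claim_id": 1, "billed_amount": "125.50"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Cast amounts to numbers and drop rows that fail a basic
        # data quality rule (non-negative billed amount).
        cleaned = []
        for row in rows:
            amount = float(row["billed_amount"])
            if amount >= 0:
                cleaned.append({**row, "billed_amount": amount})
        return cleaned

    @task
    def load(rows: list[dict]) -> None:
        # In practice this would write to the warehouse (e.g., Redshift).
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


claims_etl()
```

In a real deployment the extract and load steps would use provider hooks or operators rather than plain Python, and data quality checks would typically run as their own tasks so failures are visible in the DAG.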
Skills & Qualifications:
• Deep experience with data/DMP/ETL workflow orchestration platforms – specifically Apache Airflow
• Experience with Python and Java
• Experience with big data service frameworks (e.g., Snowflake, Databricks, SageMaker)
• Experience with document databases (e.g., Elasticsearch, CouchDB, MongoDB)
• Experience with time series databases (e.g., InfluxDB, Druid, Amazon Timestream)
• Working knowledge of visualization and reporting tools (e.g., Tableau, Power BI, QuickSight, Grafana)
• Understanding of data warehouse and Extract, Transform, Load (ETL) tools like Amazon Redshift
• Experience with cloud computing tools such as AWS, Azure, and GCP
• SQL experience (NoSQL experience is a plus)
• Experience with schema design and dimensional data modeling (see the schema sketch after this list)
• Experience with automation and scripting
• Knowledge of machine learning
• Knowledge of basic data visualization in Excel, Tableau and Power BI
• Ability to manage and communicate data warehouse plans to internal stakeholders
• Experience designing, building, and maintaining data processing systems
• Experience working with either a MapReduce or an MPP system at any size/scale
• Knowledge of best practices and IT operations in an always-up, always-available service
• Experience with or knowledge of Agile Software Development methodologies
• Process-oriented with great documentation skills
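As an illustration of the schema design and dimensional modeling skills above, here is a minimal star-schema sketch using SQLAlchemy Core. The table and column names (dim_provider, dim_date, fact_claims) are hypothetical examples, not taken from this posting.

```python
# Minimal star-schema sketch: two dimensions and one fact table.
from sqlalchemy import (
    Column, Date, ForeignKey, Integer, MetaData, Numeric, String, Table,
)

metadata = MetaData()

# Dimension: one row per provider, carrying descriptive attributes.
dim_provider = Table(
    "dim_provider", metadata,
    Column("provider_key", Integer, primary_key=True),
    Column("provider_name", String(255), nullable=False),
    Column("specialty", String(100)),
)

# Dimension: one row per calendar date.
dim_date = Table(
    "dim_date", metadata,
    Column("date_key", Integer, primary_key=True),
    Column("full_date", Date, nullable=False),
    Column("fiscal_quarter", String(6)),
)

# Fact: one row per claim, referencing each dimension by surrogate key.
fact_claims = Table(
    "fact_claims", metadata,
    Column("claim_id", Integer, primary_key=True),
    Column("provider_key", Integer, ForeignKey("dim_provider.provider_key")),
    Column("date_key", Integer, ForeignKey("dim_date.date_key")),
    Column("billed_amount", Numeric(12, 2)),
)
```

The point of the layout is that the fact table stays narrow and numeric while descriptive attributes live in the dimensions, which keeps BI tools such as Tableau or Power BI straightforward to point at the warehouse.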






