

TALENT Software Services
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown", offering a pay rate of "unknown", and is 100% remote. Key skills include SQL, Python, data modeling, ETL, and experience with GCP and Azure.
Country
United States
Currency
$ USD
-
Day rate
512
-
Date
April 10, 2026
Duration
Unknown
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#Microsoft Azure #SQL Queries #Python #Cloud #Logging #Azure #Data Ingestion #GCP (Google Cloud Platform) #Metadata #Data Pipeline #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Data Engineering #Data Modeling #Data Quality #Data Exploration #Data Processing #Monitoring #Data Management #Data Mining #Data Transformations #BigQuery #Terraform #AI (Artificial Intelligence) #Data Analysis #Data Warehouse #Databases #Datasets #Clustering #Data Profiling #SQL (Structured Query Language) #Batch
Role description
Are you an experienced Data Engineer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Data Engineer to work at their company 100% remotely.
Primary Responsibilities/Accountabilities:
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Perform complex data analysis and investigation for customer requests to explain results and make appropriate recommendations.
• Think critically and take the initiative to identify improvement opportunities (error detection, error correction, root cause analysis).
• Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
• Apply a successful history of manipulating, processing, and extracting value from large, disconnected datasets.
• Analyze business objectives and develop data solutions to meet customer needs.
• Participate effectively in multiple, concurrent projects.
• Improve and customize current data solutions to meet business functional and non-functional requirements.
• Research new and existing data sources to contribute to new development, improve data management processes, and make recommendations for data quality initiatives.
• Perform periodic data quality reviews for internal and external data.
• Ensure timely resolution of queries and data issues.
• Find new ways to collect data by researching potential new sources of information.
• Work with data and analytics experts to strive for greater functionality in our data systems.
• Analyze and profile data to address business problems by leveraging advanced data modeling, source system databases, or data mining techniques.
• May provide consultative services to departments/divisions and committees.
• Apply problem-solving methodologies, planning techniques, continuous improvement methods, and analytical tools (e.g., data analysis, data profiling, modeling).
• Manage a varied workload of projects with multiple priorities and stay current on healthcare trends and enterprise changes.
• Contribute to the design, configuration, and support of data, analytics, and AI environments across Google Cloud Platform (GCP) and Microsoft Azure, including Microsoft Fabric.
• Build and maintain data pipelines to ingest, cleanse, transform, and curate structured and unstructured data.
• Support batch and near-real-time data ingestion and transformation workflows.
• Use Infrastructure as Code (IaC) tools (e.g., Terraform) to help automate cloud environment provisioning and configuration.
• Configure and support cloud services related to data ingestion, integration, messaging, CI/CD, and data processing.
• Assist with data modeling and performance optimization in cloud data warehouses (e.g., partitioning and clustering in BigQuery).
• Support the setup and tuning of operational databases or data-serving layers based on defined use cases.
• Implement and maintain monitoring, logging, and alerting for data pipelines and platforms.
• Write and maintain data transformations using SQL and Python.
• Collaborate with engineers, analysts, and product teams in an iterative, product-focused environment.
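As a rough illustration of the Python-based cleanse-and-transform pipeline work listed above, the sketch below drops incomplete records, normalizes fields, and aggregates a small batch. The record schema and all field names are hypothetical examples, not taken from the posting.

```python
# Minimal sketch of a batch cleanse-and-transform step.
# The record schema ("id", "region", "amount") is a hypothetical example.

def cleanse(records):
    """Drop records missing required fields; normalize whitespace and case."""
    cleaned = []
    for rec in records:
        if not rec.get("id") or rec.get("amount") is None:
            continue  # skip incomplete records
        cleaned.append({
            "id": str(rec["id"]).strip(),
            "region": str(rec.get("region", "unknown")).strip().lower(),
            "amount": float(rec["amount"]),
        })
    return cleaned

def transform(records):
    """Aggregate cleansed records into per-region totals."""
    totals = {}
    for rec in records:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + rec["amount"]
    return totals

raw = [
    {"id": " a1 ", "region": "East", "amount": "10.5"},
    {"id": "a2", "region": "east ", "amount": 4.5},
    {"id": None, "region": "West", "amount": 99},    # dropped: missing id
    {"id": "a3", "region": "West", "amount": None},  # dropped: missing amount
]
print(transform(cleanse(raw)))  # prints {'east': 15.0}
```

In a real pipeline the same cleanse/transform split maps naturally onto staged SQL models or pipeline tasks, with the cleansed layer materialized before aggregation.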
Qualifications:
• Strong interpersonal and time-management skills.
• Strong analytical skills, the ability to identify and recommend solutions, advanced computer application skills, and a commitment to customer service.
• Experience with data analysis, quality, and profiling, including data exploration tools such as Rapid SQL, AQT, Information Analyzer, and Informatica.
• Strong knowledge of SQL required. Ability to identify sets and subsets of information across multiple joins or unions of tables is preferred, in addition to writing and troubleshooting SQL queries for data mining.
• Strong understanding of data modeling concepts.
• Understanding of ETL processes to aid in the verification and testing of data.
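The SQL qualification above (identifying sets and subsets across joins) can be sketched with Python's built-in sqlite3 module. The tables and data here are hypothetical; the posting does not name a database engine.

```python
# Illustrative only: finding a subset via a join, using SQLite's in-memory
# engine and hypothetical members/claims tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members (member_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE claims  (claim_id INTEGER PRIMARY KEY, member_id INTEGER);
    INSERT INTO members VALUES (1, 'Ada'), (2, 'Grace'), (3, 'Edsger');
    INSERT INTO claims  VALUES (10, 1), (11, 1), (12, 3);
""")

# Set difference via a LEFT JOIN anti-join: members with no claims.
rows = conn.execute("""
    SELECT m.name
    FROM members AS m
    LEFT JOIN claims AS c ON c.member_id = m.member_id
    WHERE c.claim_id IS NULL
    ORDER BY m.name
""").fetchall()
print(rows)  # prints [('Grace',)]
```

The same subset could be written with `NOT EXISTS` or `EXCEPT`; being able to move between those forms and reason about the resulting row sets is the skill the qualification describes.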




