

TalentBridge
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." It requires 5-7+ years of experience, proficiency in Databricks and SAP Datasphere, and strong SQL skills. Remote work with onsite interviews in KY.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 31, 2026
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #SAP Hana #Storage #Cloud #Computer Science #Data Lakehouse #Data Quality #BI (Business Intelligence) #Data Engineering #Tableau #Airflow #ADF (Azure Data Factory) #Scala #Big Data #Data Security #Databricks #Data Modeling #Python #AWS (Amazon Web Services) #Data Integration #Security #SAP #GCP (Google Cloud Platform) #Data Lake #Programming #Spark (Apache Spark) #Data Pipeline #Delta Lake #Documentation #Azure #PySpark #Data Governance #Data Science #Microsoft Power BI #MLflow #Data Architecture #Data Warehouse #SAP BW #SQL (Structured Query Language) #Data Orchestration #Azure Data Factory
Role description
Position: Data Engineer
Location: Remote (Onsite interview required in KY)
Contract
We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. The ideal candidate will have strong experience designing and developing scalable data pipelines, integrating complex systems, and optimizing data workflows. Proficiency in Databricks and SAP Datasphere is preferred, as these platforms are central to our data ecosystem; however, a demonstrated ability to adapt quickly to new technologies is also highly valued. This role will play a critical part in ensuring the accessibility, reliability, and performance of our enterprise data infrastructure to enable impactful business intelligence and data science initiatives.
How You Make an Impact (Job Accountabilities)
• Design, build, and maintain robust, scalable, and high-performance data pipelines using Databricks and SAP Datasphere.
• Collaborate with data architects, analysts, data scientists, and business stakeholders to gather requirements and deliver data solutions aligned with stakeholders' goals.
• Integrate diverse data sources (e.g., SAP, APIs, flat files, cloud storage) into the enterprise data platforms.
• Ensure high standards of data quality and implement data governance practices.
• Stay current with emerging trends and technologies in cloud computing, big data, and data engineering.
• Provide ongoing support for the platform, troubleshoot issues as they arise, and ensure high availability and reliability of the data infrastructure.
• Create documentation for the platform infrastructure and processes, and effectively train other team members and users on the platform.
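The pipeline responsibilities above follow a common extract-transform-load shape. A minimal, self-contained sketch of that shape in plain Python is shown below; the file contents, column names, and data-quality rule are hypothetical, and real pipelines in this role would use Databricks (PySpark and Delta Lake) rather than the standard library:

```python
import csv
import io
import sqlite3

# Hypothetical raw input -- in practice this would come from SAP extracts,
# APIs, flat files, or cloud storage, as the role description notes.
RAW_CSV = """order_id,amount,region
1,100.50,KY
2,,OH
3,75.00,KY
"""

def extract(text: str) -> list[dict]:
    """Parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple]:
    """Apply a basic data-quality rule: drop rows with missing amounts."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["region"])
        for r in rows
        if r["amount"]  # skip blank amounts
    ]

def load(rows: list[tuple]) -> sqlite3.Connection:
    """Persist cleaned rows to a queryable store (here, in-memory SQLite)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INT, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    return conn

conn = load(transform(extract(RAW_CSV)))
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'KY'"
).fetchone()[0]
```

The structure (ingest, validate, persist, query) is what scales up on Databricks, where `extract` becomes a Spark reader, `transform` a DataFrame operation, and `load` a Delta Lake write.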
What You Bring to the Role (Job Qualifications / Education / Skills / Requirements / Capabilities)
• Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
• 5-7+ years of experience in a data engineering or related role.
• Strong knowledge of data engineering principles, data warehousing concepts, and modern data architecture.
• Proficiency in SQL and at least one programming language (e.g., Python, Scala).
• Experience with cloud platforms (e.g., Azure, AWS, or GCP), particularly their data services.
• Familiarity with data processing and orchestration tools (e.g., PySpark, Airflow, Azure Data Factory) and CI/CD pipelines.
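Orchestration tools such as Airflow and Azure Data Factory model a pipeline as a dependency graph of tasks and compute a valid execution order. The toy sketch below illustrates that core idea with the standard library; the task names are hypothetical, and a real DAG would be declared through the orchestrator's own API:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
# Airflow expresses the same graph with operators and >> dependencies.
dag = {
    "extract_sap": set(),
    "extract_api": set(),
    "transform": {"extract_sap", "extract_api"},
    "load_warehouse": {"transform"},
    "refresh_bi": {"load_warehouse"},
}

# A valid run order executes every task only after all of its dependencies.
run_order = list(TopologicalSorter(dag).static_order())
```

An orchestrator adds scheduling, retries, and monitoring on top of this ordering, but the dependency graph is the underlying abstraction.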
Competencies Desired
• Hands-on experience with Databricks (including Spark/PySpark, Delta Lake, MLflow, Unity Catalog, etc.).
• Practical experience working with SAP Datasphere (or SAP Data Warehouse Cloud) in data modeling and data integration scenarios.
• SAP BW or SAP HANA experience is a plus.
• Experience with BI tools like Power BI or Tableau.
• Understanding of data governance frameworks and data security best practices.
• Exposure to data lakehouse architecture and real-time streaming data pipelines.
• Certifications in Databricks, SAP, or cloud platforms are advantageous.






