

S Piper Staffing LLC
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a fully remote Data Engineer contract position focused on designing and managing data lake solutions using MS Fabric. Key skills include Python, SQL, data integration, and experience with cloud platforms. Contract length and pay rate are unspecified.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
October 9, 2025
Duration
Unknown
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#Microsoft Power BI #Scala #Data Lake #Data Quality #Data Profiling #Visualization #Apache Spark #DevOps #Data Engineering #BI (Business Intelligence) #Big Data #Spark (Apache Spark) #Deployment #Azure #Data Governance #Model Deployment #Python #Cloud #Databricks #Security #Agile #Programming #Data Storage #SQL (Structured Query Language) #Data Integration #Data Pipeline #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #ML (Machine Learning) #Storage #Databases #Hadoop #DAX
Role description
Title: Data Engineer
Type: Contract / Fully Remote
Location: Fully Remote / U.S. Only / No Sponsorship
Summary:
Our client is looking for innovative professionals who are passionate about driving exceptional outcomes for their customers. You'll be empowered to challenge the status quo and encouraged to think differently, leveraging a growth mindset to deliver tangible results. Here, professional growth isn't just a concept; it's a reality, fueled by continuous learning and a supportive, forward-thinking environment.
Our client builds bespoke software using modern technologies and is on a mission to help their clients flourish with smart solutions that solve their business needs. As a Data Engineer, you will be responsible for designing, developing, and managing scalable data pipelines that feed into the client's data lake using MS Fabric. You will work closely with the Lead Data Engineer, analysts, and other stakeholders to ensure that the data infrastructure supports both current and future needs. Your role will involve integrating various data sources, ensuring data quality, and building a robust, scalable data lake architecture.
Key Responsibilities:
• Design, implement, and manage data lake solutions using MS Fabric.
• Develop and maintain data pipelines to extract, transform, and load (ETL) data from various structured and unstructured sources.
• Collaborate with cross-functional teams to understand data requirements and ensure seamless integration of new data sources into the data lake.
• Optimize data storage and retrieval processes to improve performance and scalability.
• Ensure the integrity, security, and availability of the data lake by implementing best practices in data governance and management.
• Perform data profiling, cleansing, and transformation to ensure data quality.
• Monitor and troubleshoot data flows to ensure reliable operation of the data pipelines.
• Stay up to date with the latest trends and technologies in data engineering, particularly in relation to MS Fabric and data lake architectures.
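To illustrate the extract-transform-load pattern these responsibilities describe, here is a minimal, framework-agnostic sketch. It is not the client's actual pipeline or the MS Fabric API (in a Fabric notebook you would typically write to a Lakehouse via Spark/Delta instead); the field names and the local JSON-lines "lake" are illustrative stand-ins.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def extract(records):
    """Extract: yield raw records (stand-in for an API or database source)."""
    yield from records

def transform(rows):
    """Transform: basic cleansing plus a simple data-quality gate."""
    for row in rows:
        # Reject rows missing the required key rather than loading bad data.
        if not row.get("customer_id"):
            continue
        yield {
            "customer_id": str(row["customer_id"]).strip(),
            "amount": round(float(row.get("amount", 0)), 2),
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        }

def load(rows, lake_root="lake/sales"):
    """Load: append cleaned rows to a date-partitioned JSON-lines file."""
    partition = Path(lake_root) / datetime.now(timezone.utc).strftime("dt=%Y-%m-%d")
    partition.mkdir(parents=True, exist_ok=True)
    count = 0
    with (partition / "part-000.jsonl").open("a", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")
            count += 1
    return count

raw = [{"customer_id": " 42 ", "amount": "19.99"}, {"amount": "5"}]
loaded = load(transform(extract(raw)))
print(loaded)  # 1 — the record without customer_id is rejected
```

The date-based partition folder mirrors the common `dt=YYYY-MM-DD` layout used in data lakes so downstream readers can prune by ingestion date.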
Requirements:
• Proven experience as a Data Engineer, with a strong focus on data lake architecture and development.
• Hands-on experience with MS Fabric and building data lakes.
• Proficiency in data integration techniques for structured and unstructured data from multiple sources (e.g., APIs, databases, cloud services).
• Strong programming skills in Python, SQL, or other relevant languages.
• Experience with cloud platforms such as Azure, AWS, or Google Cloud for data storage and processing.
• Solid understanding of data warehousing, ETL processes, and big data technologies.
• Strong experience with Power BI and DAX for data visualization.
• Knowledge of data governance principles and practices.
• Strong problem-solving skills with attention to detail.
• Excellent communication and collaboration skills.
Preferred Skills & Experience:
• Experience with additional data tools and platforms, such as Apache Spark, Hadoop, or Databricks.
• Familiarity with machine learning workflows and model deployment in a data lake environment.
• Experience working in Agile or DevOps environments.