

Clustera
Senior / Principal Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior / Principal Data Engineer on a flexible, remote contract. It requires 7-10 years of experience with data platforms, proficiency in SQL and in Python or Scala, and expertise with cloud technologies such as Snowflake and BigQuery.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 14, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Data Modeling #SQL (Structured Query Language) #C++ #Infrastructure as Code (IaC) #Data Ingestion #Data Pipeline #Storage #Scala #Data Quality #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #Cloud #Apache Spark #Snowflake #Kafka (Apache Kafka) #Airflow #BigQuery #Data Engineering #GitHub #Datasets #Python #Data Lake #dbt (data build tool) #Terraform #Data Processing #Spark (Apache Spark) #Linux #Databricks #Programming #ETL (Extract, Transform, Load) #Redshift
Role description
Role: Senior / Principal Data Engineer
Location: 100% Remote (Global)
Job Type: Contract, Hourly
Hours: Flexible
About Us & The Opportunity
We are a small, ambitious team of deeply experienced software engineers building distributed applications for Linux. Our culture is built on a foundation of mutual respect, technical excellence, and a shared passion for solving complex problems. Our core philosophy is simple: we hire professionals and we trust you to choose the best tools and methods for the job.
We believe that the best work comes from trusting talented people to do what they do best, without the distractions of corporate bureaucracy. We are looking for a seasoned Data Engineer to join our team of peers. This is a role for someone who loves to build robust, scalable data platforms. There are no unnecessary meetings, no layers of management, and no rigid schedules. You will have a high degree of autonomy and will be expected to own our data infrastructure from architecture to implementation. We're a friendly, relaxed, and highly technical group that values a stress-free environment where you can focus on the fun parts of engineering.
We are committed to the well-being and growth of our team members, both professionally and personally. We don't care about your age, location, time zone, or gender. We care about your work, your ideas, your track record, and your productivity (and you too, of course).
What You'll Be Doing
• Architecting and building scalable and reliable data infrastructure from the ground up.
• Designing, implementing, and optimizing robust ETL/ELT pipelines to process and transform large datasets efficiently (a sketch of such a pipeline follows this list).
• Owning the data platform, including data warehousing, data lakes, and the associated processing frameworks.
• Ensuring data quality, integrity, and accessibility for downstream consumers like analytics and machine learning teams.
• Solving challenging technical problems related to data ingestion, storage, performance, and scalability in a distributed environment.
• Collaborating directly with other senior-level engineers in a focused, highly productive environment to build a cohesive data ecosystem.
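To give a concrete flavor of the pipeline work described above, here is a minimal PySpark ELT sketch. It is illustrative only: the S3 paths, column names, and aggregation are hypothetical placeholders, not our actual pipeline.

```python
# Minimal ELT sketch using PySpark. All paths and names below
# (s3://example-bucket/..., user_id, event_ts) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-elt").getOrCreate()

# Extract: read raw JSON events from a (hypothetical) S3 data lake.
raw = spark.read.json("s3://example-bucket/raw/events/2025-10-14/")

# Transform: enforce types, drop malformed rows, aggregate per user per day.
daily = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropna(subset=["user_id", "event_ts"])
       .groupBy("user_id", F.to_date("event_ts").alias("event_date"))
       .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet for downstream warehouse ingestion.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_events/"
)
```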
Required Skills & Experience
• A minimum of 7-10 years of professional experience building and maintaining production-scale data platforms.
• Demonstrable mastery of data engineering fundamentals, including a deep, practical understanding of:
  • Data modeling and data warehousing/lake architecture.
  • Building and maintaining highly available data pipelines.
  • Data quality and governance best practices.
• Expert-level proficiency in SQL and strong programming skills in at least one language such as Python or Scala.
• Deep, hands-on experience with large-scale data processing frameworks (e.g., Apache Spark, Flink).
• Proven experience with cloud-based data technologies (e.g., Snowflake, BigQuery, Redshift, S3, Databricks).
• Experience with workflow orchestration tools such as Airflow, Dagster, or Prefect (a minimal orchestration sketch follows this list).
• A proven track record of architecting and delivering complex data projects.
• Ability to work independently and productively with minimal supervision.
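As a simplified illustration of the orchestration experience listed above, here is a minimal Airflow DAG sketch (Airflow 2.4+, which introduced the `schedule` argument). The DAG id and the extract/transform callables are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch. The DAG id and the extract/transform
# callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source")  # placeholder

def transform():
    print("clean and aggregate")  # placeholder

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # run once per day
    catchup=False,       # don't backfill missed runs
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # transform runs only after extract succeeds
```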
Bonus Skills
• Experience with real-time streaming data technologies (e.g., Kafka, Kinesis, Flink SQL); see the consumer sketch after this list.
• Familiarity with Infrastructure as Code (e.g., Terraform, CloudFormation).
• Knowledge of modern data stack tools for quality and transformation (e.g., dbt, Great Expectations).
• Experience with modern C++ (C++17 or newer) in a data-intensive environment.
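For the streaming side, here is a minimal consume-and-process sketch using the kafka-python client library; the topic name, broker address, and event fields are all hypothetical.

```python
# Minimal streaming-consume sketch using the kafka-python library.
# The topic name, broker address, and event fields are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                            # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",        # start from the oldest message
)

for message in consumer:
    event = message.value
    # A real pipeline would validate, enrich, and route each event;
    # here we just print two (hypothetical) fields.
    print(event.get("user_id"), event.get("event_type"))
```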
What We Offer
• Complete Autonomy & Trust: You are the expert. We'll give you the goals and trust you to deliver.
• A "No-Meetings" Culture: We value deep work and only meet when it is critical and efficient. Your time is for building things that matter.
• Flexible Hours: We are a results-oriented team. Work the hours that make you most productive, from anywhere in the world.
• A Team of Experts: You will work exclusively with other highly experienced, dedicated professionals you can learn from and collaborate with.
• Stress-Free Environment: We focus on engineering, not politics or bureaucracy. We actively support your well-being and personal development.
• Competitive Contract Rate: We pay well for exceptional talent.
How to Apply
If you are a passionate Data Engineer who values craftsmanship and autonomy, we would love to hear from you. Please apply with a brief note covering:
1. Your GitHub links or other professional work.
2. Your resume (PDF or website preferred).
3. A data engineering project you are particularly proud of. Please describe the business problem, the architecture you designed, the technologies used, the scale of the data, and the project's impact.
4. Your desired hourly contract rate.
Thanks for your time, and we wish you the best in your search. Please send applications to steve.sperandeo@clustera.io, or reach out on LinkedIn, and make sure your note addresses each of the items listed above. Thank you.