MBO Partners

Sr. Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer working remotely on a contract that runs through September 2026. The pay rate is competitive. Key requirements include 5+ years of experience building production data pipelines and 3+ years with Spark and Kafka. A Bachelor's degree and the ability to obtain a Public Trust clearance are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 5, 2025
-
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Yes
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Lake #JSON (JavaScript Object Notation) #Schema Design #Batch #Databricks #Scala #SQL (Structured Query Language) #Java #Datasets #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #Database Schema #Data Lakehouse #PySpark #Python #Data Quality #Spark (Apache Spark) #Data Pipeline #Security #Leadership #NiFi (Apache NiFi) #AWS (Amazon Web Services) #Docker #Observability #Data Engineering #Kubernetes #Microservices
Role description
MBO Partners is a deep jobs platform that connects and enables independent professionals and microbusiness owners to do business safely and effectively with enterprise organizations. Its unmatched experience and industry leadership enable it to operate at the forefront of the independent economy and consistently advance the next way of working.

Sr. Data Engineer
Location: Remote
Anticipated Hours: 40 per week
Duration: As soon as cleared, through September 2026
Security Clearance: Must be able to obtain and maintain a Public Trust

Job Description:
We're looking for an experienced data engineer who thrives on solving applied problems with code, modern tooling, and scalable infrastructure. You'll work with large, complex datasets, build robust ETL/ELT pipelines, and optimize data flows using technologies like Databricks, Spark, and Kafka. In this role, you'll grow your technical expertise, apply best practices, and use tools like Spark, Kafka, EKS, and others. With your drive to establish processes and lead technological innovation, you'll make a lasting impact on the civil market. Join us. The world can't wait.

You Have:
• 5+ years of experience designing, building, and maintaining data pipelines in production environments
• 3+ years of experience developing with Spark (Databricks preferred), PySpark, or similar distributed systems
• 3+ years of experience with data lakehouse or warehouse platforms, schema design, and query optimization
• 3+ years of experience processing data using streaming (Kafka, Kinesis, etc.) and batch methods
• Proficiency in Python and SQL (Scala or Java a plus)
• Experience implementing best practices for data quality, testing, and observability
• Strong understanding of structured/unstructured data formats (Parquet, Avro, JSON, Delta)
• Knowledge of data, information, and message exchange structures and standards
• Ability to obtain and maintain a Public Trust or Suitability/Fitness determination based on client requirements
• Bachelor's degree

Nice If You Have:
• 3+ years of experience in data analytics
• Experience designing data flows that leverage a medallion architecture
• Experience with containerization and orchestration (Docker, Kubernetes, EKS)
• Experience with Kafka
• Experience with NiFi
• Experience using AWS
• Knowledge of microservices and integrating with data services
• Knowledge of database schema design
• Master's degree

Eligibility Requirements:
• Legal authorization to work in the U.S. is required.
• As a contractor, including remote contractors, you may be required to complete a background check.

As a contractor, you will be paid for the time you work; this does not include paid time off (PTO) or holidays. If you participate in our Payroll Services (W2) engagement, you may be eligible for Paid Sick Leave (PSL), depending on your work location and state-specific regulations.
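For a concrete picture of the day-to-day work this posting describes, below is a minimal sketch of a streaming ingest pipeline: PySpark reading a Kafka topic and appending it to a Delta "bronze" table, the raw layer of the medallion architecture mentioned above. It is illustrative only and not part of the job requirements; the broker address, topic name, event schema, checkpoint path, and table name are all hypothetical, and it assumes a Spark 3.x cluster with the Kafka connector and Delta Lake available (standard on Databricks).

# Sketch: stream JSON events from Kafka into a Delta "bronze" table.
# All names below (broker, topic, schema, paths, table) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, current_timestamp, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-bronze").getOrCreate()

# Hypothetical shape of the raw events on the topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("emitted_at", TimestampType()),
])

# Read the topic as an unbounded stream; Kafka delivers key/value as bytes.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events.raw")                 # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Parse the value bytes as JSON and stamp each row with ingest time for lineage.
bronze = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("ingested_at", current_timestamp())
)

# Append to the bronze Delta table; the checkpoint makes the stream restartable
# with exactly-once sink semantics.
query = (
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "/lake/_checkpoints/events_bronze")
    .outputMode("append")
    .toTable("bronze.events")
)
query.awaitTermination()

Silver and gold layers would typically follow the same pattern: read the bronze table as a stream, then apply cleaning, conformance, and aggregation before writing the next table.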