Intellibus

Senior Data Engineer – AWS & Kafka

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer – AWS & Kafka with an unspecified contract length, offering $65-70/hour. Key skills include SQL, AWS, and Kafka, and the role requires 10+ years of data engineering experience, particularly in FinTech environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
May 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Herndon, VA
-
🧠 - Skills detailed
#PostgreSQL #Linux #PySpark #Spark (Apache Spark) #SQL (Structured Query Language) #Data Architecture #Agile #Java #Airflow #ETL (Extract, Transform, Load) #Batch #Snowflake #AWS (Amazon Web Services) #Migration #Observability #Programming #Data Engineering #dbt (data build tool) #Python #Data Migration #Unix #Scala #Data Processing #Kafka (Apache Kafka) #Cloud #Data Ingestion #Data Modeling
Role description
At Intellibus, we engineer platforms that power some of the world’s leading FinTech and Financial Trading organizations. Our Platform Engineering Team works on large-scale cloud and data modernization initiatives involving high-volume distributed systems, real-time data movement, cloud-native engineering, and enterprise-scale data platforms. We are currently looking for strong Data Engineers with deep expertise in SQL, AWS, and Kafka to join high-impact engineering initiatives supporting mission-critical financial platforms.

What We Offer:
• A dynamic environment where your skills will make a direct impact.
• The opportunity to work with cutting-edge technologies and innovative projects.
• A collaborative team that values your passion and focus.

We are looking for Engineers who can:
• Build scalable cloud-native data platforms on AWS.
• Design and optimize large-scale ETL/ELT pipelines.
• Develop real-time and batch data processing systems using Kafka and distributed data technologies.
• Engineer high-performance SQL-based data solutions for enterprise-scale workloads.
• Work on data ingestion, transformation, migration, and warehousing initiatives.
• Partner closely with engineering, platform, and business teams to solve complex data challenges.
• Improve reliability, observability, scalability, and operational excellence across the data ecosystem.
• Contribute to cloud modernization and platform engineering efforts in fast-paced FinTech environments.

Key Skills & Qualifications:
• Strong SQL expertise (advanced querying, optimization, performance tuning).
• Hands-on AWS engineering experience.
• Strong Kafka / event-driven systems experience.
• Experience building scalable ETL/ELT pipelines.
• Python or Java programming experience.
• Experience with data warehousing and distributed data systems.
• Strong understanding of cloud-native data architecture.

What we are looking for:
• 10+ years of Data Engineering experience.
• Strong ownership mindset and problem-solving ability.
• Experience working on large-scale enterprise data platforms.
• Ability to work in fast-moving engineering environments.
• Strong communication and collaboration skills.
• Experience supporting production-grade systems and mission-critical workloads.

Preferred Experience:
• Snowflake.
• Spark / PySpark.
• PostgreSQL.
• Airflow / dbt.
• Data migration initiatives.
• Real-time streaming platforms.
• FinTech, Banking, Trading, or Capital Markets environments.
• Agile engineering teams.

Technologies We Work With:
AWS | Kafka | SQL | Snowflake | Python | Java | Spark | PostgreSQL | Airflow | ETL | Data Warehousing | Unix/Linux | Data Modeling | Cloud Engineering | Real-Time Streaming

Compensation:
$65-70/hour

Our Process:
• Schedule a 15-minute video call with someone from our team.
• Complete 4 proctored GQ tests (under 2 hours).
• Attend a 30-45 minute final video interview.
• Receive a job offer.

If you are interested, please apply, and our team will contact you within the hour.