

Senior Spark Engineer – Big Data & Cloud Solutions
⭐ - Featured Role | Apply directly with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 9, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Ingestion #SQL (Structured Query Language) #AWS EMR (Amazon Elastic MapReduce) #AWS Lambda #Spark SQL #Data Lake #Data Processing #AWS (Amazon Web Services) #FHIR (Fast Healthcare Interoperability Resources) #Data Warehouse #Spark (Apache Spark) #Kafka (Apache Kafka) #Java #Scala #Consulting #Leadership #Delta Lake #Apache Spark #Python #Big Data #REST API #Compliance #Data Governance #Datasets #Cloud #Data Pipeline #Programming #ETL (Extract, Transform, Load) #Data Migration #PySpark #REST (Representational State Transfer) #Linux #Lambda (AWS Lambda) #Data Architecture #Computer Science #Batch #Kubernetes #Migration #Security #Data Engineering
Role description
Position Title: Senior Spark Engineer – Big Data & Cloud Solutions
Location: Remote
Position Status: Contract
Position Description:
We’re looking for a Senior Spark Engineer to lead the design and development of scalable, high-performance data solutions in our AWS cloud infrastructure. This is a hands-on leadership role that blends architecture, engineering, and innovation in Apache Spark, Kafka, and big data ecosystems. You’ll drive the development of robust data pipelines, optimize Spark workloads, and mentor technical teams in delivering enterprise-grade solutions.
Key Responsibilities:
• Architect, develop, and deploy data lake and data warehouse solutions using modern tools in AWS
• Design and implement real-time and batch data ingestion pipelines for high-volume datasets
• Lead data migration and transformation initiatives with a focus on scalability and performance
• Automate end-to-end ETL processes across diverse data sources
• Collaborate with cross-functional teams to define and evolve the AWS application framework
• Integrate software applications using REST APIs, secure coding practices, and proven architectural patterns
• Optimize Spark jobs for performance, reliability, and cost-efficiency
• Guide and mentor engineering teams, fostering a culture of technical excellence
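To make the ETL responsibility above concrete, here is a minimal sketch of an automated extract-transform-load step in plain Python. All names (extract, transform, load, RAW_ROWS) are illustrative; a pipeline of the kind this role describes would typically run equivalent logic as a PySpark job over S3 or Delta Lake rather than over in-memory lists.

```python
# Hypothetical raw source records; "bad" simulates a malformed amount field.
RAW_ROWS = [
    {"id": "1", "amount": "10.50", "region": "us-east"},
    {"id": "2", "amount": "bad", "region": "us-west"},
    {"id": "3", "amount": "7.25", "region": "us-east"},
]

def extract(rows):
    """Yield raw records from the (hypothetical) source system."""
    yield from rows

def transform(rows):
    """Cast fields to typed values, dropping records that fail validation."""
    for row in rows:
        try:
            yield {
                "id": int(row["id"]),
                "amount": float(row["amount"]),
                "region": row["region"],
            }
        except ValueError:
            # A production pipeline would route this to a dead-letter sink.
            continue

def load(rows):
    """Aggregate totals per region, standing in for a warehouse write."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

totals = load(transform(extract(RAW_ROWS)))
print(totals)  # {'us-east': 17.75}
```

The generator-based chaining mirrors how Spark composes lazy transformations before an action materializes the result.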
Required Skills & Experience:
• 7+ years of experience in big data engineering, data architecture, or cloud-native development
• Deep expertise in Apache Spark, PySpark, Spark Structured Streaming, and Kafka
• Strong programming skills in Python, Java, and SQL
• Experience with Linux/RHEL 8, AWS EMR, and Kubernetes for distributed data processing
• Proven ability to design high-throughput, low-latency pipelines using Kafka
• Hands-on experience with Delta Lake, hybrid cloud architectures, and Spark SQL
• Strong communication skills with the ability to explain complex concepts to technical and non-technical stakeholders
• Demonstrated leadership in coaching and mentoring engineering teams
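As a sketch of the Kafka requirement above: keyed partitioning is what lets a topic scale throughput across partitions while preserving per-key ordering, because records with the same key always land on the same partition. The function and constant below are illustrative only; Kafka's actual default partitioner uses murmur2, whereas this dependency-free sketch uses `zlib.crc32`.

```python
import zlib

NUM_PARTITIONS = 6  # hypothetical partition count for the topic

def choose_partition(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a record key to a partition."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Same key -> same partition, so per-key order survives parallel consumption.
keys = ["patient-42", "patient-7", "patient-42", "patient-99"]
assignments = [choose_partition(k) for k in keys]
assert assignments[0] == assignments[2]
```

Spreading distinct keys across partitions is what makes the "high-throughput, low-latency" design point achievable: consumers scale out per partition without losing ordering guarantees within a key.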
Preferred Qualifications:
• Experience with HL7 FHIR specifications
• Familiarity with serverless architectures (e.g., AWS Lambda, Step Functions)
• Background in data governance, security, and compliance frameworks
Required Education:
Bachelor’s degree in Computer Science, Engineering, or a related field
About Seneca Resources:
At Seneca Resources, we are more than just a staffing and consulting firm; we are a trusted career partner. With offices across the U.S. and clients ranging from Fortune 500 companies to government organizations, we provide opportunities that help professionals grow their careers while making an impact.
When you work with Seneca, you’re choosing a company that invests in your success, celebrates your achievements, and connects you to meaningful work with leading organizations nationwide. We take the time to understand your goals and match you with roles that align with your skills and career path. Our consultants and contractors enjoy competitive pay, comprehensive health, dental, and vision coverage, 401(k) retirement plans, and the support of a dedicated team who will advocate for you every step of the way.