Fortune

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown" and a listed day rate of $520 USD. Requires 10+ years of experience in big data processing, cloud-native development, and expertise in Apache Spark, AWS, Java, and NoSQL databases.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date
December 10, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Durham, NC
🧠 - Skills detailed
#Batch #Apache Spark #Strategy #EC2 #Disaster Recovery #S3 (Amazon Simple Storage Service) #Spark (Apache Spark) #AWS S3 (Amazon Simple Storage Service) #Storage #API (Application Programming Interface) #Java #Databases #Code Reviews #Data Engineering #Cloud #Lambda (AWS Lambda) #Datasets #Observability #Scala #AWS (Amazon Web Services) #Big Data #NoSQL #REST (Representational State Transfer) #Data Processing #Data Pipeline #ETL (Extract, Transform, Load)
Role description
Job Summary

We are seeking an experienced Senior Big Data & Cloud Engineer to design, build, and deliver advanced API and data solutions that support financial goal planning, investment insights, and projection tools. This role is ideal for a seasoned engineer with 10+ years of hands-on experience in big data processing, distributed systems, cloud-native development, and end-to-end data pipeline engineering. You will work across retail, clearing, and custody platforms, leveraging modern cloud and big data technologies to solve complex engineering challenges. The role involves driving technology strategy, optimizing large-scale data systems, and collaborating across multiple engineering teams.

Key Responsibilities
- Design and develop large-scale data movement services using Apache Spark (EMR) or Spring Batch.
- Build and maintain ETL workflows, distributed pipelines, and automated batch processes.
- Develop high-quality applications using Java, Scala, REST, and SOAP integrations.
- Implement cloud-native solutions leveraging AWS S3, EMR, EC2, Lambda, Step Functions, and related services.
- Work with modern storage formats and NoSQL databases to support high-volume workloads.
- Contribute to architectural discussions and code reviews across engineering teams.
- Drive innovation by identifying and implementing modern data engineering techniques.
- Maintain strong development practices across the full SDLC.
- Design and support multi-region disaster recovery (DR) strategies.
- Monitor, troubleshoot, and optimize distributed systems using advanced observability tools.

Required Skills
- 10+ years of experience in software/data engineering with strong big data expertise.
- Proven ability to design and optimize distributed systems handling large datasets.
- Strong communicator who collaborates effectively across teams.
- Ability to drive architectural improvements and influence engineering practices.
- Customer-focused mindset with commitment to delivering high-quality solutions.
- Adaptable, innovative, and passionate about modern data engineering trends.