Senior Big Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Big Data Engineer on a contract basis, offering a competitive pay rate. Candidates should have 7+ years of experience with Spark, Hadoop, and cloud technologies (AWS, Azure), alongside a Bachelor’s degree in a related field.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
September 13, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
San Francisco Bay Area
🧠 - Skills detailed
#Cloud #Azure #Data Mart #Scripting #SQL (Structured Query Language) #Data Ingestion #Shell Scripting #Cloudera #Snowflake #Computer Science #Data Engineering #Leadership #Deployment #Jenkins #PostgreSQL #Spark (Apache Spark) #Python #Programming #Terraform #Data Warehouse #AWS (Amazon Web Services) #Hadoop #Data Lake #NiFi (Apache NiFi) #Security #Docker #Azure cloud #Kafka (Apache Kafka) #MariaDB #ETL (Extract, Transform, Load) #Kubernetes #MySQL #Databases #Data Science #Big Data
Role description
If you are passionate about building reliable data platforms and thrive in a fast-paced, collaborative environment, come join our dynamic team. As a Senior Big Data Engineer, you will design and scale next-generation data platforms with Spark, Hadoop, Hive, Kafka, and cloud technologies such as AWS, Azure, and Snowflake. In this role, you will develop pipelines, optimize big data ecosystems, and partner with product, engineering, and data science teams to deliver impactful business insights.

Responsibilities:
• Design, develop, and support data applications and platforms with a focus on Big Data/Hadoop, Python/Spark, and related technologies.
• Partner with leadership to conceptualize next-generation products, contribute to the technical architecture of the data platform, resolve production issues, and drive continuous process improvements.
• Collaborate closely with product management, business stakeholders, engineers, analysts, and data scientists.
• Deliver end-to-end ownership of components, from inception through production release.
• Recommend and implement software engineering best practices with enterprise-wide impact.
• Ensure the quality, security, maintainability, and cost-effectiveness of delivered solutions.
• Contribute expertise across the full software development lifecycle (coding, testing, deployment) and mentor peers on best practices.
• Stay current with emerging technologies and adapt quickly to new tools and approaches.

Qualifications:
• Bachelor’s degree in Computer Science, Software Engineering, or a related field.
• At least 7 years of experience designing, developing, and maintaining Big Data platforms, including data lakes, operational data marts, and analytics data warehouses.
• Deep expertise in Spark, Hadoop/MapReduce, Hive, Kafka, and distributed data ecosystems.
• Hands-on experience building and managing data ingestion, validation, transformation, and consumption pipelines using tools such as Hive, Spark, EMR, Glue ETL/Catalog, and Snowflake.
• Strong background in ETL development with tools such as Cloudera/Hadoop, NiFi, and Spark.
• Proficiency with SQL databases (PostgreSQL, MySQL/MariaDB).
• Solid understanding of AWS and Azure cloud infrastructure, distributed systems, and reliability engineering practices.
• Experience with infrastructure-as-code and CI/CD tools, including Terraform, Jenkins, Kubernetes, and Docker.
• Familiarity with APIs and integration patterns.
• Strong programming skills in Python and shell scripting.