Big Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Engineer with 5-8 years of experience, focusing on MySQL, Python, and Big Data technologies. Key skills include ETL design, data analysis, Spark, and experience with Kafka and Amazon services. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
May 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Philadelphia, PA
🧠 - Skills detailed
#Amazon Redshift #Data Quality #Tableau #SQL (Structured Query Language) #RDS (Amazon Relational Database Service) #Kafka (Apache Kafka) #Visualization #Data Engineering #Big Data #MySQL #Amazon RDS (Amazon Relational Database Service) #Splunk #Python #Data Analysis #Data Processing #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Redshift #GCP (Google Cloud Platform)
Role description
Must Have: MySQL, Big Data, Data Analysis, Python
Big Data - Data Processing | Spark
Detailed Job Description
• 5+ years of experience in Python
• 5+ years of experience in custom ETL design, implementation, and maintenance
• 5+ years of experience with database technologies: SQL, MySQL, Elasticsearch, BigQuery, ETLs, and framework development
• 3+ years of experience working with Big Data
• 5+ years of experience with data analysis and visualization, particularly Tableau and Splunk
• Designing and implementing real-time pipelines
• Excellent analytical and problem-solving skills required
• Experience with SQL performance tuning and end-to-end process optimization
• Experience processing large data sets
• Strong experience with Kafka, Amazon RDS, Amazon Redshift, and GCP
• Experience with data quality and validation
Minimum years of experience: 5-8 years