Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience in Data Engineering, proficient in Python, Java, or Scala, as well as SQL and AWS tools. Contract length is unspecified, with a remote work location and a competitive pay rate.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
πŸ—“οΈ - Date discovered
September 13, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Orlando, FL
-
🧠 - Skills detailed
#Lambda (AWS Lambda) #SQL (Structured Query Language) #Apache Airflow #Data Modeling #Datasets #Scala #Batch #Data Engineering #PySpark #PostgreSQL #Spark (Apache Spark) #Python #Programming #SQL Server #Agile #Athena #DynamoDB #AWS (Amazon Web Services) #Data Lake #Airflow #Scrum #Data Management #S3 (Amazon Simple Storage Service) #Redshift #Java #Data Pipeline
Role description
100% Remote - Contract Data Engineer
• 5+ years of applied experience in Data Engineering, including but not limited to building data pipelines, orchestration, data modeling, and data lakes.
• Programming skills in one or more of Python, Java, or Scala, plus SQL, with experience writing reusable, efficient code to automate analyses and data processes.
• Experience in data warehousing (relational and dimensional data modeling).
• Experience with SQL Server and PostgreSQL.
• Strong working experience with a variety of data sources such as APIs, real-time feeds, and structured and semi-structured file formats.
• Experience processing large datasets and building code using Glue, Apache Airflow / Amazon MWAA, SQL, Python, and PySpark (see the sketches after this list).
• Experience with near-real-time and batch data pipeline development.
• Experience with AWS data management (S3, Redshift, DynamoDB) and related tools (Athena, EMR, Glue, Lambda).
• Experience working in an agile/scrum environment.
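The posting itself contains no code, but as a rough illustration of the kind of work it describes, here is a minimal PySpark batch sketch: read semi-structured JSON from S3, clean it, and write partitioned Parquet to a data-lake location. The bucket names, field names, and job name are hypothetical placeholders, not details from the listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical job name and paths, for illustration only.
spark = (
    SparkSession.builder
    .appName("orders_daily_batch")
    .getOrCreate()
)

# Read raw semi-structured JSON landed in a hypothetical S3 prefix.
raw = spark.read.json("s3://example-raw-bucket/orders/2025/09/13/")

# Basic cleanup: parse timestamps, derive a partition column, de-duplicate.
cleaned = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
)

# Write curated, partitioned Parquet back to a hypothetical data-lake bucket.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-data-lake/curated/orders/")
)

spark.stop()
```

For the orchestration side (Apache Airflow / Amazon MWAA), a minimal DAG sketch might look like the following. The DAG id, schedule, and callable are assumptions; in practice the task would typically submit the batch job above to EMR or run it as a Glue job rather than execute it in-process.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_orders_batch(**_):
    # Placeholder callable: in a real pipeline this would trigger the
    # Spark/Glue job and wait for completion.
    print("submitting orders batch job")


with DAG(
    dag_id="orders_daily_pipeline",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # use schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:
    PythonOperator(
        task_id="run_orders_batch",
        python_callable=run_orders_batch,
    )
```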