

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience in Data Engineering, proficient in Python, Java, or Scala, SQL, and AWS tools. Contract length is unspecified, with a remote work location and a competitive pay rate.
Country
United States
Currency
$ USD
Day rate
$600
Date discovered
September 13, 2025
Project duration
Unknown
Location type
Remote
Contract type
Unknown
Security clearance
Unknown
Location detailed
Orlando, FL
Skills detailed
#Lambda (AWS Lambda) #SQL (Structured Query Language) #Apache Airflow #Data Modeling #Datasets #Scala #Batch #Data Engineering #PySpark #PostgreSQL #Spark (Apache Spark) #Python #Programming #SQL Server #Agile #Athena #DynamoDB #AWS (Amazon Web Services) #Data Lake #Airflow #Scrum #Data Management #S3 (Amazon Simple Storage Service) #Redshift #Java #Data Pipeline
Role description
100% Remote - Contract Data Engineer
• 5+ years of applied experience in Data Engineering, including but not limited to building data pipelines, orchestration, data modeling, and data lakes.
• Programming skills in one or more of Python, Java, or Scala, plus SQL, and experience writing reusable, efficient code to automate analysis and data processes.
• Experience in data warehousing (relational and dimensional data modeling).
• Experience with SQL Server and PostgreSQL.
• Strong working experience with a variety of data sources, such as APIs, real-time feeds, and structured and semi-structured file formats.
• Experience processing large datasets and building pipelines with AWS Glue, Apache Airflow / Amazon MWAA, SQL, Python, and PySpark (see the Airflow sketch after this list).
• Experience with near-real-time and batch data pipeline development (a PySpark batch sketch also follows the list).
• Experience with AWS data management services (S3, Redshift, DynamoDB) and related tools (Athena, EMR, Glue, Lambda).
• Experience working in an Agile/Scrum environment.
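As an illustration of the orchestration stack named above, here is a minimal sketch of a daily batch DAG, assuming Airflow 2.4+ (the version range Amazon MWAA runs). The DAG id, task names, and the placeholder extract/transform/load callables are hypothetical, not taken from the posting.

```python
# Minimal daily batch-pipeline DAG sketch, assuming Airflow 2.4+
# (as available on Amazon MWAA). All names here are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from an API or an S3 landing prefix.
    print("extracting raw data")


def transform(**context):
    # Placeholder: clean and conform the extracted records.
    print("transforming records")


def load(**context):
    # Placeholder: write conformed data to the warehouse (e.g. Redshift).
    print("loading into warehouse")


with DAG(
    dag_id="daily_batch_pipeline",  # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```

In a real pipeline the Python callables would typically give way to Glue job triggers or Spark submissions; the linear dependency shown is just the simplest case.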
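For the PySpark/S3 side of the requirements, a minimal batch-job sketch follows, assuming it runs somewhere `s3://` paths resolve natively (Glue or EMR). The bucket names and the `event_id`/`event_ts` columns are hypothetical.

```python
# Minimal PySpark batch sketch: read semi-structured JSON from S3,
# conform it lightly, and write partitioned Parquet to a data-lake
# prefix. Bucket names and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_batch").getOrCreate()

# Read raw, semi-structured events (schema inferred for brevity).
raw = spark.read.json("s3://example-raw-bucket/events/")

# Light conformance: derive a date partition column and drop duplicates.
conformed = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Parquet partitioned by date keeps Athena/Glue scans cheap.
(conformed.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-lake-bucket/events/"))

spark.stop()
```

Partitioning by `event_date` is what lets downstream Athena queries prune to only the days they touch.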