

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract of unknown length and unknown pay rate, located onsite in West Chester, PA. Key skills include AWS, PySpark, and SQL, plus 5+ years of data engineering experience.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
August 13, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
West Chester, PA
Skills detailed
#Scala #Data Engineering #SQL (Structured Query Language) #Data Pipeline #Java #Databricks #AWS (Amazon Web Services) #Cloud #Security #Data Processing #Data Governance #DevOps #Python #Documentation #PySpark #Data Framework #Data Quality #Spark (Apache Spark)
Role description
Data Engineer
West Chester, PA - Onsite (5 days a week)
• Databricks
• AWS
• PySpark
• SQL
• Writing clean, efficient, and maintainable code for data pipelines, data processing tasks, and data quality checks (see the sketch after this list).
• Implementing unit tests and integration tests to ensure code quality and reliability.
• Troubleshooting and resolving technical issues related to the data framework.
• Staying abreast of the latest advancements in data engineering technologies and best practices.
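As a rough sketch of the kind of data quality check and unit test this work involves, the snippet below uses PySpark; the function, column names, and sample data are hypothetical and not taken from this listing.

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def null_rate(df: DataFrame, column: str) -> float:
    # Data quality check (hypothetical example): fraction of rows where `column` is null.
    total = df.count()
    if total == 0:
        return 0.0
    return df.filter(F.col(column).isNull()).count() / total

def test_null_rate() -> None:
    # Unit test: a 4-row frame with one null id should report a 25% null rate.
    spark = SparkSession.builder.master("local[1]").appName("dq-test").getOrCreate()
    df = spark.createDataFrame(
        [(1, "a"), (2, "b"), (None, "c"), (4, "d")],
        ["id", "value"],
    )
    assert null_rate(df, "id") == 0.25
    spark.stop()

if __name__ == "__main__":
    test_null_rate()
    print("data quality check test passed")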
To be successful in this role, you will need:
• 5+ years of experience in data engineering or a related field.
• 2+ years of experience working with data frameworks.
• Proven experience in designing and developing scalable data frameworks using Python, Java, Spark, and Scala.
• Excellent communication and collaboration skills.
• A passion for data engineering and a commitment to continuous learning.
• Experience in code review and documentation is a plus.
• Experience with cloud platforms like AWS.
• Familiarity with data governance and security principles.
• Experience with DevOps practices and tools.