Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills required include proficiency in Azure Databricks, Synapse Analytics, SQL Hyperscale, and strong programming in Python and PySpark.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 12, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Deployment #Python #Automated Testing #Data Modeling #Java #Programming #Azure #Spark (Apache Spark) #Azure cloud #SQL (Structured Query Language) #Automation #Azure Databricks #Data Quality #Databricks #Version Control #Quality Assurance #Data Engineering #ETL (Extract, Transform, Load) #Synapse #PySpark #Cloud
Role description
Job focused on:
• Python and Java; candidates who were Databricks data engineers and have since moved into data quality.
• Writing code and queries.
• Developing automation test scripts using Python and PySpark.
• Azure Databricks, Synapse Analytics, and SQL Hyperscale.
Requirements:
• Proficiency in data engineering technologies such as Azure Databricks, Synapse Analytics, and SQL Hyperscale.
• Strong programming skills in Python and PySpark for developing automation test scripts.
• Experience building and implementing test frameworks for data-centric applications in Azure cloud environments.
• Knowledge of data quality assurance methodologies and best practices.
• Familiarity with version control systems and CI/CD pipelines for automated testing and deployment.
• Understanding of data modeling, ETL processes, and data warehousing concepts.
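As a rough illustration of the kind of automated data-quality check this role describes, here is a minimal sketch in plain Python. In practice such checks would run as PySpark jobs against Databricks or Synapse tables; the function name, field names, and sample data below are all hypothetical, not from the listing.

```python
def check_data_quality(rows, required_fields):
    """Return (row_index, issue) pairs for rows failing a basic null check.

    rows: list of dicts representing records (a stand-in for a DataFrame).
    required_fields: field names that must be present and non-null.
    """
    issues = []
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) is None:
                issues.append((i, f"missing {field}"))
    return issues


# Hypothetical sample data: the second record fails the null check.
orders = [
    {"order_id": 1, "amount": 10.5},
    {"order_id": 2, "amount": None},
]
print(check_data_quality(orders, ["order_id", "amount"]))
# [(1, 'missing amount')]
```

A test framework for data-centric applications, as the requirements describe, would wrap checks like this in a test runner (e.g. pytest) and gate CI/CD deployment on their results.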