

Senior Data Engineer (PySpark, Python, Databricks) ($60/hr)
Featured Role | Apply direct with Data Freelance Hub
Country: United States
Currency: $ USD
Day rate: 480
Date discovered: September 9, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #SQL (Structured Query Language) #Azure #Data Lake #Data Lakehouse #Apache Iceberg #Snowflake #Data Warehouse #Spark (Apache Spark) #Scala #Azure Data Factory #Data Modeling #Delta Lake #Python #Databricks #Azure cloud #Cloud #Data Quality #Data Pipeline #ETL (Extract, Transform, Load) #Version Control #PySpark #Data Architecture #Computer Science #ADF (Azure Data Factory) #Data Engineering #DevOps
Role description
We are seeking a talented Senior Data Engineer to join our data engineering team and help design, build, and optimize scalable data solutions. The ideal candidate will have strong expertise in Azure cloud services, Databricks, PySpark, and SQL, along with experience in data modeling and modern data architectures. This role will play a key part in building config-driven data pipelines, enabling analytics, and supporting enterprise-wide data initiatives.
Key Responsibilities
• Design, develop, and maintain data pipelines using Azure Data Factory, Databricks Workflows, PySpark, and Python.
• Build and optimize SQL-based transformations to support enterprise data solutions.
• Implement config-driven pipelines to improve scalability, reusability, and maintainability (see the sketch after this list).
• Develop and maintain data models, including Star Schema and Snowflake Schema.
• Work with open table formats such as Delta Lake and Apache Iceberg.
• Partner with stakeholders to understand requirements and translate them into scalable technical solutions.
• Leverage Azure cloud technologies (preferred) to build modern, cloud-native data architectures.
• Collaborate with cross-functional teams to ensure data quality, consistency, and performance.
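For candidates unfamiliar with the config-driven pattern this posting emphasizes, here is a minimal sketch of the idea in PySpark: the pipeline's source, transformations, and target are declared in configuration rather than hard-coded per dataset. All specifics here (config keys, paths, table names) are hypothetical illustrations, not details of this employer's stack.

```python
# Minimal sketch of a config-driven PySpark pipeline.
# All config keys and names (source_path, filter_expr, target_table, etc.)
# are hypothetical examples, not part of any real codebase.
import json

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("config_driven_pipeline").getOrCreate()

# Pipeline behavior is declared in config; in practice this JSON would
# live in a repo or a metadata table rather than inline.
config = json.loads("""
{
  "source_path": "/mnt/raw/orders",
  "source_format": "parquet",
  "filter_expr": "order_status = 'COMPLETE'",
  "select_cols": ["order_id", "customer_id", "order_total", "order_date"],
  "target_table": "analytics.fact_orders"
}
""")

# The same generic code serves any dataset that supplies a config entry.
df = (
    spark.read.format(config["source_format"])
    .load(config["source_path"])
    .filter(config["filter_expr"])
    .select(*config["select_cols"])
)

# Write to a managed table; on Databricks this would typically be Delta.
df.write.mode("overwrite").saveAsTable(config["target_table"])
```

The payoff is the one named in the bullet above: onboarding a new dataset means adding a config entry, not writing and maintaining another pipeline.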
Must-Have Skills
• Hands-on experience with Azure Data Factory, Databricks, PySpark, and Python.
• Strong SQL expertise in building queries, transformations, and optimizations.
• Solid understanding of data modeling (Star Schema, Snowflake Schema).
• Knowledge of Delta Lake and/or Apache Iceberg for data lakehouse solutions (a short Delta MERGE sketch follows this list).
• Familiarity with cloud data platforms (Azure preferred).
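As a rough illustration of the Delta Lake point above: a common lakehouse task is an idempotent upsert via MERGE. This sketch assumes a Databricks runtime or the open-source delta-spark package, and the table and column names are invented for the example.

```python
# Rough Delta Lake upsert sketch (assumes Databricks or delta-spark);
# the table and column names are invented for illustration.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_upsert_demo").getOrCreate()

# Incoming changes, e.g. the latest batch from an ingestion pipeline.
updates = spark.createDataFrame(
    [(1, "alice@example.com"), (2, "bob@example.com")],
    ["customer_id", "email"],
)

# MERGE into an existing Delta table keyed on customer_id: matched rows
# are updated, new rows are inserted, and reruns produce the same result.
target = DeltaTable.forName(spark, "analytics.dim_customer")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```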
Nice-to-Have Skills
• Experience with Snowflake or other modern cloud data warehouses.
• Exposure to CI/CD, version control, and DevOps practices.
Qualifications
• 5+ years of professional experience in data engineering (3+ years considered with a Computer Science degree).
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Strong problem-solving skills and attention to detail.
• Excellent communication skills and the ability to work in collaborative teams.