

Drillo.AI
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Berkeley Heights, New Jersey, offering a contract lasting more than 6 months. It requires 5+ years of experience, strong Python and Snowflake skills, and familiarity with Azure or AWS. Only US Citizens, Green Card holders, or EAD holders may apply.
Country: United States
Currency: $ USD
Day rate: $480
Date: October 17, 2025
Duration: More than 6 months
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Berkeley Heights, NJ
Skills detailed: #Azure Data Factory #Databricks #Programming #PySpark #Security #Big Data #Data Engineering #Data Modeling #Automation #Azure #Airflow #Hadoop #Distributed Computing #Scala #SQL (Structured Query Language) #Data Quality #Data Science #Spark (Apache Spark) #Data Pipeline #Snowflake #Python #ADF (Azure Data Factory) #Cloud #AWS (Amazon Web Services) #Data Transformations #ETL (Extract, Transform, Load) #Data Governance #Data Processing
Role description
Senior Data Engineer
Location: Berkeley Heights, New Jersey
Employment Type: Contract/Full-time
Note: This role is open only to US Citizens, Green Card holders, and EAD holders.
Role Overview: We are seeking a highly skilled Senior Data Engineer with deep expertise in Python programming and Snowflake architecture, including Snowflake administration. The ideal candidate will have a strong background in Big Data technologies, distributed computing, and cloud platforms (Azure or AWS), and will be instrumental in designing, building, and optimizing scalable data pipelines and analytics solutions.
Key Responsibilities:
• Design and implement robust data pipelines using PySpark/Spark and SQL (see the illustrative sketch after this list)
• Architect and administer Snowflake environments, including performance tuning, security, and data governance
• Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions
• Optimize data workflows for performance and scalability across Databricks, Azure, or AWS
• Implement best practices for data modeling, ETL/ELT processes, and CI/CD in data engineering
• Monitor and troubleshoot data infrastructure and ensure data quality and reliability
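For illustration only, here is a minimal PySpark sketch of the kind of pipeline the first responsibility describes. The source path, table layout, and column names are hypothetical placeholders, not details of this role's actual stack.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal illustrative pipeline: read raw orders, clean them, aggregate daily
# revenue, and write a curated output. All paths and columns are hypothetical.
spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical source

clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

daily = (
    clean.groupBy("order_date")
         .agg(
             F.count("order_id").alias("order_count"),
             F.sum("order_total").alias("revenue"),
         )
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_orders/"  # hypothetical target
)
```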
Required Skills & Experience:
• 5+ years of experience in Data Engineering roles
• Strong proficiency in Python for data processing and automation
• Hands-on expertise in Snowflake: architecture, administration, and performance tuning (see the sketch after this list)
• Experience with Big Data ecosystems: Spark, PySpark, Hadoop
• Proficiency in SQL for complex data transformations and analytics
• Familiarity with Databricks and orchestration tools (e.g., Airflow, Azure Data Factory)
• Cloud experience with Azure and/or AWS
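As a rough sketch of the Snowflake administration and tuning side referenced above, the snippet below uses the snowflake-connector-python package to pull recent warehouse credit usage, a common starting point for performance and cost tuning. The role and warehouse names are assumptions made for illustration.

```python
import os

import snowflake.connector  # pip install snowflake-connector-python

# Illustrative only: list warehouses by credits consumed over the last 7 days
# to spot candidates for resizing or auto-suspend tuning. Credentials come from
# environment variables; the role and warehouse names are hypothetical.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="SYSADMIN",
    warehouse="ADMIN_WH",
)

try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT warehouse_name, SUM(credits_used) AS credits
        FROM snowflake.account_usage.warehouse_metering_history
        WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
        GROUP BY warehouse_name
        ORDER BY credits DESC
        """
    )
    for warehouse_name, credits in cur.fetchall():
        print(f"{warehouse_name}: {credits} credits in the last 7 days")
finally:
    conn.close()
```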