Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a contract basis, requiring 8+ years of experience. Located in NYC/NC, it demands expertise in Databricks or Snowflake, SQL, Python, and cloud platforms. Strong data governance and ETL knowledge are essential.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 21, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
New York, United States
-
🧠 - Skills detailed
#DevOps #Storage #Data Engineering #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Data Modeling #Data Quality #Data Science #SQL (Structured Query Language) #AWS (Amazon Web Services) #Python #Data Processing #Databricks #Cloud #Snowflake #Security #Agile #Spark (Apache Spark) #Kafka (Apache Kafka) #Azure #Compliance #Scala #Data Pipeline #Data Governance #Apache Spark
Role description
Job Title: Senior Data Engineer
Experience Required: 8+ Years
Location: NYC/NC
Job Type: Contract

Role Overview
We are seeking an experienced Senior Data Engineer with strong expertise in modern data platforms such as Databricks or Snowflake. The candidate will be responsible for designing, developing, and optimizing scalable data pipelines, ensuring data quality, and enabling advanced analytics solutions.

Key Responsibilities
• Design, build, and maintain large-scale data pipelines and ETL processes.
• Work extensively on Databricks or Snowflake platforms for data processing, transformation, and storage.
• Collaborate with data scientists, analysts, and business teams to deliver high-quality data solutions.
• Implement best practices in data modeling, performance optimization, and query tuning.
• Ensure data governance, security, and compliance within all data solutions.
• Troubleshoot complex data issues and provide root cause analysis.

Required Skills & Experience
• 8+ years of professional experience in Data Engineering.
• Strong hands-on expertise in Databricks or Snowflake (at least one is mandatory, both preferred).
• Proficiency in SQL, Python, or Scala for data processing.
• Solid understanding of data warehousing concepts and ETL frameworks.
• Experience with cloud platforms (AWS, Azure, or GCP).
• Knowledge of data governance, security, and performance optimization.

Good to Have
• Experience with Apache Spark.
• Familiarity with streaming technologies (Kafka, Kinesis, etc.).
• Exposure to CI/CD pipelines and DevOps practices for data solutions.
• Experience working in Agile environments.