

Dexian
Data Engineer #992241
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in New York, NY, on a 12-month contract; the pay rate is not disclosed. Candidates should have 12+ years of experience; strong SQL, Python, and PySpark skills; and familiarity with cloud platforms and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 3, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Security #Storage #Data Architecture #Snowflake #ETL (Extract, Transform, Load) #Spark (Apache Spark) #BigQuery #GCP (Google Cloud Platform) #Compliance #Infrastructure as Code (IaC) #Data Pipeline #Data Processing #Scala #Code Reviews #Azure #Databricks #Data Engineering #Batch #Data Modeling #Git #GitHub #Data Governance #GitLab #SQL (Structured Query Language) #Data Quality #Redshift #Terraform #Logging #Observability #Python #Monitoring #PySpark #AWS (Amazon Web Services) #Cloud
Role description
Job Title: Senior Data Engineer / Lead Data Engineer
Location: New York, NY
Duration: 12-month contract to start
Job Summary
We are seeking an experienced Senior Data Engineer to design, build, and maintain scalable data platforms that support both batch and real-time analytics. This role will play a key part in defining data architecture, ensuring data quality and reliability, and driving best practices across data engineering teams. The ideal candidate is hands-on, technically strong, and comfortable leading architecture discussions and mentoring engineers.
Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines for batch and streaming workloads
• Build and optimize data models to support analytics, reporting, and downstream applications
• Ensure data quality, reliability, and observability across data pipelines
• Optimize data processing performance and manage storage and compute costs
• Lead architecture discussions, participate in code reviews, and mentor junior and offshore engineers
• Implement and enforce security, privacy, and compliance standards across data platforms
• Automate and improve SDLC and CI/CD workflows for data engineering teams
• Troubleshoot and resolve complex data pipeline, performance, and reliability issues
• Build and maintain Infrastructure-as-Code (IaC) using Terraform or similar tools
• Implement or enhance monitoring, alerting, and logging for data systems and pipelines
Qualifications
• 12+ years of experience in data engineering or related roles
• Strong proficiency in SQL, Python, and PySpark
• Hands-on experience with modern data platforms such as Spark, Databricks, Snowflake, BigQuery, or Redshift
• Experience working in cloud environments (AWS, Azure, and/or GCP)
• Solid understanding of data modeling, ETL/ELT patterns, and data governance
• Strong experience with Git-based CI/CD pipelines (GitHub, GitLab, or similar)
• Excellent analytical and problem-solving skills with a focus on performance optimization
• Experience with Terraform or other Infrastructure-as-Code frameworks is a plus
• Familiarity with observability, monitoring, and alerting tools






