

Skysoft Inc.
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 8–12+ years of experience, onsite in NYC for a 6-month contract with possible extension. Key skills include AWS services, Python/Java, SQL, Apache Spark, Snowflake, and container orchestration. Open to USC and GC holders only (GC-EAD accepted).
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
🗓️ - Date
April 24, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Big Data #Data Pipeline #Data Quality #AWS (Amazon Web Services) #Data Engineering #Data Modeling #Data Lake #Data Governance #Snowflake #HBase #GitHub #Agile #Terraform #GIT #Lambda (AWS Lambda) #Scala #SQL (Structured Query Language) #TigerGraph #Infrastructure as Code (IaC) #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #Apache Spark #Apache Airflow #Scrum #Airflow #Kubernetes #Python #Security #Storage #Databases #Java #Cloud #Batch #Data Science #Version Control #DevOps #Spark (Apache Spark) #Data Warehouse #Data Processing #Graph Databases #Kafka (Apache Kafka) #Apache Kafka #SQL Queries #Datasets
Role description
Job Title
Senior Data Engineer (AWS / Big Data / Streaming)
Location
NYC local - 5 days onsite
USC & GC, GC-EAD only
Duration
6 months + Extension
Job Description / Roles & Responsibilities
Key Responsibilities
· Design, develop, and maintain scalable data pipelines using AWS services such as Glue, Lambda, Step Functions, and EventBridge
· Build and manage data lakes and storage solutions using Amazon S3
· Develop and optimize real-time and batch data processing pipelines using Kinesis, Kafka, and Spark
· Implement workflow orchestration using Apache Airflow and AWS Step Functions (a minimal Airflow sketch follows this list)
· Work with containerized environments using ECS, EKS, and Kubernetes
· Design and maintain data models for analytics and reporting in Snowflake and other data warehouses
· Develop and optimize SQL queries for large-scale datasets
· Build and maintain graph-based data solutions using TigerGraph and other graph databases
· Monitor system performance and troubleshoot issues using CloudWatch
· Collaborate with cross-functional teams including Data Scientists, Analysts, and DevOps engineers
· Automate infrastructure provisioning using Terraform (Infrastructure as Code)
· Ensure data quality, governance, and security best practices
· Utilize Python and Java for data processing and backend development
· Leverage tools like GitHub Copilot to enhance development productivity
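For the orchestration responsibility above, here is a minimal sketch of an Airflow DAG triggering an AWS Glue job. The DAG name, schedule, region, and the Glue job name daily_sales_etl are hypothetical, and GlueJobOperator assumes the apache-airflow-providers-amazon package is installed.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    run_glue_etl = GlueJobOperator(
        task_id="run_glue_etl",
        job_name="daily_sales_etl",  # hypothetical, pre-existing Glue job
        region_name="us-east-1",
        wait_for_completion=True,    # block the task until the Glue run finishes
    )
```

Step Functions could play the same coordinating role; Airflow is shown only because both appear in this listing.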
Required Skills & Qualifications
· Strong hands-on experience with AWS services: Glue, Lambda, S3, EventBridge, Step Functions, Kinesis, CloudWatch
· Experience with containerization & orchestration: Kubernetes, ECS, EKS
· Proficiency in Python and/or Java
· Strong experience with Apache Spark for distributed data processing
· Expertise in SQL and data modeling
· Hands-on experience with Snowflake or similar data warehouse
· Experience with streaming platforms like Apache Kafka and Kinesis (see the streaming sketch after this list)
· Experience with workflow orchestration tools such as Apache Airflow
· Knowledge of Infrastructure as Code (Terraform)
· Experience with graph databases (e.g., TigerGraph)
· Understanding of CI/CD pipelines and version control (Git)
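Several of the required skills (Spark, Kafka, S3, SQL over large datasets) meet in a typical streaming job. The sketch below reads JSON events from Kafka with Spark Structured Streaming and lands them as Parquet on S3; the broker, topic, bucket, and schema are hypothetical, and running it also requires the spark-sql-kafka connector package on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-s3").getOrCreate()

# Hypothetical event schema; real payloads would be agreed with producers.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical brokers
    .option("subscribe", "user-events")                # hypothetical topic
    .load()
    # Kafka delivers bytes; cast to string and parse the JSON payload.
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-data-lake/events/")  # hypothetical bucket
    .option("checkpointLocation", "s3a://example-data-lake/checkpoints/events/")
    .trigger(processingTime="1 minute")  # micro-batch every minute
    .start()
)
query.awaitTermination()
```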
Preferred Qualifications
· Experience building real-time analytics systems (see the Kinesis sketch after this list)
· Knowledge of data governance and security frameworks
· Exposure to machine learning pipelines is a plus
· AWS certifications (e.g., AWS Certified Data Analytics or Solutions Architect)
· Experience working in Agile/Scrum environments
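As a small illustration of the real-time analytics point above, a producer can push events into a Kinesis stream with boto3; the stream name, region, and payload shape below are hypothetical.

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict) -> None:
    """Write one event; records sharing a PartitionKey land on the same shard."""
    kinesis.put_record(
        StreamName="clickstream-events",         # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),  # Kinesis expects bytes
        PartitionKey=event["user_id"],
    )

publish_event({"user_id": "u-123", "event_type": "page_view"})
```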
Experience Required
· 8–12+ years of experience in Data Engineering / Big Data / Cloud Data Platforms