Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Manhattan, NY, offering $70/hour for a contract exceeding 6 months. Requires 10+ years of experience in big data solutions, proficiency in Spark, Scala, AWS, SQL (Redshift), and advanced ETL/ELT skills.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
560
-
πŸ—“οΈ - Date discovered
September 27, 2025
πŸ•’ - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
1099 Contractor
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
New York, United States
-
🧠 - Skills detailed
#Deployment #Leadership #ETL (Extract, Transform, Load) #GIT #AWS (Amazon Web Services) #Version Control #Spark (Apache Spark) #Snowflake #Data Architecture #Cloud #Lambda (AWS Lambda) #Scala #Big Data #SQL (Structured Query Language) #Scripting #Python #Quality Assurance #SQS (Simple Queue Service) #Data Engineering #Documentation #AWS Glue #SNS (Simple Notification Service) #Monitoring #Redshift
Role description
Data Engineer (Onsite)
Job Title: Data Engineer
Location: Manhattan, NY (five days onsite, full-time)
Rate: $70 an hour, C2C or 1099

Overview
Our client is seeking a highly experienced Data Engineer to drive technical execution for enterprise-scale data engineering solutions. The successful candidate will bring deep expertise in big data architecture, hands-on development, and team leadership, with a strong commitment to onsite collaboration at their New York City office.

The Position
This role empowers the engineer to guide end-to-end technical design, solution development, and quality assurance across mission-critical projects. As the technical lead, the candidate will coordinate with business stakeholders, lead design reviews, and oversee the delivery of robust big data solutions in a fast-paced, collaborative environment. The position requires high proficiency in Spark, Scala, AWS services, and enterprise data platforms.

Your Profile
• Over 10 years of experience architecting, building, and deploying enterprise big data solutions
• Proven hands-on development expertise with Spark, Scala, Python, AWS Glue, Lambda, SNS/SQS, and CloudWatch
• Deep SQL skills, with a preference for Redshift; also experienced in Snowflake data warehousing
• Advanced knowledge of ETL/ELT processes, frameworks, and workflow optimization
• Skilled at technical documentation, including integration and application designs
• Track record of effective communication with management and business stakeholders
• Familiarity with version control (Git), CI/CD pipelines, and production support best practices
• Strong commitment to onsite team culture and knowledge sharing

What You’ll Do
• Lead technical design and development of scalable big data solutions
• Create and review integration/application documentation and participate in peer design reviews
• Develop, configure, test, and validate solution components through unit, performance, and standards-based testing
• Perform advanced code reviews and enforce technical best practices throughout the development lifecycle
• Troubleshoot and resolve complex production issues; tune and optimize environments for robust operation
• Support deployment processes and production stability through effective use of Git, CI/CD, and monitoring tools
• Uphold rigorous technical standards and champion continuous improvement across project phases

Qualifications
• Minimum of 10 years of enterprise data engineering experience, with a focus on Scala, Spark, and AWS technologies
• Strong hands-on skills with SQL (preferably Redshift) and Snowflake data platforms
• Advanced Python development and scripting capabilities
• Demonstrated success in designing and scaling ETL/ELT solutions
• Deep understanding of modern data architecture, integration, and performance optimization
• Proven ability to lead technical reviews, mentor peers, and deliver under tight deadlines
• Commitment to a five-day onsite work model in New York