Cloud Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Cloud Data Engineer in New York, NY, on a 13-month hybrid contract paying up to $82/hr on W2. It requires 4+ years of experience in big data, Python, SQL, AWS, and data engineering practices; a Bachelor's degree in a STEM field is mandatory.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
656
πŸ—“οΈ - Date discovered
September 5, 2025
πŸ•’ - Project duration
More than 6 months
🏝️ - Location type
Hybrid
πŸ“„ - Contract type
W2 Contractor
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
New York, NY
🧠 - Skills detailed
#Data Pipeline #Jenkins #Datasets #DynamoDB #PostgreSQL #Python #Data Architecture #Metadata #Monitoring #Spark (Apache Spark) #Data Wrangling #Snowflake #Data Engineering #SQL (Structured Query Language) #PySpark #Lambda (AWS Lambda) #Big Data #Cloud #ETL (Extract, Transform, Load) #S3 (Amazon Simple Storage Service) #Athena #Batch #Data Catalog #Delta Lake #BI (Business Intelligence) #MySQL #Logging #Redshift #AWS (Amazon Web Services) #Data Management #Data Governance #SageMaker #Data Transformations #ML (Machine Learning) #Databricks #Data Lineage
Role description
City: NYC, NY
Onsite/Hybrid/Remote: Hybrid
Duration: 13 months
Rate Range: Up to $82/hr on W2, depending on experience (no C2C, 1099, or sub-contract)
Work Authorization: GC, USC, all valid EADs except OPT, CPT, and H1B

Must Have:
• 4+ years of experience in big data and/or data-intensive projects
• 4+ years of hands-on Python development
• Expert-level SQL development (Redshift, PostgreSQL, MySQL, etc.)
• Strong experience with PySpark
• Experience with AWS services: Redshift, S3, DynamoDB, SageMaker, Athena, Lambda
• Experience with Databricks, Snowflake, and Jenkins
• Strong knowledge of data engineering practices: data pipelines, ETL, data governance, metadata management, and data lineage
• Experience with APIs, data wrangling, and advanced data transformations

Responsibilities:
• Design, build, and maintain batch and real-time data pipelines integrating first-, second-, and third-party data sources
• Lead the design and evolution of the BI Delta Lake infrastructure to support analytics and reporting
• Develop data catalogs, validation routines, error logging, and monitoring solutions for high-quality datasets
• Build integrations with marketing, media, and subscription platforms to optimize KPIs
• Partner with the Data Architect to enable attribution, segmentation, and activation capabilities across business teams
• Collaborate with product, lifecycle, and marketing teams to democratize insights and improve engagement through data-driven solutions
• Coach engineers and BI team members on best practices for building large-scale, governed data platforms

Qualifications:
• Bachelor’s degree in a STEM field (required)
• Excellent communicator and collaborator, able to connect technical solutions with business outcomes
• Demonstrated ability to deliver solutions under evolving data conditions
• Strong problem-solving and analytical skills with intellectual curiosity
• Preferred: experience with marketing technology stacks, CDPs (mParticle, Hightouch), ML platform integrations, experimentation frameworks, and front-end/full-stack development
• Familiarity with binary serialization formats (Parquet, Avro, Thrift) is a plus