BCforward

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in New York, NY, on a W2 contract for an unspecified duration, offering competitive pay. Key skills include ETL development, Snowflake, PySpark, and AWS. Experience with data lakes and workflow automation is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Snowflake #PySpark #ETL (Extract, Transform, Load) #Python #Cloud #AWS (Amazon Web Services) #Data Analysis #Datasets #S3 (Amazon Simple Storage Service) #Data Engineering #AWS S3 (Amazon Simple Storage Service) #Java #Programming #Spark (Apache Spark) #Consulting #Airflow #Data Lake #Data Modeling #Data Ingestion #Data Pipeline #Scala #Automation #Data Quality #Data Processing
Role description
About BCforward:
BCforward is a leading global IT consulting and workforce solutions firm providing services and support to Fortune 500 and government clients. Founded in 1998, BCforward has grown with our customers' needs into a full-service business solutions provider. With delivery centers and offices across North America and India, we take pride in building long-term relationships and delivering excellence through innovation, collaboration, and integrity.

Presently we are unable to sponsor, and we ask that only applicants authorized to work without sponsorship apply.

Employment Type: Contract W2 – no C2C
Job Title: Data Engineer
Location: New York, NY – 5 days onsite

Job Summary:
We are seeking an Intermediate Data Engineer to design, develop, and maintain scalable data pipelines supporting our data lake and Snowflake environments. The ideal candidate will have strong experience in ETL development, data processing, and workflow automation, enabling efficient data analysis and aggregation.

Key Responsibilities:
• Design, build, and manage ETL pipelines for data ingestion into the data lake and Snowflake.
• Develop scalable data processing solutions using PySpark and Spark (Java/Python).
• Automate data workflows and scheduling using tools such as Airflow and Control-M.
• Perform data transformation, aggregation, and optimization for analytics use cases.
• Collaborate with cross-functional teams to support data-driven decision-making.
• Ensure data quality, integrity, and performance across large datasets.
• Monitor and troubleshoot data pipelines and workflows.

Required Skills & Qualifications:
• Hands-on experience with data lake architectures and AWS services.
• Strong experience in ETL pipeline development and data processing.
• Proficiency in Snowflake for data warehousing.
• Experience with workflow orchestration tools such as Airflow and/or Control-M.
• Strong programming skills in PySpark and/or Java Spark.
• Understanding of data modeling, transformation, and aggregation techniques.
• Familiarity with large-scale data environments and distributed systems.

Preferred Qualifications:
• Experience with cloud-native data solutions on AWS (S3, Glue, EMR, etc.).
• Exposure to CI/CD and automation in data engineering workflows.
• Knowledge of performance tuning and optimization in Snowflake/Spark.

Why BCforward?
At BCforward, we believe in advancing lives and careers. When you join our team, you gain access to:
• Competitive compensation and benefits
• Opportunities for growth with global clients
• A supportive, inclusive culture that values innovation and people
• Exposure to cutting-edge technologies and projects

About Our Commitment:
BCforward is an equal opportunity employer. We value diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation, gender identity, national origin, age, disability, or veteran status.