Data Engineer (AI Solutions/Gen AI), W2 Candidates Only

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (AI Solutions) on a remote contract basis, starting ASAP; the pay rate is not specified. Key requirements include 6+ years of software development experience, proficiency in Python or Java, and experience with GCP and ETL.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
πŸ—“οΈ - Date discovered
September 17, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
United States
🧠 - Skills detailed
#Project Management #Data Integrity #MongoDB #GIT #Data Pipeline #ML (Machine Learning) #Data Engineering #Cloud #Scala #ETL (Extract, Transform, Load) #PostgreSQL #Azure #Version Control #Data Science #Redis #Databricks #AI (Artificial Intelligence) #MySQL #GCP (Google Cloud Platform) #Data Processing #Java #Python #AWS (Amazon Web Services)
Role description
Role: Data Engineer (AI Solutions)
Location: Remote
Type: Contract
Start Date: ASAP
Client: Recruit22 (Remotely, Anywhere)

Sphere partners with clients to transform their organizations, embed technology and process into everything they do, and enable lasting competitive advantage. We combine global expertise and local insight to help people and companies turn their ambitious goals into reality. At Sphere we put people first and strive to be a changemaker by building a better future through innovation and technology. Sphere is helping a known multinational company innovate and bring new platforms to market, and is looking for a Data Engineer to join our team.

Responsibilities:
• Data Pipeline Development: Build, maintain, and optimize robust data pipelines to ensure efficient data flow and accurate transformation across systems. Manage and implement transformation rules to support data integrity and business requirements.
• Technical Implementation: Develop and deploy data solutions using Python and Java, leveraging Databricks and Databricks Notebooks to handle large-scale data processing and analytics tasks. Ensure code quality and scalability in all development activities. (An illustrative sketch of this kind of work follows the requirements list below.)
• Project Management: Deliver projects within established timelines, managing multiple assignments simultaneously. Prioritize tasks effectively to meet deadlines without compromising quality.
• Collaboration and Communication: Work closely with data scientists, analysts, and other engineering teams to support data-driven initiatives. Communicate project progress, challenges, and solutions clearly with clients and team members to ensure alignment and transparency.
• Problem-Solving and Innovation: Analyze project requirements to develop innovative solutions independently. Address technical challenges proactively, ensuring seamless project execution with minimal oversight.
• Cloud Platform Utilization: Use cloud platforms such as GCP, AWS, or Azure to design, implement, and manage data infrastructure. Leverage cloud services to enhance data processing capabilities and support scalable solutions.

Requirements:
• Experience building and managing data pipelines.
• 6+ years of software development with a focus on data engineering.
• Proficiency in Python or Java.
• Experience with GCP.
• Experience with ETL.
• Knowledge of MySQL, PostgreSQL, MongoDB, or Redis.
• Familiarity with version control systems such as Git.
• Experience with Databricks and Databricks Notebooks.
• Experience with AI, generative AI, and ML.
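For context only, here is a minimal PySpark sketch of the kind of extract-transform-load pipeline work the responsibilities describe. It is not part of the posting; the table names, columns, and transformation rule are hypothetical assumptions chosen for illustration.

```python
# Minimal ETL sketch in PySpark (runs on Databricks or any Spark environment).
# All table and column names below are hypothetical examples, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: load raw records from a (hypothetical) source table.
raw = spark.read.table("raw.events")

# Transform: apply a simple business rule -- normalize country codes
# and drop rows that fail a basic integrity check.
curated = (
    raw.withColumn("country", F.upper(F.col("country")))
       .filter(F.col("event_id").isNotNull())
)

# Load: write the curated result back as a managed table.
curated.write.mode("overwrite").saveAsTable("curated.events")
```

In practice the same pattern would typically live in a Databricks Notebook or job, with the transformation rules and target tables supplied by the client's requirements rather than hard-coded as above.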