Data Engineer - Human Behavior and Carbon Neutrality

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer - Human Behavior and Carbon Neutrality for a 9-month contract, paying $79.95/hour. Requires a Bachelor's degree, 4+ years in cloud data pipelines (Databricks preferred), expertise in SQL, Python, and AWS. Remote work available.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
June 5, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Los Altos, CA
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Time Series #Security #NoSQL #Python #Cloud #R #TensorFlow #AI (Artificial Intelligence) #Kubernetes #Data Integrity #Databricks #Scala #PyTorch #BI (Business Intelligence) #Docker #Deep Learning #Computer Science #Data Engineering #Data Security #Datasets #AWS (Amazon Web Services) #ML (Machine Learning) #Data Pipeline #Java #Data Quality
Role description
Expected compensation: $79.95 USD per hour

HireArt is helping TRI hire a Data Engineer - Human Behavior and Carbon Neutrality who is interested in the overlap between technology and the behavioral sciences, to architect and optimize cloud-based infrastructure supporting research objectives in the Carbon Neutrality Department of the Human-Centered AI Division. In this role, you will design, develop, and maintain end-to-end data pipelines in Databricks for processing and analyzing large-scale datasets, and build and optimize cloud-based infrastructure to support generative AI methods, ensuring scalability, reliability, and performance. We're looking for someone with a Bachelor's degree or higher in Computer Science, Engineering, or a related field, and 4+ years of experience designing, building, and maintaining data pipelines in a cloud environment, preferably using Databricks.

As a Data Engineer - Human Behavior and Carbon Neutrality, you will:
• Design, develop, and maintain end-to-end data pipelines in Databricks for processing and analyzing large-scale datasets.
• Build and optimize cloud-based infrastructure to support generative AI methods, ensuring scalability, reliability, and performance.
• Collaborate closely with research scientists, software developers, machine learning engineers, and domain experts to understand data requirements and design effective solutions.
• Implement best practices for data quality, data integrity, and data security throughout the pipeline.
• Incorporate multi-modal data techniques (e.g., vision, language, and time series) into data pipelines to enable AI model development.
• Monitor and troubleshoot data pipelines, identifying and resolving performance bottlenecks and optimizing resource utilization.
• Stay up to date with the latest advancements in cloud technologies, generative AI frameworks, and data engineering techniques.
Requirements
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• 4+ years of experience designing, building, and maintaining data pipelines in a cloud environment, preferably using Databricks
• Strong proficiency in cloud technologies, especially AWS
• Expertise in SQL and Python
• Experience building and maintaining live dashboards to monitor user behavior, product stability, and model performance
• A track record of working with human behavior data and the ability to derive actionable insights from it is a plus
• Strong problem-solving skills, attention to detail, and the ability to work effectively in a collaborative, cross-functional team environment
• Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders

Bonus Qualifications
• Expertise in AI/ML frameworks such as TensorFlow or PyTorch, with hands-on experience deploying and optimizing deep learning models
• Familiarity with AI model training and fine-tuning techniques, including an understanding of the challenges associated with multi-modal data
• Proficiency in additional languages such as R, Scala, or Java
• Experience with containerization and orchestration tools like Docker and Kubernetes
• Experience with NoSQL, change data capture techniques, and BI tools

Benefits
• PTO and paid holidays
• Pre-tax commuter benefits
• Employer (HireArt) subsidized healthcare benefits
• Flexible Spending Account for healthcare-related costs
• HireArt covers all costs for short- and long-term disability and life insurance
• 401(k) package

Commitment: This is a full-time, 9-month contract position staffed via HireArt. The position is remote, with preference given to candidates based in the Bay Area who are open to working onsite in Los Altos, CA. HireArt values diversity and is an Equal Opportunity Employer.
We are interested in every qualified candidate who is eligible to work in the United States. Unfortunately, we are not able to sponsor visas or employ corp-to-corp.