Data Engineer (Hybrid) – Glendale, AZ (Contract)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer (Hybrid) position in Glendale, AZ, on an estimated 12-month contract; the hourly rate (USD) is not disclosed. It requires a Bachelor's degree, 5+ years of experience, and expertise in big data platforms such as Azure or AWS.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 12, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Glendale, AZ
🧠 - Skills detailed
#Deployment #Azure Data Factory #Python #ADaM (Analysis Data Model) #Data Catalog #Data Lake #Storage #ADLS (Azure Data Lake Storage) #Data Governance #Data Modeling #AI (Artificial Intelligence) #Snowflake #Data Ingestion #Programming #Computer Science #Delta Lake #Batch #Azure #Azure ADLS (Azure Data Lake Storage) #Data Design #MLflow #Palantir Foundry #Spark (Apache Spark) #Keras #AzureML #TensorFlow #SQL (Structured Query Language) #Databricks #AWS (Amazon Web Services) #Data Engineering #Hadoop #ADF (Azure Data Factory) #Distributed Computing #Data Science #ML (Machine Learning) #ETL (Extract, Transform, Load) #PyTorch #Scala #Big Data #Data Pipeline #PySpark #Monitoring #Cloud
Role description
Overview
JAB Recruitment is actively seeking a Data Engineer to support one of our prestigious international energy clients at their Glendale, Arizona location. This is an exciting opportunity to work with a world-class team in a fast-paced, professional environment. To be successful in this role, you should have experience designing, developing, and deploying industry-leading data science and big data engineering solutions, using Artificial Intelligence (AI), Machine Learning (ML), and big data platforms and technologies, to increase efficiency in complex work processes and to enable and empower data-driven decision making, planning, and execution throughout the lifecycle of mega-EPC projects.
PLEASE NOTE:
• This is a contract position, estimated at 12 months.
• Hybrid role: 3 days in office required each week.
• Candidates must be authorized to work in the US indefinitely, without present or future need for visa sponsorship. No sponsorship is available.
Responsibilities
• Big data design and analysis; data modeling; and development, deployment, and operation of big data pipelines.
• Collaborate with a team of other data engineers, data scientists, and business subject matter experts to process data and prepare data sources for a variety of use cases, including predictive analytics, generative AI, and computer vision.
• Mentor other data engineers to develop a world-class data engineering team.
• Ingest, process, and model data from structured, unstructured, batch, and real-time sources using the latest techniques and technology stack (a minimal sketch of this kind of ingestion appears at the end of this description).
Minimum Requirements
• Bachelor's degree or higher in Computer Science (or an equivalent degree) and 5+ years of working experience.
• In-depth experience with a big data cloud platform such as Azure, AWS, Snowflake, or Palantir.
• Strong grasp of programming languages (Python, Scala, SQL, Pandas, PySpark, or equivalent) and a willingness to learn new ones; strong understanding of structuring code for testability.
• Experience writing database-heavy services or APIs.
• Strong hands-on experience building and optimizing scalable data pipelines, complex transformations, architectures, and data sets with Databricks or Spark, Azure Data Factory, and/or Palantir Foundry for data ingestion and processing.
• Proficiency in distributed computing frameworks, including familiarity with drivers, executors, and data partitions in Hadoop or Spark.
• Working knowledge of queueing, stream processing, and highly scalable data stores such as Hadoop, Delta Lake, and Azure Data Lake Storage (ADLS).
• Deep understanding of data governance, access control, and secure view implementation.
• Experience in workflow orchestration and monitoring.
• Experience working with and supporting cross-functional teams.
Preferred Qualifications
• Experience with schema evolution, data versioning, and Delta Lake optimization.
• Exposure to data cataloging solutions in Foundry Ontology.
• Professional experience implementing complex ML architectures in popular frameworks such as TensorFlow, Keras, PyTorch, scikit-learn, and CNTK.
• Professional experience implementing and maintaining MLOps pipelines in MLflow or AzureML.
JAB Recruitment is an equal opportunity employer. Qualified applicants are considered for positions without regard to race, color, religion, sex, national origin, age, citizenship status, marital status, medical condition, physical or mental disability, or any other legally protected status. EOE/M/F/D/V #LI-DNI
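To give a concrete flavor of the ingestion work described under Responsibilities, here is a minimal PySpark sketch that reads a batch CSV source and lands it in a Delta Lake table. It is an illustration only, not the client's actual stack: the paths, mount points, and column conventions are hypothetical, and it assumes a Databricks-style environment where the delta format is available.

```python
# Minimal, illustrative sketch only -- assumes a Databricks-style environment
# with Delta Lake available; all paths and names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-ingest-sketch").getOrCreate()

# Ingest a raw batch source (CSV here; JSON, Parquet, or a stream work similarly).
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/raw/project_events/")  # hypothetical ADLS mount point
)

# Light cleanup: normalize column names and stamp each row with an ingest date.
cleaned = (
    raw.toDF(*[c.strip().lower().replace(" ", "_") for c in raw.columns])
       .withColumn("ingest_date", F.current_date())
)

# Land the result in a Delta table, partitioned by ingest date for
# downstream analytics and incremental processing.
(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("/mnt/curated/project_events/")  # hypothetical curated zone
)
```

On Databricks the delta format is available out of the box; on plain Spark it typically requires the delta-spark package and the matching session configuration before format("delta") resolves.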