

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Glendale, AZ (Hybrid - 3 days onsite) for 12 months, offering a competitive pay rate. Key skills include big data platforms (Azure, AWS), Python, SQL, and experience with Databricks or Spark.
Country
United States
Currency
$ USD
Day rate
600
Date discovered
August 12, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Glendale, AZ
Skills detailed
#Deployment #Azure Data Factory #Python #ADaM (Analysis Data Model) #Data Catalog #Data Lake #Storage #ADLS (Azure Data Lake Storage) #Data Governance #Data Modeling #AI (Artificial Intelligence) #Snowflake #Data Ingestion #Programming #Computer Science #Delta Lake #Batch #Azure #Azure ADLS (Azure Data Lake Storage) #Data Design #MLflow #Palantir Foundry #Spark (Apache Spark) #Keras #AzureML #TensorFlow #SQL (Structured Query Language) #Databricks #AWS (Amazon Web Services) #Data Engineering #Hadoop #ADF (Azure Data Factory) #Distributed Computing #Data Science #ML (Machine Learning) #ETL (Extract, Transform, Load) #PyTorch #Scala #Big Data #Data Pipeline #PySpark #Monitoring #Cloud
Role description
Job Title: Data Engineer
Location: Glendale, AZ (Hybrid - 3 days onsite)
Duration: 12 Months
Summary:
The client is seeking a talented and ambitious data engineer to join the team in designing, developing, and deploying industry-leading data science and big data engineering solutions. The role applies Artificial Intelligence (AI), Machine Learning (ML), and big data platforms and technologies to streamline complex work processes and to enable data-driven decision making, planning, and execution throughout the lifecycle of mega-EPC projects.
Who you are:
• You yearn to be part of groundbreaking projects and cutting-edge research that deliver world-class solutions on schedule
• You are motivated to find opportunity in, and develop solutions for, evolving challenges; you are passionate about your craft and driven to deliver exceptional results
• You love to learn new technologies and mentor junior engineers to raise the bar on your team
• You are imaginative and curious about intuitive user interfaces, as well as new and emerging concepts and techniques
Job Responsibilities:
• Design and analyze big data solutions, including data modeling and the development, deployment, and operation of big data pipelines
• Collaborate with a team of other data engineers, data scientists, and business subject matter experts to process data and prepare data sources for a variety of use cases, including predictive analytics, generative AI, and computer vision
• Mentor other data engineers to develop a world-class data engineering team
• Ingest, process, and model data from structured, unstructured, batch, and real-time sources using the latest techniques and technology stack (a minimal batch sketch follows this list)
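As a concrete illustration of the batch side of these responsibilities, here is a minimal PySpark sketch that reads raw CSV files, deduplicates and stamps them, and lands the result as a partitioned Delta table. It assumes a Databricks-style Spark runtime with Delta Lake available; the paths, column names, and table name are hypothetical placeholders, not details from this posting.

```python
# Minimal batch-ingestion sketch (hypothetical paths, columns, and table).
# Assumes a Spark runtime with Delta Lake configured (e.g., Databricks).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-ingest-sketch").getOrCreate()

# Read raw CSV files from an assumed landing zone.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/sensor_readings/")
)

# Light cleanup: deduplicate on an assumed key, drop null readings,
# and stamp an ingest date to use as the partition column.
cleaned = (
    raw.dropDuplicates(["reading_id"])
       .filter(F.col("reading_value").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Append into a partitioned Delta table (the bronze schema is assumed to exist).
(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("bronze.sensor_readings")
)
```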
Basic Qualifications:
• Bachelor's degree or higher in Computer Science or an equivalent field, plus 5+ years of working experience
• In-depth experience with a big data cloud platform such as Azure, AWS, Snowflake, or Palantir
• Strong grasp of programming languages and libraries (Python, Scala, SQL, Pandas, PySpark, or equivalent) and a willingness to learn new ones; strong understanding of structuring code for testability
• Experience writing database-heavy services or APIs
• Strong hands-on experience building and optimizing scalable data pipelines, complex transformations, architectures, and data sets with Databricks or Spark, Azure Data Factory, and/or Palantir Foundry for data ingestion and processing
• Proficiency in distributed computing frameworks, including familiarity with drivers, executors, and data partitions in Hadoop or Spark
• Working knowledge of queueing, stream processing, and highly scalable data stores such as Hadoop, Delta Lake, and Azure Data Lake Storage (ADLS); a streaming sketch follows this list
• Deep understanding of data governance, access control, and secure view implementation
• Experience in workflow orchestration and monitoring
• Experience working with and supporting cross-functional teams
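To make the stream-processing requirement concrete, here is a hedged Structured Streaming sketch that consumes JSON events from a message queue and streams them into a Delta table. Kafka is an assumed queue (the posting names only "queueing" and "stream processing"), and the broker address, topic, payload schema, checkpoint path, and table name are all hypothetical; it also assumes the Spark Kafka connector and Delta Lake are on the classpath.

```python
# Structured Streaming sketch: Kafka topic -> parsed JSON -> Delta table.
# Broker, topic, schema, checkpoint path, and table name are assumptions.
# Requires the spark-sql-kafka connector and Delta Lake (e.g., Databricks).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("stream-ingest-sketch").getOrCreate()

# Shape of the JSON payload on the topic (assumed).
payload_schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading_value", DoubleType()),
])

# Subscribe to the topic and parse each message's value as JSON.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), payload_schema).alias("e"))
    .select("e.*")
)

# Stream into a Delta table, with checkpointing for restart safety.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/sensor-events")
    .outputMode("append")
    .toTable("silver.sensor_events")
)

query.awaitTermination()
```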
Preferred Qualifications:
• Experience with schema evolution, data versioning, and Delta Lake optimization
• Exposure to data cataloging solutions in Foundry Ontology
• Professional experience implementing complex ML architectures in popular frameworks such as TensorFlow, Keras, PyTorch, scikit-learn, and CNTK
• Professional experience implementing and maintaining MLOps pipelines in MLflow or AzureML (a minimal tracking sketch follows)
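For the MLOps bullet, here is a minimal MLflow tracking sketch: it fits a small scikit-learn model, then logs a parameter, a metric, and the model artifact under an experiment. The experiment name, model choice, and synthetic data are illustrative assumptions, and it presumes a reachable MLflow tracking backend (local files by default).

```python
# Minimal MLflow tracking sketch (hypothetical experiment and model).
# Uses the default local tracking backend unless MLFLOW_TRACKING_URI is set.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Synthetic data stands in for a real feature table.
X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)

mlflow.set_experiment("ridge-demo")

with mlflow.start_run():
    alpha = 0.5
    model = Ridge(alpha=alpha).fit(X, y)

    # Log the hyperparameter, a training metric, and the model artifact.
    mlflow.log_param("alpha", alpha)
    mlflow.log_metric("train_r2", r2_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")
```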