

Resolve Tech Solutions
Senior Data Architect
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Architect with a 12+ month contract in Spring, TX, offering competitive pay. Key skills include Python, Snowflake, and Databricks. Candidates must have upstream oil and gas experience and a relevant bachelor's degree.
🌍 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 30, 2025
📅 - Duration
More than 6 months
-
🏢 - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Spring, TX
-
🧠 - Skills detailed
#Spotfire #Computer Science #Cloud #Python #Scala #ETL (Extract, Transform, Load) #Snowflake #Automation #Apache Iceberg #DevOps #Data Governance #Data Pipeline #AI (Artificial Intelligence) #Data Engineering #Databricks #Data Science #Azure #Datasets #Programming #Data Ingestion #Compliance #BI (Business Intelligence) #AWS (Amazon Web Services) #Data Lineage #Delta Lake #Data Architecture #Data Lakehouse #ML (Machine Learning) #SQL (Structured Query Language) #Data Lake #Microsoft Power BI
Role description
Job Title: Sr. Level Upstream Data Architect - Strong Python
Duration: 12+ Months
Location: Spring, TX
Description:
The Upstream Data Engineer will design, develop, and optimize enterprise data solutions that support drilling, reservoir engineering, completions, production optimization, and broader subsurface workflows. This role combines advanced data engineering expertise with deep functional knowledge of upstream oil and gas to enable high-quality analytics and accelerate operational decision making.
Key Responsibilities
• Architect, build, and maintain scalable data pipelines for drilling, reservoir, and production datasets leveraging Python and modern ELT/ETL frameworks
• Ingest, harmonize, and curate industry data sources such as WITSML, ProdML, LAS, SCADA historian data, seismic, well logs, and WellView/OpenWells datasets
• Design and implement robust data models in Snowflake and Databricks to support operational reporting, subsurface analytics, AI/ML, and reservoir engineering workflows
• Utilize open table formats such as Apache Iceberg to support efficient data lineage, versioning, governance, and incremental processing
• Collaborate with drilling, geoscience, and reservoir engineering stakeholders to translate business requirements into reusable technology solutions
• Apply orchestration, CI/CD, and DevOps practices to ensure reliability and automation across cloud environments
• Improve data product performance, availability, quality, and compliance aligned with upstream data governance standards and PPDM/O&G reference models
• Troubleshoot and support production data pipelines and ensure secure, optimized access to datasets
Required Qualifications
• Bachelor's degree in Petroleum Engineering, Computer Science, Data Engineering, or a related technical discipline
• Proven experience working directly within upstream oil and gas domains such as drilling operations, reservoir management, completions, or production engineering
• Strong Python programming skills and experience building reusable transformation frameworks
• Hands-on experience with Snowflake and Databricks, including Delta Lake or similar distributed processing capabilities
• Experience with open data lakehouse architectures and formats (Apache Iceberg preferred)
• Proficiency in SQL, cloud services (Azure or AWS), distributed compute concepts, and data ingestion frameworks
• Solid understanding of the well lifecycle, subsurface engineering concepts, and upstream operational KPIs
Preferred Skills
• Experience with Cognite Data Fusion for contextualization and integration of operational, engineering, and IT data to enable analytics and AI solutions
• Familiarity with the OSDU data platform or PPDM standards for upstream data governance
• Experience building analytics-ready datasets for data science and real-time operational decision support
• Knowledge of BI reporting tools such as Power BI or Spotfire used in E&P environments
• Exposure to real-time data ingestion from drilling rigs, control systems, or production operations