

Integrated Resources, Inc ( IRI )
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Richmond, VA (Hybrid) for 12+ months at $60/hr. Requires 5-7 years of experience, expertise in SQL, ELT, Oracle Exadata, Snowflake, and data pipeline design. Bachelor's degree in Computer Science or related field is mandatory.
Country: United States
Currency: $ USD
Day rate: 480
Date: May 16, 2026
Duration: More than 6 months
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: Richmond, VA
Skills detailed: #Snowflake #Dremio #Indexing #Virtualization #Mathematics #Oracle #Apache Kafka #Cloud #Informatica #Data Quality #Spark (Apache Spark) #SQL (Structured Query Language) #Data Cleaning #Kafka (Apache Kafka) #Data Modeling #Scala #Data Pipeline #GCP (Google Cloud Platform) #Code Reviews #Documentation #Observability #Oracle Exadata #Data Lineage #Big Data #ML (Machine Learning) #Data Engineering #Data Warehouse #Computer Science #Azure #Scripting #Monitoring #Metadata #Batch #Talend #AWS (Amazon Web Services) #dbt (data build tool) #Python #Security #DataOps #ETL (Extract, Transform, Load) #AI (Artificial Intelligence)
Role description
Job Title: Data Engineer
Job Location: Richmond, VA (Hybrid: 1 week Onsite / 1 week Remote)
Job Duration: 12+ Months (Possibility of extension)
Pay rate: $60/hr on W2
Note: Candidate must be willing to work on W2.
Job Description:
Job Summary:
• The Senior Data Engineer is a hands-on expert and technical leader, actively engaged in designing, building, and optimizing scalable, reliable data pipelines at an enterprise level. This role not only guides architectural decisions but also directly implements advanced ELT solutions, troubleshoots complex data challenges, and ensures best practices through practical, high-impact contributions.
• This role combines deep hands-on expertise with technical ownership, mentoring, and architectural alignment. The Senior Data Engineer drives and implements data engineering best practices, ensures high standards for quality and security, and partners with architecture teams to keep solutions aligned with enterprise standards.
Key Responsibilities:
• Build end-to-end data pipelines and ETL/ELT solutions to support analytics, reporting, and AI/ML use cases, ensuring solutions are robust and production-ready through practical implementation.
• Apply scalable patterns for batch and incremental processing by developing, testing, and deploying data workflows, focusing on hands-on coding and troubleshooting.
• Review and implement data modeling, transformation logic, and performance strategies, using deep technical expertise to optimize and validate solutions.
• Evaluate, select, and integrate tooling, frameworks, and platform capabilities by actively prototyping and configuring systems to meet project requirements.
• Build complex, high-volume data pipelines using SQL-centric ETL/ELT patterns (an incremental-load sketch follows this list).
• Design and implement scalable streaming pipelines to process real-time data, ensuring low latency and reliable delivery for analytics and operational use cases.
• Lead performance tuning efforts across pipelines, warehouses, and workloads.
• Ensure data pipelines are resilient, observable, and production-ready.
• Implement enterprise-grade error handling, restartability, and monitoring.
• Build and maintain scalable, low-latency streaming data pipelines using technologies such as Kafka, Kinesis, or Spark Streaming (a streaming sketch also follows this list).
• Perform on-the-fly data cleaning, validation, and enrichment before data reaches its final destination.
• Use strategies such as indexing and partitioning to fine-tune data warehouse and big data environments, improving query response time and scalability.
• Implement standards for data quality checks, validation, and reconciliation.
• Ensure pipelines meet security, access control, and governance requirements.
• Partner with governance & DataOps teams on metadata, lineage, and auditability.
• Apply consistent naming conventions, documentation, and coding standards.
• Improve operational monitoring, alerting, and incident response processes.
• Proactively identify reliability, performance, and cost optimization opportunities.
• Support and guide production troubleshooting and root cause analysis.
• Investigate data quality incidents and identify design/coding gaps.
• Participate in design and code reviews to enforce quality and best practices.
• Partner with infrastructure teams, application teams, and architects to design processes and complex transformations of data elements that give the business insight into its processes.
• Translate ambiguous requirements into well-designed technical solutions.
• Work in complex multi-platform environments on multiple project assignments.
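As a concrete illustration of the SQL-centric incremental pattern called out above, here is a minimal sketch in Python using the snowflake-connector-python package. The STG.ORDERS and DW.ORDERS table names, their columns, and the LAST_MODIFIED watermark are hypothetical placeholders, not part of this role's actual environment.

```python
# Minimal incremental-load sketch (assumptions: snowflake-connector-python,
# hypothetical STG.ORDERS / DW.ORDERS tables with a LAST_MODIFIED watermark).
import snowflake.connector

MERGE_SQL = """
MERGE INTO DW.ORDERS AS tgt
USING (
    SELECT *
    FROM STG.ORDERS
    WHERE LAST_MODIFIED > %(watermark)s   -- only rows changed since last run
) AS src
ON tgt.ORDER_ID = src.ORDER_ID
WHEN MATCHED THEN UPDATE SET
    tgt.STATUS = src.STATUS,
    tgt.AMOUNT = src.AMOUNT,
    tgt.LAST_MODIFIED = src.LAST_MODIFIED
WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS, AMOUNT, LAST_MODIFIED)
    VALUES (src.ORDER_ID, src.STATUS, src.AMOUNT, src.LAST_MODIFIED);
"""

def run_incremental_load(conn) -> None:
    with conn.cursor() as cur:
        # Read the high-water mark left by the previous run so the job is
        # restartable: rerunning after a failure reprocesses nothing extra.
        cur.execute("SELECT MAX(LAST_MODIFIED) FROM DW.ORDERS")
        watermark = cur.fetchone()[0] or "1970-01-01"
        cur.execute(MERGE_SQL, {"watermark": watermark})

if __name__ == "__main__":
    conn = snowflake.connector.connect()  # credentials via env/config file
    try:
        run_incremental_load(conn)
    finally:
        conn.close()
```

Because the MERGE is idempotent and keyed off a watermark, the same pattern covers both batch backfills and incremental runs.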
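Similarly, the on-the-fly cleaning, validation, and enrichment step above might look like the following minimal sketch using the kafka-python client. The topic names, required fields, and enrichment logic are all hypothetical stand-ins, not the client's actual design.

```python
# Minimal streaming validation/enrichment sketch (assumptions: kafka-python
# client, hypothetical topics orders.raw / orders.clean / orders.deadletter).
import json
from kafka import KafkaConsumer, KafkaProducer

REQUIRED_FIELDS = {"order_id", "amount", "currency"}

consumer = KafkaConsumer(
    "orders.raw",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    enable_auto_commit=False,  # commit only after the event is handled
    group_id="orders-cleaner",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for record in consumer:
    event = record.value
    if not REQUIRED_FIELDS.issubset(event):
        # Invalid events go to a dead-letter topic for later investigation.
        producer.send("orders.deadletter", event)
    else:
        # Lightweight enrichment before the event reaches its destination;
        # the FX conversion here is a placeholder.
        event["amount_usd"] = round(event["amount"] * 1.0, 2)
        producer.send("orders.clean", event)
    consumer.commit()  # advance offsets once the event is handed off
```

Routing bad records to a dead-letter topic rather than dropping them is what makes the later data-quality investigation work in this list possible.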
Required Skills & Experience:
• Advanced expertise in SQL, ELT patterns, and performance tuning.
• Strong experience with Oracle Exadata, Snowflake, or similar cloud/on-prem data warehouses.
• Hands-on experience with enterprise ETL/ELT platforms (e.g., Talend, dbt, Informatica).
• Deep understanding of data warehousing architecture and dimensional modeling.
• Experience designing and supporting large-scale, production data pipelines.
• Strong scripting experience (Python, shell).
• Experience with data virtualization tools (e.g., Denodo, Composite, Dremio, Starburst).
• Experience with DataOps practices, CI/CD, and observability (a minimal reconciliation check is sketched after this list).
• 5 to 7+ years of Data Engineering experience required.
• ETL development and process support; may require weekend/off-hours work.
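To make the DataOps and observability point concrete, here is one minimal sketch of the kind of post-load reconciliation check such pipelines typically run. The table names and tolerance are hypothetical, and `run_query` stands in for whatever database client the stack provides; a production version would emit a metric or page on-call rather than raise.

```python
# Minimal post-load reconciliation sketch (hypothetical STG/DW table names;
# `run_query` is an assumed callable returning rows as sequences).
def reconcile_row_counts(run_query, tolerance: float = 0.0) -> None:
    src = run_query("SELECT COUNT(*) FROM STG.ORDERS")[0][0]
    tgt = run_query("SELECT COUNT(*) FROM DW.ORDERS")[0][0]
    drift = abs(src - tgt) / max(src, 1)
    if drift > tolerance:
        # In production this would feed monitoring/alerting instead.
        raise RuntimeError(
            f"Row-count reconciliation failed: src={src}, tgt={tgt}, "
            f"drift={drift:.2%} (tolerance={tolerance:.2%})"
        )
```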
Key Behavioural Expectations ("How We Work"):
• Delivering at Pace: Acts with urgency to meet deadlines while maintaining quality, prioritizing workload effectively under pressure.
• Collaborative Teamwork: Actively supports colleagues, shares knowledge freely, and treats others with dignity and respect.
• Effective Communication: Tailors communication style to the audience, ensuring clarity and transparency in both successes and challenges.
• Ownership and Adaptability: Takes responsibility for outcomes and welcomes feedback for improvement.
• Ability to work independently
• Achievement orientation
• Self-starter
• Concern for quality
• Flexibility
Preferred / Nice to Have:
• Experience supporting AI/ML or advanced analytics pipelines.
• Cloud platform experience (AWS, Azure, or GCP).
• Prior experience influencing enterprise data standards or reference architecture.
• Experience optimizing cost and performance in cloud data warehouses.
• Hands-on experience with Cribl, Apache Kafka, Kafka Connect, Spark Streaming, or Apache Flink.
Required Years of Experience:
• MUST have 5 to 7+ years of Data Engineering experience.
Education:
• Education: Bachelor's or higher required
• Discipline: Computer Science, Information Systems, Mathematics
Are there any specific companies/industries you'd like to see in the candidate's experience?
• High preference for candidates who have previously worked with a large-scale commercial utilities team; candidates with a background in large-scale capital projects will also be reviewed.
Preferred Interview Process Overview (High level):
Teams (Camera On)