Oliver James

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 6-12 month contract at a pay rate of $60-80 per hour. The work is remote and requires 4+ years of experience, along with proficiency in Databricks, PySpark, SQL, and cloud platforms (AWS, Azure, GCP).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
640
🗓️ - Date
April 1, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Big Data #AWS (Amazon Web Services) #Fivetran #Data Warehouse #PySpark #Spark (Apache Spark) #Data Lake #Data Engineering #Cloud #Compliance #Vault #GCP (Google Cloud Platform) #Data Governance #Data Quality #Data Modeling #ETL (Extract, Transform, Load) #SQL Queries #Security #Data Science #Data Vault #Databricks #Scala #SQL (Structured Query Language) #Monitoring #Datasets #Azure
Role description
Data Engineer
Remote
Pay Rate: $60-80 per hour
Type: Contract with potential for C2H (Contract to Hire), W2 only
Contract Duration: 6-12 months+
• Not open to 3rd parties

About the Role
We are currently working with a company seeking a Data Engineer to help design, build, and maintain scalable data platforms that support analytics, operations, and decision-making across the organization. This role sits at the intersection of cloud infrastructure, big data engineering, and business analytics, enabling teams to reliably access high-quality data at scale. You'll work closely with data scientists, analysts, and business stakeholders to ensure data is accurate, performant, and available to power critical initiatives across energy generation, customer analytics, pricing, and operations.

Key Responsibilities
• Design, build, and maintain ETL/ELT pipelines to ingest, transform, and curate large-scale datasets (see the sketch after this list for a flavor of this work).
• Develop and optimize data workflows using Databricks and PySpark.
• Build and manage data warehouse and data lake architectures.
• Write and optimize complex SQL queries for analytics and reporting use cases.
• Partner with analytics and business teams to translate requirements into scalable data solutions.
• Ensure data quality, reliability, monitoring, and performance.
• Work within cloud environments (AWS, Azure, and/or GCP) to manage data infrastructure.
• Support data governance, security, and compliance requirements in a regulated environment.
• Continuously improve pipeline efficiency, scalability, and cost.

Required Qualifications
• 4+ years of experience in a Data Engineer, Data Platform Engineer, or similar role.
• Hands-on experience with Databricks and PySpark.
• Strong proficiency in SQL and data modeling concepts.
• Experience designing and maintaining ETL/ELT pipelines.
• Experience working with cloud data platforms (AWS, Azure, or GCP).
• Solid understanding of data warehousing and lakehouse architectures.
• Experience working with large, complex datasets in production environments.

Bonus Qualifications
• Experience with data integration and ELT tools such as Fivetran.
• Knowledge of the Data Vault methodology for designing scalable and flexible data models.
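To give a flavor of the pipeline work described above, below is a minimal PySpark sketch of an ingest-transform-load step of the kind commonly run on Databricks. It assumes a Databricks-style environment with Delta Lake available; the paths, table names, and columns are hypothetical and would differ in the actual client environment.

```python
# Minimal ELT sketch: read raw landing-zone files, apply basic data-quality
# rules, and write a curated, partitioned Delta table for analytics.
# All paths, tables, and column names below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_meter_readings").getOrCreate()

# Ingest: raw CSV files from a landing zone (schema inferred for brevity).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/landing/meter_readings/")
)

# Transform: de-duplicate, filter out invalid readings, derive a date column.
curated = (
    raw
    .dropDuplicates(["meter_id", "reading_ts"])
    .filter(F.col("kwh").isNotNull() & (F.col("kwh") >= 0))
    .withColumn("reading_date", F.to_date("reading_ts"))
)

# Load: write a partitioned Delta table for downstream reporting and analytics.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("reading_date")
    .saveAsTable("analytics.curated_meter_readings")
)
```

In practice a job like this would be scheduled (for example as a Databricks job), instrumented with data-quality monitoring, and tuned for cost and performance, in line with the responsibilities listed above.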