

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in San Jose, CA or Seattle, WA (Hybrid) for 6 months at $69.72/hour. It requires 10+ years of experience building data pipelines, proficiency in SQL, PySpark, and Python, and experience with cloud platforms and orchestration tools.
Country: United States
Currency: $ USD
Day rate: $552
Date discovered: June 10, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Seattle, WA
Skills detailed: #Tableau #Monitoring #SQL (Structured Query Language) #PySpark #Spark (Apache Spark) #Data Pipeline #Documentation #Scala #AWS (Amazon Web Services) #Compliance #Python #Databricks #Anomaly Detection #Cloud #AWS Glue #Data Science #GCP (Google Cloud Platform) #Airflow #Azure #BI (Business Intelligence) #Big Data #ETL (Extract, Transform, Load) #Data Processing #Data Privacy #Microsoft Power BI #Datasets #Data Engineering
Role description
Job Title: Data Engineer
Location: San Jose, CA or Seattle, WA (Hybrid - 3 days/week onsite)
Duration: 6 months
Contract Type: W2 only
Pay Rate: $69.72/Hour
We are seeking a highly skilled Senior Data Engineer to join our Data Science Engineering team. In this role, you will design and maintain scalable data pipelines, orchestrate workflows, and develop dashboards that empower data-driven decision-making.
Key Responsibilities
• Design, build, and maintain robust data pipelines to support analytics and dashboarding needs.
• Collaborate with stakeholders to gather business requirements and translate them into actionable data solutions.
• Develop modular and efficient ETL workflows using orchestration tools.
• Ensure comprehensive documentation for maintainability and knowledge sharing.
• Build and deploy cloud-native applications (AWS, Azure, or GCP) with a focus on cost optimization.
• Write scalable, distributed data processing code using PySpark, Python, and SQL (a minimal example follows this list).
• Integrate with RESTful APIs to consume and process external data services.
• Create high-performance dashboards using Tableau or Power BI.
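For a sense of the day-to-day pipeline work described above, here is a minimal PySpark ETL sketch: read raw JSON events, validate and deduplicate them, and write a partitioned table that a Tableau or Power BI dashboard could sit on top of. The bucket paths, column names, and schema are illustrative assumptions, not details taken from this posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

# Extract: read raw JSON events from cloud storage (path is a placeholder).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: drop rows without an event_id, derive a partition date,
# and deduplicate on the event key.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: write a partitioned Parquet table for downstream dashboards.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/"))

In practice a job like this would typically be scheduled and monitored from an orchestration tool such as Airflow or AWS Glue rather than run ad hoc.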
Required Skills & Experience
• 10+ years of experience in building scalable data pipelines and analytical dashboards.
• Proficiency in SQL, PySpark, and Python for Big Data processing.
• Strong experience with cloud platforms and data warehousing technologies (e.g., Databricks).
• Expertise in workflow orchestration tools (e.g., Airflow, AWS Glue).
• Experience designing architectures for complex workflows, including data privacy and deletion.
• Proven ability to reduce technical debt and automate data workflows.
• Deep understanding of data privacy, governance, and compliance.
• Experience handling petabyte-scale datasets.
• Familiarity with anomaly detection, monitoring, and alerting systems.
• Strong documentation and presentation skills.
• Skilled in performance tuning for dashboards and queries.
• Knowledge of Medallion Architecture and Delta Live Tables (a brief sketch follows this list).
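Because the list above calls out Medallion Architecture and Delta Live Tables, here is a minimal sketch of a bronze/silver/gold layout using the Databricks DLT Python API. It assumes the code runs inside a DLT pipeline, where the dlt module and the spark session are provided by the runtime; the source path, column names, and data-quality expectation are illustrative assumptions.

import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw events landed as-is from cloud storage")
def events_bronze():
    # Path is a placeholder; a real pipeline might use Auto Loader instead.
    return spark.read.json("s3://example-bucket/raw/events/")

@dlt.table(comment="Silver: validated, deduplicated events")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def events_silver():
    return (
        dlt.read("events_bronze")
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])
    )

@dlt.table(comment="Gold: daily aggregates for BI dashboards")
def events_gold_daily():
    return (
        dlt.read("events_silver")
           .groupBy("event_date")
           .agg(F.count("*").alias("event_count"))
    )

The expectation on the silver table is where data-quality rules would live; the gold table is the layer a Tableau or Power BI dashboard would typically query.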
Preferred Qualifications
• Demonstrated ability to quickly adapt to new technologies and environments.
• Self-starter with a proven track record of delivering high-quality solutions with minimal ramp-up time.
• Strong problem-solving skills and a proactive mindset.