

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a contract of unspecified length, paying up to $69.72 per hour. Key requirements include 10+ years of experience building data pipelines; proficiency in SQL, PySpark, Python, and cloud platforms; and experience with ETL workflows and data governance.
Country: United States
Currency: USD ($)
Day rate: $552
Date discovered: July 8, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: San Jose, CA
Skills detailed: #Data Pipeline #AWS Glue #Tableau #Spark (Apache Spark) #Cloud #Data Engineering #AWS (Amazon Web Services) #Compliance #Datasets #Documentation #GCP (Google Cloud Platform) #Anomaly Detection #Python #Big Data #Data Processing #Airflow #Azure #Scala #Microsoft Power BI #SQL (Structured Query Language) #Data Privacy #PySpark #Databricks #Monitoring #Data Science #BI (Business Intelligence) #ETL (Extract, Transform, Load)
Role description
We are seeking a highly skilled Senior Data Engineer to join our Data Science Engineering team. In
this role, you will design and maintain scalable data pipelines, orchestrate workflows, and develop
dashboards that empower data-driven decision-making.
Responsibilities:
• Design, build, and maintain robust data pipelines to support analytics and dashboarding needs.
• Collaborate with stakeholders to gather business requirements and translate them into actionable data solutions.
• Develop modular and efficient ETL workflows using orchestration tools.
• Ensure comprehensive documentation for maintainability and knowledge sharing.
• Build and deploy cloud-native applications (AWS, Azure, or GCP) with a focus on cost optimization.
• Write scalable, distributed data processing code using PySpark, Python, and SQL (see the sketch after this list).
• Integrate with RESTful APIs to consume and process external data services.
• Create high-performance dashboards using Tableau or Power BI.
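For illustration only, here is a minimal sketch of the kind of batch ETL step this role involves, assuming a Spark environment; the source path, column names, and target table are hypothetical and not taken from this posting.

# Minimal PySpark batch ETL sketch; all names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw order events landed by an upstream ingestion job.
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: drop invalid rows and aggregate spend per customer per day.
daily = (
    raw.filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("customer_id", "order_date")
       .agg(F.sum("order_total").alias("daily_spend"),
            F.count("*").alias("order_count"))
)

# Load: write a partitioned table that dashboards (Tableau / Power BI) can query.
daily.write.mode("overwrite").partitionBy("order_date").saveAsTable("analytics.orders_daily")

In practice, a job like this would typically be scheduled by an orchestration tool such as Airflow or AWS Glue and documented alongside the pipeline.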
Skills and Experience:
• 10+ years of experience in building scalable data pipelines and analytical dashboards.
• Proficiency in SQL, PySpark, and Python for Big Data processing.
• Strong experience with cloud platforms and data warehousing technologies (e.g., Databricks).
• Expertise in workflow orchestration tools (e.g., Airflow, AWS Glue).
• Experience designing architectures for complex workflows, including data privacy and deletion.
• Proven ability to reduce technical debt and automate data workflows.
• Deep understanding of data privacy, governance, and compliance.
• Experience handling petabyte-scale datasets.
• Familiarity with anomaly detection, monitoring, and alerting systems.
• Strong documentation and presentation skills.
• Skilled in performance tuning for dashboards and queries.
• Knowledge of Medallion Architecture and Delta Live Tables (see the sketch after this list).
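As a rough illustration of the Medallion pattern with Delta Live Tables, here is a Bronze-to-Silver sketch, assuming a Databricks DLT pipeline; dataset names, paths, and columns are hypothetical.

# Bronze -> Silver Medallion sketch using Delta Live Tables; names are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw order events ingested as-is.")
def orders_bronze():
    # `spark` is provided by the DLT pipeline runtime.
    return spark.read.json("/mnt/landing/orders/")

@dlt.table(comment="Silver: cleaned orders with a basic quality expectation.")
@dlt.expect_or_drop("valid_total", "order_total > 0")
def orders_silver():
    return (
        dlt.read("orders_bronze")
           .withColumn("order_date", F.to_date("order_ts"))
           .select("customer_id", "order_date", "order_total")
    )

A Gold layer would then aggregate the Silver table into the curated datasets that back the dashboards mentioned above.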
Preferred Qualifications:
• Demonstrated ability to quickly adapt to new technologies and environments.
• Self-starter with a proven track record of delivering high-quality solutions with minimal ramp-up time.
• Strong problem-solving skills and a proactive mindset.
Compensation:
• Up to $69.72 per hour.
36056341