

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in San Ramon, CA, with a contract length of "unknown." The pay rate is "unknown." Candidates should have 3+ years of experience in Python, SQL, and big data frameworks, along with cloud platform expertise.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: August 7, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: San Ramon, CA
Skills detailed: #Grafana #Terraform #GCP (Google Cloud Platform) #Airflow #Spark (Apache Spark) #SQL (Structured Query Language) #Data Framework #Cloud #Storage #Data Science #Data Storage #Kafka (Apache Kafka) #Data Quality #Monitoring #dbt (data build tool) #Data Lake #Infrastructure as Code (IaC) #Docker #Python #AWS (Amazon Web Services) #DevOps #Security #Version Control #Data Pipeline #Data Engineering #Azure #Datadog #ETL (Extract, Transform, Load) #Agile #Kubernetes #Scala #Big Data #Computer Science
Role description
Position: Data Engineer
Location: San Ramon, CA (On-site)
We are seeking a skilled and motivated Data Engineer to join our growing data engineering team. This role is ideal for professionals passionate about building scalable data pipelines, implementing cloud-native solutions, and driving insights from complex data environments.
Key Responsibilities:
• Design, build, and maintain robust data pipelines using tools such as Python, Spark, and SQL
• Integrate structured and unstructured data from diverse sources into centralized data lakes or warehouses
• Collaborate with Data Scientists, Analysts, and DevOps teams to deliver high-quality solutions
• Implement scalable ETL/ELT frameworks across cloud platforms like AWS, Azure, or GCP
• Optimize data storage, retrieval, and throughput across big data ecosystems
• Ensure data quality, integrity, and security standards are upheld throughout the data lifecycle
• Monitor pipeline performance and resolve production issues with root cause analysis
Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
• 3+ years of experience with Python, SQL, and big data frameworks like Spark
• Hands-on experience with cloud platforms (AWS, Azure, GCP) and data warehousing
• Proficiency with ETL tools and orchestration frameworks (e.g., Airflow, dbt)
• Familiarity with containerization tools like Docker and orchestration with Kubernetes
• Strong understanding of version control, CI/CD, and agile development methodologies
• Excellent problem-solving and communication skills
Preferred Qualifications:
• Certifications in cloud services (AWS Certified Data Analytics, Azure Data Engineer, etc.)
• Experience with IaC (Terraform, CloudFormation) and monitoring tools (Datadog, Grafana)
• Exposure to streaming data technologies like Kafka or Kinesis