

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Mountain View, CA, on a W2 contract. It requires 3+ years of production Python experience plus hands-on AWS, Linux, Terraform, and Databricks skills. Familiarity with CI/CD and cloud-native tools is essential; certifications are a plus.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
June 12, 2025
Project duration
Unknown
Location type
On-site
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Mountain View, CA
Skills detailed
#Data Processing #Scala #Logging #EC2 #Docker #Infrastructure as Code (IaC) #Datadog #AWS (Amazon Web Services) #Cloud #Monitoring #ETL (Extract, Transform, Load) #Security #Databricks #IAM (Identity and Access Management) #Prometheus #Automation #DevOps #RDS (Amazon Relational Database Service) #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Deployment #Terraform #Data Engineering #Linux #Python #Kubernetes #Data Pipeline
Role description
JOB ROLE: AWS Data Platform Engineer
LOCATION: Mountain View, CA
CONTRACT: W2 Only, No C2C/1099
VISA: GC, USC & H4 EAD
JOB DESCRIPTION:
We are seeking a highly skilled and motivated Cloud & Data Platform Engineer with strong experience in Python, AWS, Linux, Terraform, and Databricks. The ideal candidate will be responsible for developing, automating, and maintaining our cloud-based infrastructure and data platform solutions. Experience with modern DevOps practices and cloud-native tools is essential.
Expertise You'll Bring
• 3+ years of experience with Python in a production environment.
• Hands-on expertise with AWS services such as EC2, S3, IAM, Lambda, RDS, Glue, etc. (see the sketch after this list).
• Proficiency in Terraform and Infrastructure as Code (IaC) principles.
• Strong working knowledge of Linux system administration.
• Practical experience developing and deploying solutions on Databricks.
• Familiarity with CI/CD tools and practices.
• Excellent problem-solving and communication skills.
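To make the Python and AWS items above concrete, here is a minimal, hedged sketch of a routine automation task in this stack. It assumes boto3 is installed and AWS credentials are already configured; the bucket name, file path, and object key are placeholders, not details from this posting.

import boto3

def upload_report(bucket: str, local_path: str, key: str) -> None:
    """Upload a local file to S3 and confirm the object landed."""
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)        # push the object to S3
    head = s3.head_object(Bucket=bucket, Key=key)  # verify it exists
    print(f"Uploaded {key}: {head['ContentLength']} bytes")

if __name__ == "__main__":
    # Placeholder values for illustration only.
    upload_report("example-data-bucket", "daily_report.csv", "reports/daily_report.csv")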
What You'll Do
• Develop infrastructure as code using Terraform to manage and provision AWS resources.
• Design, build, and maintain scalable Databricks data pipelines and workflows (a minimal example follows this list).
• Write clean, maintainable Python code for automation and data processing tasks.
• Manage and support Linux-based systems, ensuring system availability, performance, and security.
• Collaborate with data engineers, DevOps teams, and cloud architects to ensure seamless deployment and operations of data products.
• Implement monitoring, logging, and alerting solutions for cloud and data services.
• Optimize cloud resource utilization and drive cost-efficiency across AWS environments.
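The Databricks pipeline item above might look like the following PySpark step, shown here only as an illustrative sketch. The table names, paths, and columns are invented for the example; on Databricks the spark session already exists, so the builder line is only needed when running locally.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Read raw order events, roll them up by day and region, and publish a table.
orders = spark.read.format("delta").load("/mnt/raw/orders")
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),
    )
)
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_daily")

A scheduled Databricks workflow (or a job deployed through CI/CD) would typically run a step like this on a daily cadence.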
Good to Have:
• Experience with Docker and container orchestration (e.g., Kubernetes).
• Exposure to monitoring tools like CloudWatch, Prometheus, or Datadog (see the sketch below).
• Understanding of data engineering principles and ETL frameworks.
• Knowledge of security best practices in cloud environments.
• Certifications such as AWS Certified Solutions Architect, Terraform Associate, or Databricks Certified Developer.
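For the monitoring and alerting items mentioned above, one common pattern is to emit custom CloudWatch metrics from pipeline jobs and alarm on them. The sketch below again assumes boto3 and AWS credentials are available; the namespace, metric name, and job name are placeholders rather than details from this role.

import boto3

cloudwatch = boto3.client("cloudwatch")

def report_rows_processed(job_name: str, row_count: int) -> None:
    """Publish a custom metric that an alarm can watch (e.g., for drops to zero)."""
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Pipelines",  # placeholder namespace
        MetricData=[{
            "MetricName": "RowsProcessed",
            "Dimensions": [{"Name": "Job", "Value": job_name}],
            "Value": float(row_count),
            "Unit": "Count",
        }],
    )

report_rows_processed("orders_daily_rollup", 12345)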