

Data Engineer – Databricks | Python | AWS | CI/CD (Face-to-Face Interview)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience in Databricks, Python, AWS, and CI/CD. The contract is on-site in Boston, MA, offering a competitive pay rate. Key skills include Apache Spark, Git, and data pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
July 12, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Boston, MA
🧠 - Skills detailed
#Databricks #GitHub #Redshift #Spark (Apache Spark) #Python #Terraform #Scala #Jenkins #Apache Spark #Scripting #PySpark #Data Engineering #Data Pipeline #DevOps #Azure DevOps #S3 (Amazon Simple Storage Service) #Data Analysis #Cloud #CLI (Command-Line Interface) #AWS (Amazon Web Services) #Data Science #Data Manipulation #Azure #Lambda (AWS Lambda) #GIT #REST API #REST (Representational State Transfer)
Role description
Role: Data Engineer – Databricks | Python | AWS | CI/CD
Location: Boston, MA (local candidates only)
Type: Contract
Experience: 5+ Years
Job Summary:
We are looking for an experienced Data Engineer with strong hands-on skills in Databricks, Python, AWS services, and CI/CD pipelines to develop scalable data solutions and automate data workflows. The candidate will work closely with data analysts, data scientists, and DevOps teams to build and maintain high-performance, reliable data pipelines.
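To make the day-to-day concrete, here is a minimal sketch of the kind of pipeline this role describes: a PySpark job that reads raw CSV from S3, applies light cleanup, and writes curated, partitioned Parquet. The bucket paths, column names, and app name are illustrative placeholders, not details from this posting.

```python
# Minimal illustrative PySpark pipeline (all paths and columns are placeholders).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Read raw CSV landed in S3 (hypothetical bucket).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Light cleanup: drop rows missing key fields, derive a date partition column.
cleaned = (
    orders
    .dropna(subset=["order_id", "order_ts"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write curated output, partitioned by date (hypothetical bucket).
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

On Databricks, a job like this would typically run as a scheduled workflow on a jobs cluster, with the `spark` session supplied by the runtime.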
Required Skills:
- Strong experience with Databricks (notebooks, jobs, clusters, workflows)
- Proficiency in Python (data manipulation, APIs, scripting)
- Hands-on experience with the AWS ecosystem: S3, Glue, Lambda, Redshift, CloudWatch
- Solid knowledge of Apache Spark and PySpark
- Experience with CI/CD tools: Git, Jenkins, GitHub Actions, or Azure DevOps
- Familiarity with Terraform, the Databricks CLI, or REST APIs is a plus (see the sketch below)
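For the last item above, here is a minimal sketch of what driving Databricks over its REST API can look like: triggering an existing job with the Jobs 2.1 `run-now` endpoint. The workspace URL and job ID are placeholders, and the access token is assumed to be supplied via an environment variable; none of this comes from the posting itself.

```python
# Minimal illustrative call to the Databricks Jobs 2.1 REST API.
import os
import requests

DATABRICKS_HOST = "https://example-workspace.cloud.databricks.com"  # placeholder
TOKEN = os.environ["DATABRICKS_TOKEN"]  # assumed to be set in the environment

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": 12345},  # placeholder job ID
    timeout=30,
)
resp.raise_for_status()
print("Triggered run:", resp.json()["run_id"])
```

In practice, a trigger like this often sits in a CI/CD step (Jenkins or GitHub Actions, for example) after tests pass, which is how the skills in this list typically combine.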