FUSTIS LLC

AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer in Cambridge, MA, offering a 12-month contract at $50-54/hr. Key skills include Databricks, AWS services, ETL/ELT pipeline development, and Python/SQL programming. A Bachelor's or Master's degree in a relevant field is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
432
-
πŸ—“οΈ - Date
March 10, 2026
-
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Cambridge, MA
-
🧠 - Skills detailed
#Datasets #EC2 #Complex Queries #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Data Governance #Cloud #Compliance #Python #SQL (Structured Query Language) #Scala #Data Pipeline #Terraform #Data Architecture #R #Delta Lake #Bash #Computer Science #IAM (Identity and Access Management) #Data Processing #Data Engineering #Programming #Infrastructure as Code (IaC) #Lambda (AWS Lambda) #Deployment #Databricks #ETL (Extract, Transform, Load) #Documentation #Security #GitHub #Automation
Role description
Data and Systems Engineer

Job Location: Cambridge, MA – 02140 (Hybrid)
Contract Duration: 12-month contract
Pay rate: $50-54/hr on W2
Work authorization: USC, GC, GC-EAD, TN
Local candidates; driver's license & LinkedIn profile required

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or Information Systems.

Key Skills:
• Databricks (Unity Catalog & Delta Lake) – Hands-on experience building and managing data pipelines, clusters, notebooks, and governance using Unity Catalog and Delta Lake.
• AWS Cloud Services – Strong working knowledge of S3, IAM, Lambda, EC2, Glue, and EMR to build and manage cloud-based data infrastructure.
• ETL/ELT Data Pipeline Development – Experience designing, building, and optimizing scalable data pipelines for ingesting, transforming, and processing large datasets.
• Python & SQL Programming – Ability to write data processing scripts, automation workflows, and complex queries for analytics and engineering tasks.
• Cloud Data Architecture & System Integration – Experience with data platform architecture, system integrations, and scalable cloud environments.
• CI/CD & Infrastructure Automation – Experience with Terraform, GitHub Actions, or other CI/CD tools to automate deployments and manage infrastructure as code.

Required (Essential Skills & Experience):
• Hands-on experience with Databricks (Unity Catalog, Delta Lake, clusters, notebooks).
• Knowledge of Posit Workbench/Connect/Package Manager and R/Python workflows.
• Experience with AWS services: S3, IAM, Lambda, EC2, Glue, EMR, and CloudWatch.
• Ability to develop automation and workflows using Python, SQL, R, Bash, or PowerShell.
• Understanding of cloud architectures, data engineering patterns, and system integrations.

Desirable Skills & Experience:
• Hands-on modeling experience creating architecture diagrams (Visio, Draw.io, SqlDBM).
• Exposure to Terraform, CloudFormation, or GitHub Actions.
• Familiarity with scientific computing environments or R&D workflows on a global scale.
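For candidates gauging fit, the ETL/ELT pipeline work described above reduces to extracting raw records, transforming them, and loading curated output. Below is a minimal pure-Python sketch of that pattern; the dataset, column names, and validation rule are all hypothetical, and in the actual role this logic would run as PySpark on Databricks, writing Delta tables governed by Unity Catalog:

```python
# Minimal extract-transform-load sketch in plain Python.
# All schema and data here are illustrative, not from the job posting.

def extract(raw_rows):
    """Extract: parse raw CSV-like strings into dicts (hypothetical schema)."""
    records = []
    for line in raw_rows:
        sample_id, assay, value = line.split(",")
        records.append({"sample_id": sample_id, "assay": assay, "value": float(value)})
    return records

def transform(records, min_value=0.0):
    """Transform: drop invalid readings and normalize assay names."""
    return [
        {**r, "assay": r["assay"].strip().lower()}
        for r in records
        if r["value"] >= min_value
    ]

def load(records, table):
    """Load: append curated rows to an in-memory list (stand-in for a Delta table)."""
    table.extend(records)
    return len(records)

raw = ["S1, ELISA ,1.5", "S2,qPCR,-0.2", "S3, ELISA ,2.0"]
curated = []
loaded = load(transform(extract(raw)), curated)
print(loaded)  # rows that passed validation and were loaded
```

The same three-stage shape scales up directly: `extract` becomes a Spark read from S3, `transform` becomes DataFrame operations, and `load` becomes a Delta table write.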
Education Requirements:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a closely related discipline, or equivalent professional experience in a relevant technical field.

Responsibilities:
Contributes to the design, implementation, and governance of scalable data, analytics, and compute platforms supporting scientific and enterprise functions. This includes working with Databricks (Unity Catalog), AWS cloud infrastructure, and Posit analytical environments to enable secure, reliable, and efficient workflows.

Technical Architecture:
• Support development of reference architectures for data, analytics, and compute platforms.
• Ideate, develop, and document system design architecture; define and refine technical standards as needed.
• Assist with modeling data flows, source-to-target mappings, and governance frameworks.
• Participate in design reviews for Databricks, AWS, and Posit platform capabilities.

Platform Engineering & Integration:
• Support development and optimization of ETL/ELT pipelines in Databricks with Unity Catalog and Delta Lake.
• Assist with AWS architecture including S3, Glue, IAM, Lambda, EC2, EMR, and CloudWatch.
• Contribute to integrations between Databricks, Posit Workbench/Connect, and scientific data platforms.
• Support CI/CD and IaC workflows using Terraform, GitHub Actions, and AWS-native tooling.

Governance & Compliance:
• Support data governance initiatives including Unity Catalog permissions and lineage tracking.
• Assist with platform security reviews, compliance controls, and audit readiness.
• Contribute to documentation of architectural standards and operational safeguards.

Stakeholder Engagement:
• Collaborate with scientists, engineers, and platform teams to understand requirements.
• Support onboarding, training, and documentation for Databricks, AWS, and Posit users.
• Participate in platform roadmap discussions and capability assessments.