

Shoolin Inc
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 12+ years of experience, focusing on AWS Cloud Services, Terraform, CI/CD pipelines, ETL/ELT design, and Data Governance. Proficiency in Python, SQL, and PySpark is required. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 31, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Aurora #AWS (Amazon Web Services) #NoSQL #SQL Server #GitHub #Data Modeling #Terraform #Metadata #Python #Data Engineering #Lambda (AWS Lambda) #Apache Iceberg #SSIS (SQL Server Integration Services) #Infrastructure as Code (IaC) #SQL (Structured Query Language) #Spark (Apache Spark) #Cloud #Schema Design #Kafka (Apache Kafka) #Data Management #S3 (Amazon Simple Storage Service) #PySpark #Data Quality #Redshift #ETL (Extract, Transform, Load) #Qlik #Data Governance
Role description
Position: Cloud & Data Engineer
Experience: 12+ years (required)
Key Responsibilities & Skills:
• Strong experience with AWS Cloud Services – S3, Redshift, Aurora PostgreSQL, Glue, EMR, Lambda, Step Functions, CloudWatch
• Hands-on expertise with Infrastructure as Code using Terraform, Terraform Enterprise, and HCP (HashiCorp Cloud Platform)
• Building and managing CI/CD pipelines using Concourse and GitHub Actions
• Designing and optimizing ETL/ELT pipelines with Glue, PySpark, and Kafka (see the first sketch after this list)
• Prior experience developing ETLs using SSIS and SQL Server
• Skilled in Data Modeling (Dimensional and NoSQL) and schema design
• Experience implementing Data Governance practices – data quality, lineage, and stewardship
• Proficient in Python, SQL, and PySpark
• Exposure to Qlik Replicate and other data tools
• Expertise in performance tuning and cost optimization for EMR and PySpark workloads
• Experience using Apache Iceberg for metadata-driven upserts and historical data management (see the second sketch after this list)
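As an illustration of the Glue/PySpark bullet above, here is a minimal PySpark ETL sketch. Every name in it is a hypothetical placeholder (the S3 buckets example-raw-bucket and example-curated-bucket, the order_id, order_ts, and amount columns); it shows the extract-transform-load shape such a job might take, not this employer's actual pipeline, and the same session-based code would run on EMR or inside a Glue job.

```python
# Minimal PySpark ETL sketch; bucket paths and column names are
# hypothetical placeholders, not taken from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON landed in S3 (placeholder path)
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: deduplicate, derive a partition date, drop bad rows
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: partitioned Parquet in the curated zone (placeholder path)
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```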
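For the Apache Iceberg bullet, here is a hedged sketch of a metadata-driven upsert using Spark SQL's MERGE INTO, which the Iceberg Spark extensions support. The catalog name glue_catalog, the analytics.orders table, the staging path, and the order_id key are illustrative assumptions, as is the CDC framing; real catalog configuration values would depend on the environment.

```python
# Sketch of an Iceberg upsert via MERGE INTO; catalog, table, path, and
# key names are hypothetical. Config values are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-upsert")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.warehouse",
            "s3://example-warehouse-bucket/")
    .getOrCreate()
)

# Stage the incoming change batch (e.g., CDC output) as a temp view
updates = spark.read.parquet("s3://example-staging-bucket/orders_cdc/")
updates.createOrReplaceTempView("orders_updates")

# Upsert: update rows whose key already exists, insert the rest.
# Iceberg snapshots keep the pre-merge state queryable for history.
spark.sql("""
    MERGE INTO glue_catalog.analytics.orders AS t
    USING orders_updates AS s
    ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```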