

Cloud Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Cloud Engineer with a contract length of "unknown", offering a pay rate of "unknown". Required skills include Python, API, .Net, SQL, and AWS services. Candidates should have 13+ years of experience, with 8+ years focused on AWS cloud data services, preferably in BFSI.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 25, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Fort Mill, SC
Skills detailed
#Tableau #Security #Scrum #Data Science #AWS Glue #Business Analysis #Compliance #Storage #Data Engineering #AWS (Amazon Web Services) #Data Pipeline #Data Security #Data Storage #Data Warehouse #Visualization #Computer Science #Jenkins #Data Modeling #Monitoring #ETL (Extract, Transform, Load) #GitHub #.Net #S3 (Amazon Simple Storage Service) #Data Lake #Agile #Scala #Microsoft Power BI #Snowflake #BI (Business Intelligence) #API (Application Programming Interface) #Data Quality #Spark (Apache Spark) #Cloud #Datasets #Lambda (AWS Lambda) #Athena #PySpark #SQL (Structured Query Language) #Python #Redshift
Role description
Must have: Python, API, .Net, SQL, AWS services.
Job Summary
We are seeking a highly skilled AWS Engineer to design, develop, and optimize large-scale data pipelines and ETL workflows on AWS. The ideal candidate will have strong expertise in AWS cloud-native data services, data modeling, and pipeline orchestration, with hands-on experience building robust and scalable data solutions for enterprise environments.
Key Responsibilities
• Design, develop, and maintain ETL pipelines using AWS Glue, Glue Studio, and Glue Catalog.
• Ingest, transform, and load large datasets from structured and unstructured sources into AWS data lakes/warehouses.
• Work with S3, Redshift, Athena, Lambda, and Step Functions for data storage, query, and orchestration.
• Build and optimize PySpark/Scala scripts within AWS Glue for complex transformations.
• Implement data quality checks, lineage, and monitoring across pipelines.
• Collaborate with business analysts, data scientists, and product teams to deliver reliable data solutions.
• Ensure compliance with data security, governance, and regulatory requirements (BFSI preferred).
• Troubleshoot production issues and optimize pipeline performance.
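As an illustration of the data-quality responsibility above, here is a minimal sketch of a row-level validation step in plain Python. The field names (`account_id`, `amount`) and the helper `check_rows` are hypothetical, not from the posting; in a real AWS Glue job these rules would typically run as PySpark DataFrame filters before the load step.

```python
# Hypothetical required schema for incoming records (illustrative only).
REQUIRED_FIELDS = ("account_id", "amount")

def check_rows(rows):
    """Split rows into (valid, rejected) using two simple rules:
    every required field is present and non-null, and amount is numeric.
    Rejected rows would be routed to a quarantine location for review."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in REQUIRED_FIELDS if row.get(f) is None]
        if missing or not isinstance(row.get("amount"), (int, float)):
            rejected.append(row)
        else:
            valid.append(row)
    return valid, rejected
```

The same split-and-quarantine pattern scales to a Glue/PySpark job by expressing each rule as a DataFrame filter and writing the rejected partition to its own S3 prefix for monitoring.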
Required Qualifications
• 13+ years of experience, with at least 8+ years on AWS cloud data services.
• Strong expertise in AWS Glue, S3, Redshift, Athena, Lambda, Step Functions, CloudWatch.
• Proficiency in PySpark, Python, and SQL for ETL and data transformations.
• Experience in data modeling (star, snowflake, dimensional models) and performance tuning.
• Hands-on experience with data lake/data warehouse architecture and implementation.
• Strong problem-solving skills and ability to work in Agile/Scrum environments.
Preferred Qualifications
• Experience in the BFSI / Wealth Management domain.
• AWS Certified Data Analytics – Specialty or AWS Solutions Architect certification.
• Familiarity with CI/CD pipelines for data engineering (CodePipeline, Jenkins, GitHub Actions).
• Knowledge of BI/visualization tools such as Tableau, Power BI, QuickSight.
Education
• Bachelor's degree in Computer Science, Information Technology, Engineering, or related field.
• Master's degree preferred.