Data Architect

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Architect position based in Charlotte, NC, on a contract basis. It requires 5+ years of experience in Data Engineering, strong Python and PySpark skills, and expertise in AWS services. Key responsibilities include designing scalable ETL/ELT pipelines and optimizing data architectures.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 24, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Programming #ETL (Extract, Transform, Load) #Monitoring #Delta Lake #Infrastructure as Code (IaC) #Cloud #Big Data #Terraform #Data Lake #Docker #Schema Design #Data Pipeline #Redshift #Kafka (Apache Kafka) #Python #ML (Machine Learning) #GIT #SQL (Structured Query Language) #Data Engineering #PySpark #Data Architecture #Batch #Data Science #Apache Iceberg #S3 (Amazon Simple Storage Service) #Spark (Apache Spark) #Athena #Data Modeling #Datasets #Data Warehouse #Databricks #Scala #Data Processing #Kubernetes #Data Framework #Lambda (AWS Lambda) #AWS (Amazon Web Services)
Role description
Position: Data Architect
Location: Charlotte, NC (Hybrid)
Employment Type: Contract

About the Role
We are seeking a skilled Data Engineer / Data Architect with strong expertise in Python, PySpark, and AWS to design, build, and optimize scalable data pipelines and cloud-based data architectures. The ideal candidate will have experience working with large-scale datasets, cloud-native services, and modern data engineering practices.

Key Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines using Python and PySpark (see the illustrative sketch at the end of this description).
• Architect and optimize data lakes and data warehouses on AWS.
• Work with AWS services such as Glue, EMR, Redshift, S3, Lambda, Athena, Kinesis, and Step Functions.
• Ensure data reliability, quality, and governance across platforms.
• Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
• Implement best practices for performance tuning, monitoring, and cost optimization.
• Support real-time and batch data processing frameworks.
• Provide architectural guidance and mentor junior data engineers.

Required Skills & Qualifications
• 5+ years of experience in Data Engineering or Data Architecture roles.
• Strong programming skills in Python and hands-on experience with PySpark.
• Expertise in the AWS cloud ecosystem (Glue, EMR, Redshift, S3, Lambda, Athena, etc.).
• Experience with data modeling, schema design, and SQL performance optimization.
• Knowledge of data lake, data warehouse, and big data frameworks.
• Familiarity with CI/CD pipelines, Git, and Infrastructure-as-Code (IaC) tools (CloudFormation/Terraform).
• Strong understanding of ETL/ELT processes and best practices.
• Excellent problem-solving, analytical, and communication skills.

Nice to Have
• Experience with Delta Lake / Apache Iceberg / Databricks.
• Exposure to streaming frameworks (Kafka, Kinesis, Flink, Spark Streaming).
• Familiarity with containerization (Docker, Kubernetes, EKS).
• Background in machine learning pipelines or MLOps.
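
Example Pipeline Sketch (illustrative only)
To give a flavour of the ETL/ELT work described under Key Responsibilities, here is a minimal PySpark sketch of a batch pipeline that reads raw data from S3, applies basic cleansing, and writes partitioned Parquet to a curated zone. The bucket names, paths, and column names are assumptions for illustration and are not part of this posting.

# Minimal PySpark ETL sketch; all paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON landed in an S3 data lake (hypothetical bucket)
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: deduplicate, drop invalid rows, derive a partition column
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_timestamp"))
)

# Load: write curated, partitioned Parquet for downstream Athena/Redshift queries
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-curated-bucket/orders/"))

A pipeline like this would typically run on AWS Glue or EMR and be orchestrated with Step Functions; a streaming variant would swap the batch read for a Kinesis or Kafka source.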