Senior AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 9, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Dallas, TX
🧠 - Skills detailed
#SQL (Structured Query Language) #Documentation #Data Lake #GIT #CLI (Command-Line Interface) #Bash #AWS (Amazon Web Services) #Data Processing #Snowflake #Data Warehouse #ML (Machine Learning) #Spark (Apache Spark) #AWS S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #Scala #Jupyter #Python #PostgreSQL #Big Data #Databases #Datasets #Cloud #Data Quality #Data Pipeline #Pandas #Terraform #Data Orchestration #Programming #Airflow #ETL (Extract, Transform, Load) #PySpark #Redshift #AWS CLI (Amazon Web Services Command Line Interface) #Apache Airflow #Linux #Docker #S3 (Amazon Simple Storage Service) #MySQL #Lambda (AWS Lambda) #Unix #Code Reviews #Computer Science #Data Science #Normalization #Monitoring #Athena #IAM (Identity and Access Management) #Data Engineering #Security #Visual Studio #DevOps
Role description
Job Title: Senior AWS Data Engineer
Experience: 10+ Years
Employment Type: Full-Time
Industry: Technology / Cloud / Financial Services / Healthcare / Retail

About the Role
We are seeking a Senior AWS Data Engineer with strong hands-on experience building scalable, secure, and high-performance data pipelines on AWS. The ideal candidate has a solid foundation in ETL frameworks, cloud platforms, and big data tools, and the ability to drive end-to-end data engineering solutions.

Responsibilities
• Design, develop, and maintain scalable ETL/ELT data pipelines using AWS-native and open-source tools.
• Ingest, clean, and transform large volumes of structured and unstructured data.
• Work with data lakes, data warehouses, and real-time data processing frameworks.
• Develop and manage data models and implement best practices for data partitioning, compression, and optimization.
• Collaborate with data scientists, analysts, and business stakeholders to deliver curated, high-quality datasets.
• Implement and manage data quality checks, auditing, and monitoring processes.
• Build and automate data orchestration workflows using tools such as Apache Airflow or AWS Step Functions.
• Ensure data platforms are secure, compliant, and cost-optimized using services such as IAM, KMS, S3, and Redshift.
• Participate in code reviews and documentation, and mentor junior engineers.

Key Technologies & Tools
• Cloud Platforms: AWS (S3, Glue, Redshift, Lambda, IAM, Athena, EMR, CloudFormation)
• Programming: Python, SQL, Bash
• ETL / Data Processing: Apache Airflow, Spark, Pandas
• Databases: Snowflake, PostgreSQL, MySQL
• DevOps / CI-CD: Git, Docker, Terraform (optional)
• Other Tools: Linux/Unix, Jupyter, Visual Studio Code, AWS CLI

Qualifications
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• 6+ years of experience in data engineering or software development roles.
• At least 3 years of deep experience building AWS-based data pipelines.
• Strong SQL and Python skills; experience with PySpark is a plus.
• Solid understanding of data warehousing, normalization, and modeling techniques.
• Experience with data versioning, schema evolution, and performance tuning.
• Working knowledge of cloud cost optimization, security, and governance.

Preferred Skills (Nice to Have)
• Experience with Snowflake or other modern data platforms.
• Knowledge of Kafka, Kinesis, or event-driven architectures.
• Exposure to machine learning pipelines or data science workflows.

Demonstrated ability to collaborate effectively within cross-functional teams to deliver scalable and reliable data solutions. Proven track record of leading initiatives to optimize data processing workflows and improve overall system efficiency.
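For context, the kind of Airflow-orchestrated pipeline the responsibilities above describe might look like the following minimal sketch. The DAG id, task names, bucket, and table are hypothetical placeholders for illustration, not details from this posting.

```python
# Minimal sketch of an Airflow DAG for an S3 -> Redshift style pipeline.
# All identifiers (DAG id, bucket, table) are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_s3(**context):
    """Placeholder: pull raw objects from a landing bucket (e.g. via boto3)."""
    print("extracting from s3://example-landing-bucket/raw/")


def transform(**context):
    """Placeholder: clean and normalize the extracted data (Pandas/PySpark)."""
    print("transforming and validating records")


def load_to_redshift(**context):
    """Placeholder: load curated data into a Redshift table."""
    print("loading into analytics.curated_events")


with DAG(
    dag_id="example_s3_to_redshift",   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_s3)
    clean = PythonOperator(task_id="transform", python_callable=transform)
    load = PythonOperator(task_id="load", python_callable=load_to_redshift)

    # Linear dependency chain: extract, then transform, then load.
    extract >> clean >> load
```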