Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 9, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
McLean, VA
🧠 - Skills detailed
#Data Ingestion #SQL (Structured Query Language) #Data Lake #GIT #SnowPipe #Automation #Data Processing #Snowflake #AWS (Amazon Web Services) #Data Warehouse #Spark (Apache Spark) #Scala #Scripting #Data Modeling #Python #Databases #Data Governance #Agile #Datasets #Cloud #Data Pipeline #Airflow #ETL (Extract, Transform, Load) #Version Control #PySpark #Redshift #Data Analysis #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Migration #EC2 #Scrum #IAM (Identity and Access Management) #Security #Data Engineering
Role description
Data Engineer
McLean, VA - Onsite | 12-month contract

Job Summary
We are looking for an experienced Data Engineer with strong expertise in Snowflake, PySpark, and AWS to design and implement scalable data solutions. The ideal candidate will build robust data pipelines, optimize data workflows, and enable advanced analytics by ensuring seamless integration across data platforms.

Key Responsibilities
• Design, develop, and maintain ETL/ELT pipelines using PySpark and AWS services (a minimal PySpark sketch follows this description).
• Work with Snowflake to design efficient schemas, optimize queries, and manage large-scale datasets.
• Develop scalable and reliable data ingestion and transformation workflows.
• Collaborate with data analysts, data scientists, and business stakeholders to deliver high-quality data solutions.
• Implement best practices for data governance, security, and performance tuning.
• Monitor, troubleshoot, and optimize data pipelines for performance and cost efficiency.
• Support the migration and integration of data from legacy systems to Snowflake on AWS.

Required Skills & Qualifications
• Strong hands-on experience with Snowflake: data modeling, query optimization, Snowpipe, tasks, and streams (see the ingestion sketch below).
• Proficiency in PySpark for large-scale data processing and transformations.
• In-depth knowledge of AWS cloud services (S3, Glue, Lambda, EMR, Redshift, EC2, IAM).
• Strong SQL skills and experience with performance tuning in Snowflake and other relational databases.
• Experience building and managing data pipelines in production environments.
• Familiarity with CI/CD pipelines and version control (Git).
• Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
• Experience with orchestration tools such as Airflow or AWS Step Functions (see the Airflow sketch below).
• Exposure to data lake and data warehouse architectures.
• Knowledge of Python for automation and scripting.
• Experience with Agile/Scrum methodologies.
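As a rough illustration of the ETL/ELT work this role describes, here is a minimal PySpark sketch that reads raw JSON events from S3, cleans them, and appends them to a Snowflake table through the Snowflake Spark connector. Every bucket, table, column, and connection value is a placeholder assumption, not something specified in this posting.

```python
# Minimal PySpark ETL sketch: raw S3 events -> cleaned Snowflake table.
# All names and credentials below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("events-etl")
    # The Snowflake Spark connector must be on the classpath, e.g. via
    # --packages net.snowflake:spark-snowflake_2.12:<version>.
    .getOrCreate()
)

# Extract: read raw JSON events from a (hypothetical) S3 data lake prefix.
# On EMR the s3:// scheme works out of the box; elsewhere use s3a:// with hadoop-aws.
raw = spark.read.json("s3://example-data-lake/raw/events/")

# Transform: drop malformed rows, normalize types, deduplicate, stamp load time.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("loaded_at", F.current_timestamp())
       .dropDuplicates(["event_id"])
)

# Load: append to Snowflake through the Spark connector.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",  # placeholder account
    "sfUser": "ETL_USER",
    "sfPassword": "...",       # pull from a secrets manager in production
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}

(clean.write
      .format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "EVENTS_CLEAN")
      .mode("append")
      .save())
```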
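The Snowflake features named in the qualifications (Snowpipe, streams, tasks) fit together roughly as below. This sketch issues illustrative DDL through the snowflake-connector-python package; all object names, credentials, schedules, and column lists are assumptions for illustration only.

```python
# Sketch of Snowflake-side ingestion objects: Snowpipe -> stream -> task.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",  # placeholder credentials
    user="ETL_USER",
    password="...",
    database="ANALYTICS",
    schema="STAGING",
    warehouse="ETL_WH",
)

ddl_statements = [
    # Snowpipe: auto-ingest files landing in an external stage into a raw table.
    """CREATE PIPE IF NOT EXISTS events_pipe AUTO_INGEST = TRUE AS
       COPY INTO events_raw FROM @events_stage FILE_FORMAT = (TYPE = 'JSON')""",
    # Stream: track newly ingested rows for incremental processing.
    """CREATE STREAM IF NOT EXISTS events_raw_stream ON TABLE events_raw""",
    # Task: periodically move new rows into the curated table; selecting
    # explicit (illustrative) columns avoids pulling the stream's metadata columns.
    """CREATE TASK IF NOT EXISTS merge_events
       WAREHOUSE = ETL_WH
       SCHEDULE = '5 MINUTE'
       WHEN SYSTEM$STREAM_HAS_DATA('EVENTS_RAW_STREAM')
       AS INSERT INTO events_clean (event_id, event_ts, payload)
          SELECT event_id, event_ts, payload FROM events_raw_stream""",
    # Tasks are created suspended; resume to start the schedule.
    """ALTER TASK merge_events RESUME""",
]

with conn.cursor() as cur:
    for stmt in ddl_statements:
        cur.execute(stmt)
conn.close()
```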
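For the preferred orchestration experience, a minimal Airflow DAG might schedule the PySpark job above via spark-submit. This assumes Airflow 2.4+ (the `schedule` argument); the DAG id, cadence, connector version, and script path are all hypothetical.

```python
# Hypothetical Airflow DAG that runs the PySpark ETL job once per day.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="events_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="spark_submit_events_etl",
        bash_command=(
            "spark-submit "
            "--packages net.snowflake:spark-snowflake_2.12:2.16.0 "
            "/opt/jobs/events_etl.py"   # placeholder path to the job script
        ),
    )
```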