Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 10+ years of experience, including 5+ years in Python and 3+ years in PySpark. It is a 100% on-site position in McLean, VA, with a focus on AWS and data pipeline optimization.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
May 21, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
On-site
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
McLean, VA
-
🧠 - Skills detailed
#Python #Consulting #PySpark #AWS (Amazon Web Services) #Security #Data Architecture #Kafka (Apache Kafka) #Data Science #Data Engineering #Lambda (AWS Lambda) #ETL (Extract, Transform, Load) #Redshift #Spark (Apache Spark) #Batch #Data Modeling #Data Processing #Compliance #Scala #Snowflake #Data Governance #S3 (Amazon Simple Storage Service) #Airflow #Data Pipeline #Data Integrity
Role description
About Us
CirrusLabs is a leading consulting firm based in Alpharetta, GA, specializing in delivering innovative technical solutions to clients across various industries. We are committed to excellence, agility, and exceeding customer expectations.
About the Role
CirrusLabs, in partnership with Unisys, is seeking a Senior Data Engineer to support our end client, Freddie Mac, in McLean, VA. This role requires 10+ years of hands-on experience in data engineering and is a 100% on-site position. The selected candidate will work directly with business stakeholders and engineering teams to build scalable, secure, and high-performing data pipelines and platforms.
Must-Have Technical Skills
• ✅ 10+ years of overall software/data engineering experience
• ✅ 5+ years of experience working with Python in a production environment
• ✅ 3+ years of experience with PySpark and distributed data processing
• ✅ Strong experience working with AWS services (especially S3, Glue, EMR, Lambda, Redshift)
• ✅ Proven ability to build and optimize ETL pipelines and batch/streaming data solutions
• ✅ Deep understanding of data modeling, data architecture, and performance tuning
Nice-to-Have Skills
• Experience with CI/CD practices for data workflows
• Familiarity with data governance and compliance best practices
• Experience working in financial services or with GSE clients
• Exposure to tools like Airflow, Kafka, or Snowflake
Key Responsibilities
• Design and implement scalable data pipelines using Python, PySpark, and AWS
• Work collaboratively with data scientists, analysts, and business partners
• Optimize existing data workflows for performance and reliability
• Ensure data integrity, quality, and security across all platforms
• Troubleshoot and resolve production issues in a timely manner
Work Authorization
• Must be authorized to work in the United States
• H-1B and other visa holders are welcome, but local candidates are preferred due to the on-site requirement
Location Requirement
• 100% on-site in McLean, VA
• No remote or hybrid option available