

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 10+ years of experience, including 5+ years in Python and 3+ years in PySpark. It is a 100% on-site position in McLean, VA, with a focus on AWS and data pipeline optimization.
Country
United States
Currency
$ USD
-
Day rate
-
Date discovered
May 21, 2025
Project duration
Unknown
-
Location type
On-site
-
Contract type
Unknown
-
Security clearance
Unknown
-
Location detailed
McLean, VA
-
Skills detailed
#Python #Consulting #PySpark #AWS (Amazon Web Services) #Security #Data Architecture #Kafka (Apache Kafka) #Data Science #Data Engineering #Lambda (AWS Lambda) #ETL (Extract, Transform, Load) #Redshift #Spark (Apache Spark) #Batch #Data Modeling #Data Processing #Compliance #Scala #Snowflake #Data Governance #S3 (Amazon Simple Storage Service) #Airflow #Data Pipeline #Data Integrity
Role description
About Us: CirrusLabs is a leading consulting firm based in Alpharetta, GA, specializing in delivering innovative technical solutions to clients across various industries. We are committed to excellence, agility, and exceeding customer expectations.
About the Role
CirrusLabs, in partnership with Unisys, is seeking a Senior Data Engineer to support our end client, Freddie Mac, in McLean, VA. This role requires 10+ years of hands-on experience in data engineering and is a 100% on-site position. The selected candidate will work directly with business stakeholders and engineering teams to build scalable, secure, and high-performing data pipelines and platforms.
Must-Have Technical Skills
• 10+ years of overall software/data engineering experience
• 5+ years of experience working with Python in a production environment
• 3+ years of experience with PySpark and distributed data processing
• Strong experience working with AWS services (especially S3, Glue, EMR, Lambda, Redshift)
• Proven ability to build and optimize ETL pipelines and batch/streaming data solutions
• Deep understanding of data modeling, data architecture, and performance tuning
Nice-to-Have Skills
• Experience with CI/CD practices for data workflows
• Familiarity with data governance and compliance best practices
• Experience working in financial services or with GSE clients
• Exposure to tools like Airflow, Kafka, or Snowflake
Key Responsibilities
• Design and implement scalable data pipelines using Python, PySpark, and AWS
• Work collaboratively with data scientists, analysts, and business partners
• Optimize existing data workflows for performance and reliability
• Ensure data integrity, quality, and security across all platforms
• Troubleshoot and resolve production issues in a timely manner
Work Authorization
• Must be authorized to work in the United States
• H-1B and other visa holders are welcome, but local candidates are preferred due to the on-site requirement
Location Requirement
• 100% on-site in McLean, VA
• No remote or hybrid option available