

Wise Equation Solutions Inc.
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in McLean, VA, offering a contract of unspecified length at an unlisted pay rate. Candidates must have 7+ years of experience and strong skills in Python, PySpark, AWS, and data engineering in enterprise environments.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: March 3, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: McLean, VA
Skills detailed:
#ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Data Science #Data Processing #ML (Machine Learning) #AWS (Amazon Web Services) #Data Warehouse #GIT #PySpark #Data Modeling #Python #Redshift #Logging #Compliance #Data Engineering #Scala #Data Architecture #Data Pipeline #S3 (Amazon Simple Storage Service) #Data Lake #Data Quality #Cloud #Security #Spark (Apache Spark) #Computer Science #Automation #Apache Spark #IAM (Identity and Access Management) #Monitoring #SQL (Structured Query Language)
Role description
Job Title: Data Engineer
Location: McLean, VA – Onsite interview
Roles and Responsibilities:
Freddie Mac – Data Engineer – McLean, VA – Glider pre-screen + onsite interview
Skill Highlights – please indicate the number of years of experience with each of the following skills:
• Python
• PySpark and Apache Spark
• Experience building scalable distributed data systems
• ETL design patterns
• Experience with SQL
• Data modeling
• Familiarity with Git-based workflows
AWS environments:
• S3
• EMR
• Glue
• Lambda
• IAM
• Redshift (preferred)
Position Overview:
• Client is seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable data pipelines and analytics platforms supporting enterprise data initiatives.
• This role requires strong hands-on expertise in Python, PySpark, Spark, and AWS to support large-scale data processing, transformation, and cloud-native data architectures.
• The ideal candidate will have deep experience in distributed data systems, cloud engineering, and building reliable, high-performance data solutions in regulated enterprise environments.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Python and PySpark
• Build and optimize distributed data processing frameworks using Apache Spark
• Engineer and manage cloud-based data solutions on AWS
• Develop ETL/ELT workflows to process structured and semi-structured data
• Optimize Spark jobs for performance, scalability, and cost efficiency
• Implement data quality checks, logging, and monitoring mechanisms
• Support data lake and data warehouse integrations
• Collaborate with Data Scientists, Analysts, and Engineering teams to support analytics and ML workloads
• Ensure compliance with enterprise governance, security, and data standards
• Participate in CI/CD automation for data pipelines
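The responsibilities above center on ETL/ELT workflows with data quality checks. A minimal sketch of that pattern is below — field names, quality rules, and data are all hypothetical, and it uses only the Python standard library for portability; a pipeline at the scale this role describes would typically use PySpark rather than plain Python:

```python
import csv
import io

# Hypothetical raw feed: a small CSV with one bad row (missing amount).
RAW = """id,amount,region
1,100.50,VA
2,,MD
3,-7.25,VA
"""

def extract(text):
    """Extract: parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: coerce types and apply a data quality check,
    routing rows that fail the check to a reject list for logging."""
    clean, rejected = [], []
    for row in rows:
        try:
            amount = float(row["amount"])  # quality check: amount must be numeric
        except (TypeError, ValueError):
            rejected.append(row)
            continue
        clean.append({"id": int(row["id"]), "amount": amount, "region": row["region"]})
    return clean, rejected

def load(clean):
    """Load: aggregate per region as a stand-in for a warehouse write."""
    totals = {}
    for row in clean:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

rows = extract(RAW)
clean, rejected = transform(rows)
print(load(clean))    # {'VA': 93.25}
print(len(rejected))  # 1
```

The same extract/transform/load split carries over to PySpark, where `transform` becomes DataFrame operations and rejected rows are typically written to a quarantine location for monitoring.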
Required Skills:
• Strong proficiency in Python
• Hands-on experience with PySpark and Apache Spark
• Experience building scalable distributed data systems
• Strong understanding of ETL design patterns
• Experience with SQL and data modeling
• Familiarity with Git-based workflows
Extensive experience working in AWS environments:
• S3
• EMR
• Glue
• Lambda
• IAM
• Redshift (preferred)
Qualifications:
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field
• 7+ years of data engineering experience
• Experience working in enterprise-scale environments






