

MM International, LLC
Big Data Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer in Richmond, VA / McLean, VA; the contract length and pay rate are unspecified. Key skills include Python and AWS Infrastructure, with experience in enterprise environments and big data pipelines required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 15, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
McLean, VA
🧠 - Skills detailed
#Data Warehouse #Data Processing #Lambda (AWS Lambda) #Data Engineering #S3 (Amazon Simple Storage Service) #Scala #Security #Data Architecture #Airflow #AWS (Amazon Web Services) #PySpark #Debugging #Cloud #Big Data #EC2 #Spark (Apache Spark) #Redshift #Python #Infrastructure as Code (IaC) #ETL (Extract, Transform, Load) #Data Lake #Datasets #IAM (Identity and Access Management) #SQL (Structured Query Language) #Terraform #Data Pipeline
Role description
Job Description
Job Title: Big Data Engineer
Location: Richmond, VA / McLean, VA (Onsite)
We are seeking a Big Data Engineer with strong hands-on experience in Python and AWS Infrastructure to support the design, development, and maintenance of scalable data platforms and cloud-based data solutions. The ideal candidate will have prior experience working in enterprise environments and building robust big data pipelines using AWS-native services and infrastructure best practices.
Responsibilities
• Design, build, and maintain scalable big data pipelines and cloud-based data platforms on AWS.
• Develop and optimize data processing solutions using Python for large and complex datasets (a minimal PySpark sketch follows this list).
• Work with AWS infrastructure components such as S3, EMR, Glue, Redshift, EC2, IAM, and related cloud services.
• Support data lake, ETL, and distributed data processing workflows in a production environment.
• Monitor data pipeline performance, troubleshoot issues, and ensure system reliability, scalability, and security.
• Collaborate with cross-functional teams including data engineers, architects, developers, and business stakeholders to deliver data solutions.
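To make the pipeline bullets above concrete, the following is a minimal PySpark sketch, assuming a CSV-to-Parquet flow between two S3 buckets; the bucket names, paths, and column names (example-landing-bucket, order_id, order_ts) are hypothetical placeholders, not details from this posting.

  # Minimal PySpark ETL sketch: raw CSV on S3 -> cleaned, partitioned Parquet on S3.
  # All bucket, path, and column names are hypothetical placeholders.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("example-etl").getOrCreate()

  # Read raw CSV from a landing bucket, treating the first row as a header.
  raw = spark.read.csv(
      "s3://example-landing-bucket/orders/",
      header=True,
      inferSchema=True,
  )

  # Basic cleanup: drop rows missing the key, parse the timestamp,
  # and derive a date column to partition on.
  cleaned = (
      raw.dropna(subset=["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
  )

  # Write Parquet partitioned by date so downstream engines
  # (e.g. Redshift Spectrum or Athena) can prune partitions.
  (cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))

  spark.stop()

On EMR, a job like this would typically be submitted with spark-submit as an EMR step; the same code runs unchanged against a local SparkSession for testing.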
Required Skills
• Strong experience with Python development for data engineering and large-scale data processing.
• Hands-on experience with AWS Infrastructure and AWS cloud services.
• Experience building and supporting big data or data warehouse/data lake solutions.
• Good understanding of ETL, distributed systems, and cloud-based data architecture.
• Experience with AWS services such as S3, EMR, Glue, Redshift, EC2, IAM, Lambda, or CloudWatch is preferred (a short Lambda sketch follows this list).
• Strong problem-solving, debugging, and performance tuning skills.
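As one concrete example of the event-driven side of these services, here is a minimal AWS Lambda handler sketch, assuming the function is wired to S3 ObjectCreated notifications; it simply logs object metadata, which Lambda forwards to CloudWatch Logs automatically. The event wiring and names are assumptions, not details from this role.

  # Minimal Lambda handler sketch for S3 "ObjectCreated" events.
  # Assumes an S3 event notification is configured to invoke this function.
  import json
  import logging

  logger = logging.getLogger()
  logger.setLevel(logging.INFO)

  def handler(event, context):
      # An S3 event delivers one or more records, each describing a new object.
      records = event.get("Records", [])
      for record in records:
          bucket = record["s3"]["bucket"]["name"]
          key = record["s3"]["object"]["key"]
          size = record["s3"]["object"].get("size", 0)
          # These log lines land in CloudWatch Logs with no extra setup.
          logger.info("New object: s3://%s/%s (%d bytes)", bucket, key, size)
      return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}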
Must Have
• Python
• AWS Infrastructure
• Experience working in enterprise-level environments, preferably gained with major end clients.
Preferred Qualifications
• Experience with Spark, PySpark, Airflow, SQL, or data pipeline orchestration tools (an Airflow sketch follows this list).
• Experience with Infrastructure as Code tools such as Terraform or CloudFormation.
• Prior experience in financial services or other large-scale enterprise environments is a plus.
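For the orchestration item above, here is a minimal Airflow DAG sketch showing a daily schedule around a single Python task; the DAG id, task id, and callable body are hypothetical, and the schedule argument assumes Airflow 2.4+ (older versions use schedule_interval).

  # Minimal Airflow DAG sketch: one daily ETL task.
  # DAG id, task id, and the callable body are hypothetical placeholders.
  from datetime import datetime

  from airflow import DAG
  from airflow.operators.python import PythonOperator

  def run_etl(**context):
      # Placeholder for the real pipeline step, e.g. submitting the PySpark
      # job above to EMR or starting a Glue job via boto3.
      print(f"Running ETL for logical date {context['ds']}")

  with DAG(
      dag_id="example_daily_etl",
      start_date=datetime(2026, 1, 1),
      schedule="@daily",  # schedule_interval on Airflow < 2.4
      catchup=False,
  ) as dag:
      PythonOperator(task_id="run_etl", python_callable=run_etl)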
Equal Opportunity Statement
We are committed to diversity and inclusivity.






