

Python Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Python Developer; the contract length and pay rate are not specified. Key skills include Python, PySpark, SQL, and AWS services. Experience in cloud-native and big data environments is preferred.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
June 17, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Reston, VA
Skills detailed
#Infrastructure as Code (IaC) #Cloud #Version Control #Lambda (AWS Lambda) #GitLab #SQL (Structured Query Language) #Scala #Python #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Spark (Apache Spark) #Terraform #Data Science #Big Data #SNS (Simple Notification Service) #PySpark #ETL (Extract, Transform, Load) #Data Engineering #Data Pipeline #Data Manipulation #SQL Queries #DevOps #SQS (Simple Queue Service)
Role description
Job Summary:
We are seeking a highly skilled Data Engineer with a strong background in Python, PySpark, and SQL to join our growing team. The ideal candidate will bring hands-on expertise in developing and deploying data solutions using core AWS services. This role is critical for designing scalable data pipelines, optimizing data workflows, and supporting our cloud-based infrastructure.
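As a concrete illustration of the pipeline work described above, here is a minimal PySpark sketch of an extract-transform-load job against S3. The bucket, paths, and column names are hypothetical placeholders, not details of this engagement:

```python
# Minimal ETL sketch: read raw events from S3, aggregate, write curated output.
# All paths and columns below are assumptions for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: raw JSON events (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: basic filtering plus a daily aggregate.
daily = (
    raw.filter(F.col("status") == "completed")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(F.count("*").alias("orders"),
            F.sum("amount").alias("revenue"))
)

# Load: partitioned Parquet for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders_daily/"
)
```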
Key Responsibilities:
• Develop, test, and maintain robust data pipelines using Python and PySpark.
• Design and implement scalable solutions using AWS services including Lambda, S3, Glue, EMR, Step Functions, SNS, and SQS (see the Lambda sketch after this list).
• Write complex SQL queries to extract, transform, and analyze data across multiple sources.
• Leverage GitLab for source control and CI/CD processes.
• Use Terraform for infrastructure as code and environment provisioning.
• Collaborate with cross-functional teams including data scientists, analysts, and DevOps engineers.
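For the AWS bullet above, one representative (and purely hypothetical) pattern is a Lambda handler that consumes SQS messages and fans results out via SNS. The topic ARN, environment variable, and message shape below are assumptions, not requirements from this posting:

```python
# Hypothetical Lambda handler: process SQS records, publish summaries to SNS.
import json
import os

import boto3

sns = boto3.client("sns")
# Assumed environment variable and topic ARN for illustration.
TOPIC_ARN = os.environ.get(
    "RESULT_TOPIC_ARN",
    "arn:aws:sns:us-east-1:123456789012:example-topic",
)

def handler(event, context):
    """Process each SQS record and publish a summary event to SNS."""
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # ... domain-specific processing would go here ...
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps({"processed_id": payload.get("id")}),
        )
    # Empty list signals full-batch success when partial batch
    # responses (ReportBatchItemFailures) are enabled on the trigger.
    return {"batchItemFailures": []}
```

In practice, Glue, EMR, and Step Functions typically orchestrate the heavier PySpark jobs around handlers like this, though the exact wiring varies by team.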
Required Qualifications:
• Strong proficiency in Python and PySpark for data engineering tasks.
• Advanced SQL skills for data manipulation and transformation (see the query sketch after this list).
• Hands-on experience with AWS services: Lambda, S3, Glue, EMR, Step Functions, SNS, and SQS.
• Experience with version control systems, preferably GitLab.
• Familiarity with infrastructure as code using Terraform.
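As a small example of the SQL-centric transformation work called for above, this hypothetical snippet runs a window-function query through PySpark's SQL interface; the table and column names are placeholders:

```python
# Hypothetical example: deduplicate to each customer's most recent order
# using a window function via Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/curated/orders/")
orders.createOrReplaceTempView("orders")

latest = spark.sql("""
    SELECT *
    FROM (
        SELECT o.*,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id
                   ORDER BY created_at DESC
               ) AS rn
        FROM orders o
    ) AS t
    WHERE rn = 1
""")
latest.show()
```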
Preferred Qualifications:
• Experience working in a cloud-native environment.
• Strong problem-solving skills and the ability to work independently or as part of a team.
• Background in big data and distributed systems is a plus.
We are an Equal Opportunity Employer committed to a diverse and inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, gender identity, national origin, disability, or veteran status. We value diverse perspectives and actively seek to create an inclusive environment that celebrates the unique qualities of all employees.