

Santcore Technologies
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (Remote, Northern Virginia) with a contract length of "unknown" and a pay rate of "unknown." It requires USC or GC status, a Bachelor's degree with eight years of experience (or a Master's with six), and proficiency in Python, PySpark, and AWS services.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
October 23, 2025
-
Duration
Unknown
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Northern Virginia, VA
-
Skills detailed
#Deployment #ETL (Extract, Transform, Load) #Agile #AWS Glue #Big Data #Apache Spark #PySpark #Monitoring #DynamoDB #Computer Science #Kafka (Apache Kafka) #Data Modeling #Data Engineering #Scrum #Cloud #Data Processing #Python #Lambda (AWS Lambda) #Oracle #Redshift #Scala #Continuous Deployment #Kanban #Hadoop #Spark (Apache Spark) #Data Manipulation #Data Science #SQL (Structured Query Language) #Datasets #Data Integrity #Database Performance #Schema Design #AWS (Amazon Web Services)
Role description
Senior Data Engineer
Location: Northern Virginia (Remote)
Status: USC or GC required
Responsibilities include:
• Design, develop, and implement robust ETL solutions using Python and PySpark to extract, transform, and load data from various sources into AWS data services.
• Optimize ETL processes for performance and scalability utilizing AWS Glue, EMR, Step Functions, and Lambda to ensure efficient data processing and timely delivery.
• Ensure data integrity and quality throughout the ETL process by implementing thorough data validation checks and error-handling mechanisms.
• Manage AWS services such as Glue, EMR, Step Functions, and Lambda, including configuration, monitoring, and troubleshooting to maintain operational excellence.
• Collaborate with cross-functional teams, including data engineers, data scientists, and business stakeholders, to understand data requirements and deliver tailored ETL solutions.
• Troubleshoot complex technical issues and provide advanced operational support to internal MITRE customers in AWS.
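The validation and error-handling responsibility above can be sketched in plain Python. This is a minimal, self-contained illustration only: the schema, field names, and `validate_record` helper are hypothetical, and a real pipeline at this level would express the same routing as PySpark transformations running on AWS Glue or EMR.

```python
# Minimal sketch of row-level validation in an ETL step (stdlib only;
# schema and helper names are hypothetical, not from the actual pipeline).
from dataclasses import dataclass, field

REQUIRED_FIELDS = ("id", "timestamp", "amount")  # hypothetical schema


@dataclass
class ValidationResult:
    valid: list = field(default_factory=list)
    rejected: list = field(default_factory=list)  # quarantined for review


def validate_record(record: dict) -> list[str]:
    """Return a list of error messages; an empty list means the record is clean."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in record]
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        errors.append("amount is not numeric")
    return errors


def run_validation(records: list[dict]) -> ValidationResult:
    result = ValidationResult()
    for record in records:
        errors = validate_record(record)
        if errors:
            # Route bad rows to a dead-letter set instead of failing the whole job.
            result.rejected.append({"record": record, "errors": errors})
        else:
            result.valid.append(record)
    return result


rows = [
    {"id": 1, "timestamp": "2025-10-23T00:00:00Z", "amount": 9.5},
    {"id": 2, "timestamp": "2025-10-23T00:01:00Z"},  # missing amount
]
outcome = run_validation(rows)
print(len(outcome.valid), len(outcome.rejected))  # 1 valid, 1 rejected
```

Quarantining failed rows rather than aborting is one common way to keep delivery timely while preserving data integrity; the rejected set can be written to a separate location for inspection.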
Required Qualifications:
• Bachelor's degree with eight years' related experience, or a master's degree with six years' related experience, preferably in a technical major such as engineering or computer science.
• Self-motivated, curious, and collaborative, with a passion for learning new technologies and developing new skills.
• Demonstrated experience developing ETL pipelines using Python and PySpark, with a strong understanding of data processing techniques.
• Expertise in SQL for data manipulation, querying, and optimization across database platforms including Postgres, DynamoDB, Oracle, and Redshift.
• Hands-on experience with AWS Glue, EMR, Step Functions, and Lambda for building and orchestrating ETL workflows in a cloud environment.
• Experience implementing Continuous Integration/Continuous Deployment (CI/CD) pipelines using AWS CDK or similar tools to automate deployment and testing of ETL solutions.
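The SQL expertise described above is the kind of aggregate-and-filter rollup shown below. This sketch uses Python's built-in `sqlite3` purely as a self-contained stand-in for Postgres or Redshift; the `orders` table and its columns are hypothetical examples, not part of the role.

```python
# Small SQL sketch using sqlite3 as a stand-in for Postgres/Redshift.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
    INSERT INTO orders (customer, amount) VALUES
        ('acme', 120.0), ('acme', 80.0), ('globex', 50.0);
""")

# Aggregation with a filter on the aggregate -- a typical ETL-style rollup.
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('acme', 200.0)]
conn.close()
```

Note that filtering on an aggregate must go in `HAVING`, not `WHERE`, since `WHERE` is applied before grouping.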
Preferred Qualifications:
• Previous experience leading or mentoring a team of developers/engineers in a collaborative environment.
• AWS certifications such as AWS Certified Developer or AWS Certified Solutions Architect, demonstrating proficiency in AWS services and best practices.
• Familiarity with big data technologies such as Apache Spark, Hadoop, or Kafka for processing large-scale datasets.
• Experience in data modeling and schema design to optimize database performance and scalability.
• Experience working in Agile development methodologies, such as Scrum or Kanban, for iterative and collaborative project delivery.






