Santcore Technologies

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Remote, Northern Virginia) with a contract length of "unknown" and a pay rate of "unknown." It requires US citizenship or a green card, a Bachelor's degree with eight years of experience or a Master's with six, and proficiency in Python, PySpark, and AWS services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 23, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Northern Virginia, VA
-
🧠 - Skills detailed
#Deployment #"ETL (Extract #Transform #Load)" #Agile #AWS Glue #Big Data #Apache Spark #PySpark #Monitoring #DynamoDB #Computer Science #Kafka (Apache Kafka) #Data Modeling #Data Engineering #Scrum #Cloud #Data Processing #Python #Lambda (AWS Lambda) #Oracle #Redshift #Scala #Continuous Deployment #Kanban #Hadoop #Spark (Apache Spark) #Data Manipulation #Data Science #SQL (Structured Query Language) #Datasets #Data Integrity #Database Performance #Schema Design #AWS (Amazon Web Services)
Role description
Senior Data Engineer
Location: Northern Virginia (Remote)
Status: USC or GC required

Responsibilities include:
• Design, develop, and implement robust ETL solutions using Python and PySpark to extract, transform, and load data from various sources into AWS data services (a minimal PySpark sketch appears after the qualifications below).
• Optimize ETL processes for performance and scalability using AWS Glue, EMR, Step Functions, and Lambda to ensure efficient data processing and timely delivery.
• Ensure data integrity and quality throughout the ETL process by implementing thorough data validation checks and error-handling mechanisms (see the validation sketch below).
• Manage AWS services such as Glue, EMR, Step Functions, and Lambda, including configuration, monitoring, and troubleshooting, to maintain operational excellence.
• Collaborate with cross-functional teams, including data engineers, data scientists, and business stakeholders, to understand data requirements and deliver tailored ETL solutions.
• Troubleshoot complex technical issues and provide advanced operational support to internal MITRE customers in AWS.

Required Qualifications:
• Bachelor's degree with eight years' related experience, or a master's degree with six years' related experience, preferably in a technical major such as engineering or computer science.
• Self-motivated, curious, and collaborative, with a passion for learning new technologies and developing new skills.
• Demonstrated experience developing ETL pipelines using Python and PySpark, with a strong understanding of data processing techniques.
• Expertise in SQL for data manipulation, querying, and optimization across database platforms including Postgres, DynamoDB, Oracle, and Redshift.
• Hands-on experience with AWS Glue, EMR, Step Functions, and Lambda for building and orchestrating ETL workflows in a cloud environment.
• Experience implementing Continuous Integration/Continuous Deployment (CI/CD) pipelines using AWS CDK or similar tools to automate deployment and testing of ETL solutions (see the CDK sketch below).

Preferred Qualifications:
• Previous experience leading or mentoring a team of developers/engineers in a collaborative environment.
• AWS certifications such as AWS Certified Developer or AWS Certified Solutions Architect, demonstrating proficiency in AWS services and best practices.
• Familiarity with big data technologies such as Apache Spark, Hadoop, or Kafka for processing large-scale datasets.
• Experience in data modeling and schema design to optimize database performance and scalability.
• Experience working in Agile development methodologies, such as Scrum or Kanban, for iterative and collaborative project delivery.
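To make the first responsibility concrete, here is a minimal PySpark ETL sketch. It is illustrative only, not code from the employer: the bucket names, columns, and paths are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch: extract CSV, transform, load as Parquet.
# All paths, column names, and buckets are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV from S3 (hypothetical bucket/prefix)
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: normalize types, drop obviously bad rows, stamp a load date
clean = (
    raw.withColumn("order_total", F.col("order_total").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet to the curated zone (hypothetical path)
clean.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```

A script in this shape would typically run as an AWS Glue or EMR job in production, with the same code exercised locally via spark-submit during development.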
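The data-integrity responsibility usually amounts to validation gates between the extract and load steps. Below is one rough sketch of such a gate in PySpark; the column names and null-fraction threshold are invented for illustration.

```python
# Sketch of a row-level validation gate for a PySpark ETL pipeline.
# Required columns and the null-fraction threshold are illustrative.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def validate(df: DataFrame, required_cols, max_null_fraction=0.01) -> DataFrame:
    """Fail fast if required columns are missing or too sparsely populated."""
    missing = [c for c in required_cols if c not in df.columns]
    if missing:
        raise ValueError(f"Missing required columns: {missing}")
    total = df.count()
    for c in required_cols:
        nulls = df.filter(F.col(c).isNull()).count()
        if total and nulls / total > max_null_fraction:
            raise ValueError(f"Column {c!r}: {nulls}/{total} nulls exceeds threshold")
    return df
```

Called between extract and transform, e.g. `clean = transform(validate(raw, ["order_id", "order_total"]))`, so that a bad batch aborts loudly instead of loading silently corrupted data.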
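For the CI/CD qualification, one common pattern is to express the deployment as an AWS CDK (v2, Python) stack that provisions the Glue job itself, with a CI pipeline running `cdk deploy` on merge. The following is a minimal sketch under those assumptions; the stack name, script location, and broad managed policy are placeholders, not the employer's setup.

```python
# Minimal AWS CDK (v2, Python) sketch provisioning a PySpark Glue job.
# Stack name, script location, and IAM policy choice are assumptions.
import aws_cdk as cdk
from aws_cdk import aws_glue as glue, aws_iam as iam
from constructs import Construct

class EtlStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Service role the Glue job assumes (broad managed policy for brevity)
        role = iam.Role(
            self, "GlueJobRole",
            assumed_by=iam.ServicePrincipal("glue.amazonaws.com"),
            managed_policies=[
                iam.ManagedPolicy.from_aws_managed_policy_name(
                    "service-role/AWSGlueServiceRole"
                )
            ],
        )

        # L1 construct for a PySpark Glue job; script path is hypothetical
        glue.CfnJob(
            self, "OrdersEtlJob",
            role=role.role_arn,
            command=glue.CfnJob.CommandProperty(
                name="glueetl",
                python_version="3",
                script_location="s3://example-artifacts-bucket/etl/orders_etl.py",
            ),
            glue_version="4.0",
        )

app = cdk.App()
EtlStack(app, "EtlStack")
app.synth()
```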