Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Candidates must be proficient in Python, SQL, and AWS services, and must have experience with trading data and big data technologies.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 19, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
Malvern, PA
🧠 - Skills detailed
#Presto #Normalization #Data Pipeline #Spark (Apache Spark) #Programming #Schema Design #Pandas #Visualization #PostgreSQL #S3 (Amazon Simple Storage Service) #Tableau #Data Engineering #NoSQL #AWS (Amazon Web Services) #Data Manipulation #Libraries #Redshift #Databases #SQS (Simple Queue Service) #RDS (Amazon Relational Database Service) #Distributed Computing #Unit Testing #DynamoDB #IAM (Identity and Access Management) #Athena #Cloud #Python #ETL (Extract, Transform, Load) #Big Data #NumPy #SNS (Simple Notification Service) #Lambda (AWS Lambda) #VPC (Virtual Private Cloud) #SQL (Structured Query Language) #Amazon Redshift #PySpark
Role description
Title: Senior Data Engineer
Location: Charlotte, NC; Dallas, TX; or Malvern, PA
Note: This role requires candidates to attend an onsite interview in Charlotte, NC; Dallas, TX; or Malvern, PA.

Job Description

Key Skills:
• Python
• SQL
• AWS – Lambda, Glue/PySpark, ECS

Qualifications:
• Proficiency in Python programming
• Strong expertise in SQL, Presto, Hive, and Spark
• Knowledge of trading and investment data
• Experience with big data technologies such as Spark, including developing distributed computing applications using PySpark
• Experience with libraries for data manipulation and analysis, such as Pandas, Polars, and NumPy
• Understanding of data pipelines, ETL processes, and data warehousing concepts
• Strong experience in building and orchestrating data pipelines
• Experience in building APIs
• Write, maintain, and execute automated unit tests using Python
• Follow Test-Driven Development (TDD) practices in all stages of software development
• Extensive experience with key AWS services/components, including EMR, Lambda, Glue ETL, Step Functions, S3, ECS, Kinesis, IAM, RDS PostgreSQL, DynamoDB, time-series databases, CloudWatch Events/EventBridge, Athena, SNS, SQS, and VPC
• Proficiency in developing serverless architectures using AWS services
• Experience with both relational and NoSQL databases
• Skill in designing and implementing data models, including normalization, denormalization, and schema design
• Knowledge of data warehousing solutions such as Amazon Redshift
• Strong analytical skills with the ability to troubleshoot data issues
• Good understanding of source control, unit testing, test-driven development, and CI/CD
• Ability to write clean, maintainable code and to comprehend code written by others
• Strong communication skills
• Proficiency in data visualization tools and the ability to create visual representations of data, particularly using Tableau