Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract (with potential extension) in Charlotte, NC (Hybrid). Key skills include Python, SQL, AWS (Lambda, Glue/PySpark), and big data technologies. Knowledge of trading data and data warehousing is required.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 19, 2025
πŸ•’ - Project duration
More than 6 months
🏝️ - Location type
Hybrid
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Presto #Normalization #Data Pipeline #Spark (Apache Spark) #Programming #Schema Design #Pandas #Visualization #PostgreSQL #S3 (Amazon Simple Storage Service) #Tableau #Data Engineering #NoSQL #AWS (Amazon Web Services) #Data Manipulation #Libraries #Redshift #Databases #SQS (Simple Queue Service) #RDS (Amazon Relational Database Service) #Distributed Computing #Unit Testing #DynamoDB #Scala #IAM (Identity and Access Management) #Athena #Cloud #Python #ETL (Extract, Transform, Load) #Big Data #NumPy #SNS (Simple Notification Service) #Lambda (AWS Lambda) #VPC (Virtual Private Cloud) #SQL (Structured Query Language) #Amazon Redshift #PySpark
Role description
Sr. Data Engineer
Charlotte, NC (Hybrid), locals only
Visa: GC, GC EAD, H4 EAD, and USC
6-month contract, potential extension
Level 4 Data Engineer. Key skills: Python, SQL, AWS (Lambda, Glue/PySpark, ECS).
One round of interviews (onsite in Charlotte). They want to have someone selected by the end of next week.
Responsibilities
• As an Engineer on the full-stack team, you will be dedicated to helping design, implement, and maintain a modern, robust, and scalable platform that will enable the Operational Risk team to meet the increasing demands from the various trading desks.
Qualifications
• Proficiency in Python programming
• Strong expertise in SQL, Presto, Hive, and Spark
• Knowledge of trading and investment data
• Experience with big data technologies such as Spark and with developing distributed computing applications using PySpark (see the pipeline sketch after this list)
• Experience with libraries for data manipulation and analysis, such as Pandas, Polars, and NumPy
• Understanding of data pipelines, ETL processes, and data warehousing concepts
• Strong experience in building and orchestrating data pipelines
• Experience in building APIs
• Ability to write, maintain, and execute automated unit tests using Python (see the test sketch after this list)
• Follow Test-Driven Development (TDD) practices in all stages of software development
• Extensive experience with key AWS services/components, including EMR, Lambda, Glue ETL, Step Functions, S3, ECS, Kinesis, IAM, RDS PostgreSQL, DynamoDB, a time-series database, CloudWatch Events/EventBridge, Athena, SNS, SQS, and VPC
• Proficiency in developing serverless architectures using AWS services
• Experience with both relational and NoSQL databases
• Skills in designing and implementing data models, including normalization, denormalization, and schema design
• Knowledge of data warehousing solutions like Amazon Redshift
• Strong analytical skills with the ability to troubleshoot data issues
• Good understanding of source control, unit testing, test-driven development, and CI/CD
• Ability to write clean, maintainable code and comprehend code written by others
• Strong communication skills
• Proficiency in data visualization tools and the ability to create visual representations of data, particularly using Tableau
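
To give a flavor of the Glue/PySpark pipeline work described above, here is a minimal sketch only; the bucket paths and column names (trade_id, quantity, price, trade_timestamp) are hypothetical and not part of the listing.

```python
# Minimal PySpark ETL sketch (illustrative; paths and columns are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trade-positions-etl").getOrCreate()

# Read raw trade data landed in S3.
trades = spark.read.parquet("s3://example-raw-bucket/trades/")

# Normalize and enrich: derive notional and a trade_date partition column,
# and drop duplicate trade records.
cleaned = (
    trades
    .withColumn("notional", F.col("quantity") * F.col("price"))
    .withColumn("trade_date", F.to_date("trade_timestamp"))
    .dropDuplicates(["trade_id"])
)

# Write curated output partitioned by trade_date for downstream querying
# (e.g., Athena or loading into Redshift).
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("s3://example-curated-bucket/trades/")
)
```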
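Likewise, a minimal sketch of the Python unit-testing and TDD requirement, using a pytest-style test against a small pandas transformation; the function and columns are hypothetical examples, not taken from the role.

```python
# Illustrative TDD-style unit test (hypothetical function and columns).
import pandas as pd
import pandas.testing as pdt


def add_notional(trades: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the trades frame with notional = quantity * price."""
    out = trades.copy()
    out["notional"] = out["quantity"] * out["price"]
    return out


def test_add_notional_computes_quantity_times_price():
    trades = pd.DataFrame({"quantity": [10, 5], "price": [100.0, 20.0]})
    result = add_notional(trades)
    expected = pd.Series([1000.0, 100.0], name="notional")
    pdt.assert_series_equal(result["notional"], expected)
```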