Data Engineer - W2 (Onsite Interview)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified, and the role requires onsite work in PA, TX, or NC. Key skills include Python, SQL, Spark, AWS services, and experience with trading data.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
September 3, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#SNS (Simple Notification Service) #IAM (Identity and Access Management) #Presto #AWS (Amazon Web Services) #Spark (Apache Spark) #Distributed Computing #ETL (Extract, Transform, Load) #Amazon Redshift #Libraries #PySpark #Pandas #Schema Design #Unit Testing #Visualization #PostgreSQL #Athena #Scala #Databases #Programming #RDS (Amazon Relational Database Service) #S3 (Amazon Simple Storage Service) #NumPy #Lambda (AWS Lambda) #NoSQL #Data Manipulation #SQL (Structured Query Language) #VPC (Virtual Private Cloud) #Data Engineering #DynamoDB #Data Pipeline #Big Data #SQS (Simple Queue Service) #Redshift #Normalization #Tableau #Cloud #Python
Role description
W2 only; onsite interview at any of the client locations (PA/TX/NC).

Job Duties / Responsibilities
As an Engineer on the full-stack team, you will help design, implement, and maintain a modern, robust, and scalable platform that enables the Operational Risk team to meet increasing demands from the various trading desks.

Job Requirements / Qualifications
- Proficiency in Python programming
- Strong expertise in SQL, Presto, Hive, and Spark
- Knowledge of trading and investment data
- Experience with big data technologies such as Spark, and in developing distributed computing applications using PySpark (see the PySpark sketch after this list)
- Experience with data manipulation and analysis libraries such as Pandas, Polars, and NumPy
- Understanding of data pipelines, ETL processes, and data warehousing concepts
- Strong experience building and orchestrating data pipelines
- Experience building APIs
- Ability to write, maintain, and execute automated unit tests using Python, following Test-Driven Development (TDD) practices in all stages of software development (a test-first example appears below)
- Extensive experience with key AWS services/components, including EMR, Lambda, Glue ETL, Step Functions, S3, ECS, Kinesis, IAM, RDS PostgreSQL, DynamoDB, a time-series database, CloudWatch Events/EventBridge, Athena, SNS, SQS, and VPC
- Proficiency in developing serverless architectures using AWS services (a minimal Lambda sketch follows the list)
- Experience with both relational and NoSQL databases
- Skill in designing and implementing data models, including normalization, denormalization, and schema design
- Knowledge of data warehousing solutions such as Amazon Redshift
- Strong analytical skills with the ability to troubleshoot data issues
- Good understanding of source control, unit testing, test-driven development, and CI/CD
- Ability to write clean, maintainable code and to comprehend code written by others
- Strong communication skills
- Proficiency in data visualization tools and the ability to create visual representations of data, particularly using Tableau
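To make the PySpark expectation concrete, here is a minimal distributed ETL sketch. The bucket, paths, and column names (price, quantity, desk, trade_ts) are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch: read raw trade records, aggregate,
# and write partitioned Parquet for downstream Presto/Athena queries.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trade-aggregation").getOrCreate()

# Extract: raw CSV landed in S3 (hypothetical bucket/prefix)
trades = spark.read.option("header", True).csv("s3://example-bucket/raw/trades/")

# Transform: cast types and compute daily notional per trading desk
daily = (
    trades
    .withColumn("notional",
                F.col("price").cast("double") * F.col("quantity").cast("long"))
    .groupBy("desk", F.to_date("trade_ts").alias("trade_date"))
    .agg(F.sum("notional").alias("total_notional"),
         F.count("*").alias("trade_count"))
)

# Load: partitioned Parquet in a curated zone (hypothetical path)
daily.write.mode("overwrite").partitionBy("trade_date") \
    .parquet("s3://example-bucket/curated/daily_notional/")
```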
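The unit-testing/TDD requirement might look like the following test-first sketch, using Pandas and NumPy with pytest. The add_notional function and its columns are illustrative assumptions, not part of the role description.

```python
# Test-first sketch of a small Pandas transformation.
import numpy as np
import pandas as pd

def add_notional(trades: pd.DataFrame) -> pd.DataFrame:
    """Return a copy with a notional = price * quantity column."""
    out = trades.copy()
    out["notional"] = (out["price"].to_numpy(dtype=np.float64)
                       * out["quantity"].to_numpy())
    return out

# Unit test (run with `pytest`): under TDD this is written before
# the implementation and pins down the expected behaviour.
def test_add_notional():
    trades = pd.DataFrame({"price": [101.5, 99.0], "quantity": [10, 20]})
    result = add_notional(trades)
    assert result["notional"].tolist() == [1015.0, 1980.0]
    assert "notional" not in trades.columns  # input is not mutated
```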
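Finally, a hedged sketch of the serverless pattern the posting asks for: an AWS Lambda handler that reacts to an S3 upload notification and records file metadata in DynamoDB via boto3. The event shape follows the standard S3 notification format; the trade-file-audit table is a hypothetical example.

```python
# AWS Lambda handler: audit each file landed in S3 so downstream
# Step Functions / Glue jobs can pick it up idempotently.
import urllib.parse
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("trade-file-audit")  # hypothetical table name

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 events are URL-encoded
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        table.put_item(Item={
            "object_key": key,
            "bucket": bucket,
            "size_bytes": record["s3"]["object"].get("size", 0),
        })
    return {"processed": len(records)}
```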