

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Key skills include AWS services, Python, Spark, and shell scripting. Experience in investment/finance is preferred. A bachelor's degree in computer science is required; a master's degree is preferred.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: September 26, 2025
Project duration: Unknown
Location type: Unknown
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Charlotte, NC
Skills detailed:
#Datasets #Spark (Apache Spark) #Data Storage #Data Processing #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Amazon EMR (Amazon Elastic MapReduce) #Big Data #ETL (Extract, Transform, Load) #ML (Machine Learning) #Lambda (AWS Lambda) #Scripting #Databases #Data Ingestion #Monitoring #Data Lake #AWS Lambda #Python #AWS Glue #Linux #Data Pipeline #Scala #Computer Science #Redshift #Data Science #Documentation #Shell Scripting #Data Engineering #Storage
Role description
This is a W2 role.
Responsibilities
• The person in this role will play a crucial part in building scalable and cost-effective data pipelines, data lakes, and analytics systems.
• Data Ingestion: Implement data ingestion processes to collect data from various sources, including databases, streaming data, and external APIs.
• Data Transformation: Develop ETL (Extract, Transform, Load) processes to transform and cleanse raw data into a structured, usable format for analysis (a minimal PySpark sketch follows this list).
• Data Storage: Manage and optimize data storage solutions, including Amazon S3, Redshift, and other AWS storage services.
• Data Processing: Use AWS services such as AWS Glue, Amazon EMR, and AWS Lambda to process and analyze large datasets (see the Lambda sketch after this list).
• Data Monitoring and Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, cost-efficiency, and scalability.
• Integration: Collaborate with data scientists, analysts, and other stakeholders to integrate AWS-based solutions into data analytics and reporting platforms.
• Documentation: Maintain thorough documentation of data engineering processes, data flows, and system configurations.
• Scalability: Design AWS-based solutions that scale to accommodate growing data volumes and changing business requirements.
• Cost Management: Implement cost-effective solutions by optimizing resource usage and recommending cost-saving measures.
• Troubleshooting: Diagnose and resolve AWS-related issues to minimize downtime and disruptions.
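To make the ingestion and transformation duties concrete, here is a minimal PySpark ETL sketch of the kind this role would own. It assumes a Spark runtime such as AWS Glue or Amazon EMR; the bucket names, paths, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal PySpark ETL sketch: read raw CSV data from S3, cleanse it,
# and write a partitioned Parquet dataset back to S3.
# Assumes a Spark runtime (AWS Glue or EMR); all names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: raw CSV dropped into a landing bucket (hypothetical path).
raw = spark.read.option("header", True).csv("s3://example-landing/trades/")

# Transform: drop incomplete rows, normalize types, derive a partition column.
clean = (
    raw.dropna(subset=["trade_id", "trade_date", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("trade_date", F.to_date("trade_date"))
       .withColumn("year", F.year("trade_date"))
)

# Load: columnar, partitioned output that Redshift Spectrum or Athena can query.
clean.write.mode("overwrite").partitionBy("year").parquet(
    "s3://example-curated/trades/"
)
```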
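For the event-driven side of data processing, a common pattern is an AWS Lambda function triggered by S3 object-created events. The sketch below is illustrative only: the bucket contents and downstream action are assumptions, while the event structure and boto3 calls are standard.

```python
# Hedged sketch of an S3-triggered AWS Lambda handler: for each new object,
# read it and record a row count. The assumed CSV layout and what you do
# with the result are placeholders; error handling is trimmed for brevity.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        body = obj["Body"].read().decode("utf-8")
        # Count data rows, excluding a header line (assumed CSV layout).
        rows = max(len(body.splitlines()) - 1, 0)
        results.append({"key": key, "rows": rows})
    return {"statusCode": 200, "body": json.dumps(results)}
```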
Qualifications
Must have:
• Strong experience with AWS services on big data platforms.
• Strong Python experience building Dash applications in AWS (a minimal sketch follows this list).
• Spark
• Shell scripting and Linux knowledge
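Since the posting calls out building Dash applications in AWS, here is a minimal Dash app sketch, assuming the plotly/dash packages are installed; the dataset and layout are placeholders, and in practice such an app might run behind a load balancer on ECS or Elastic Beanstalk.

```python
# Minimal Dash app sketch (assumes `pip install dash pandas plotly`).
# The dataset and chart are placeholders, not details from the posting.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

df = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "volume": [120, 95, 143]})

app = Dash(__name__)
app.layout = html.Div(
    [
        html.H1("Trade Volume (placeholder data)"),
        dcc.Graph(figure=px.bar(df, x="month", y="volume")),
    ]
)

if __name__ == "__main__":
    app.run(debug=True)  # Dash >= 2.7; older versions use app.run_server
```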
Nice to have:
• Investment knowledge preferred; candidates who have worked in the investment/finance space are a plus.
• Machine learning skills
Education:
• Educational Background: A bachelor's degree in computer science, information technology, or a related field is typically required.
• A master's degree with a GPA of 3.8+ is preferred.
Problem-Solving:
Strong analytical and problem-solving skills to address complex data engineering challenges.