

Senior AWS Data Engineer - Contract - Charlotte, NC (Hybrid)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AWS Data Engineer on a long-term contract in Charlotte, NC (Hybrid). Key skills include ANSI SQL, Python 3.x, AWS Cloud, Kafka, and PySpark. Experience with scalable cloud solutions and data streaming is required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 14, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Charlotte, NC
Skills detailed: #SQL (Structured Query Language) #Spark (Apache Spark) #Code Reviews #SQL Queries #AWS (Amazon Web Services) #PySpark #Data Engineering #Scala #Compliance #Security #Python #Cloud #Kafka (Apache Kafka)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Lorven Technologies, Inc., is seeking the following. Apply via Dice today!
Hi,
Our client is looking for a Senior AWS Data Engineer for a long-term contract project in Charlotte, NC (Hybrid). Below is the detailed requirement.
Position: Senior AWS Data Engineer
Location: Charlotte, NC (Hybrid)
Duration: Long-Term Contract
Required Skills: ANSI SQL, Python 3.x, AWS Cloud, Kafka, PySpark
Responsibilities:
• Lead the design and implementation of scalable and efficient cloud-based solutions using AWS Cloud and PySpark.
• Develop highly scalable solutions using Spark, Spark SQL, and distributed processing techniques.
• Process Kafka streaming data, drawing on hands-on development experience with streaming pipelines (a minimal PySpark/Kafka sketch follows this list).
• Oversee the development and maintenance of Python applications, ensuring high performance and responsiveness.
• Provide technical guidance and mentorship to team members, fostering a collaborative and innovative environment.
• Develop and optimize SQL queries to ensure efficient data retrieval and manipulation.
• Collaborate with cross-functional teams to define, design, and deliver new features and enhancements.
• Ensure the security and integrity of data within cloud environments, adhering to best practices and compliance requirements.
• Conduct code reviews and provide constructive feedback to maintain high code quality standards.
• Troubleshoot and resolve complex technical issues, ensuring minimal disruption to operations.
• Stay updated with the latest industry trends and technologies, integrating them into the team's workflow where applicable.
• Drive continuous improvement initiatives to enhance system performance, reliability, and scalability.
• Coordinate with stakeholders to gather requirements and translate them into technical specifications.
• Manage project timelines and deliverables, ensuring timely and successful completion of tasks.
• Document technical processes, procedures, and best practices for future reference and training purposes.
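For orientation, the sketch below shows the kind of PySpark-plus-Kafka work the responsibilities describe: a Structured Streaming job that reads JSON events from a Kafka topic, parses them against a schema, and writes the results to a sink. This is a minimal illustration, not a detail of the client's system; the broker address, topic name, payload schema, and checkpoint path are hypothetical placeholders, and the job assumes the spark-sql-kafka connector is available.

# Minimal PySpark Structured Streaming sketch: read JSON events from Kafka,
# parse them, and write to a sink. All names below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = (
    SparkSession.builder
    .appName("kafka-stream-sketch")
    .getOrCreate()
)

# Hypothetical schema for the JSON payload carried on the Kafka topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

# Read the stream from Kafka (requires the spark-sql-kafka connector on the classpath).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Cast the binary value column to a string, parse the JSON, and flatten the fields.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(from_json(col("json"), event_schema).alias("e"))
    .select("e.*")
)

# Write the parsed events out; console sink here for illustration only.
query = (
    events.writeStream
    .format("console")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
    .start()
)
query.awaitTermination()

In the AWS setting this role describes, the console sink would typically be replaced by a Parquet writer targeting S3, with the job deployed through a managed Spark service such as EMR or Glue.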