

TalentBridge
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "$X per hour." Key skills include Python, PySpark, SQL, AWS services, and ETL solutions. Experience in automation and big data handling is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 24, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Data Engineering #Big Data #AWS (Amazon Web Services) #SQL (Structured Query Language) #Automation #Python #PySpark #Integration Testing #Infrastructure as Code (IaC) #Lambda (AWS Lambda) #Spark (Apache Spark) #Redshift #Security #IAM (Identity and Access Management)
Role description
Job Overview
Responsibilities:
• Develop and enhance ETL pipelines used by the UDP platform (see the sketch after this list)
• Contribute to the implementation of key requirements as defined in Phase 1 discovery.
• Automate implemented requirements as appropriate, including infrastructure as code and runbooks.
• Document processes and standard operating procedures.
• Ensure the platform meets defined requirements.
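For context on the ETL responsibility above, here is a minimal PySpark sketch of the kind of pipeline implied: read raw data, apply a simple transform, and write curated output. It is an illustration only; the bucket paths, column names, and app name are hypothetical and not taken from the role description.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal ETL sketch. All paths and column names are placeholders.
spark = SparkSession.builder.appName("udp-etl-sketch").getOrCreate()

# Extract: read raw Parquet records from a hypothetical landing bucket.
raw = spark.read.parquet("s3://example-landing-bucket/events/")

# Transform: drop rows without an event id and stamp a processing date.
curated = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("processed_date", F.current_date())
)

# Load: write the curated result to a hypothetical curated bucket.
curated.write.mode("overwrite").parquet("s3://example-curated-bucket/events/")

spark.stop()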
Skills/Technology Needed:
• Experienced with Python, PySpark and SQL
• Automation and CI/CD skills, including unit and integration testing
• Familiarity with orchestration and ETL solutions
• Hands-on experience with AWS Services (Redshift, Glue, LakeFormation, Lambda)
• Understanding of the AWS least-privilege security model and IAM (Identity and Access Management)
• Experience with Microsoft Office (Word, Excel, PowerPoint)
• Experience handling data in open table formats (Iceberg, Parquet, Delta); understanding of big data handling concepts such as buckets and partitioning
• (Lead) Understanding of PySpark and Glue job optimization techniques to improve operations (see the sketch after this list)
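To illustrate the open table format, bucketing, and partitioning points above, the sketch below shows two common PySpark patterns: writing Parquet output partitioned by a date column (with coalesce() to limit small files, a frequent Glue job optimization), and bucketing a table by a join key. Paths, table names, and columns are assumptions for the example, not details of the role.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("udp-partitioning-sketch").getOrCreate()

# Hypothetical curated input (same layout as the earlier ETL sketch).
df = spark.read.parquet("s3://example-curated-bucket/events/")

# Partitioning: lay files out by processed_date so queries can prune partitions;
# coalesce(8) caps the number of output files per run to avoid many small files.
(
    df.coalesce(8)
      .write.mode("overwrite")
      .partitionBy("processed_date")
      .parquet("s3://example-analytics-bucket/events_by_date/")
)

# Bucketing: cluster rows by a join key into a fixed number of buckets.
# Bucketed writes go through the catalog, hence saveAsTable rather than a raw path.
(
    df.write.mode("overwrite")
      .bucketBy(16, "customer_id")
      .sortBy("customer_id")
      .saveAsTable("analytics.events_bucketed")
)

spark.stop()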