Meritore Technologies

Lead Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer on a long-term contract in Charlotte, NC, offering a competitive pay rate. Key skills required include AWS, Snowflake, Python, PySpark, and MongoDB. Strong communication and leadership abilities are essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 9, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Data Pipeline #Scala #Deployment #Spark (Apache Spark) #Data Architecture #PySpark #Python #Snowflake #Data Modeling #Documentation #Data Ingestion #SQL (Structured Query Language) #Monitoring #MongoDB #Cloud #ETL (Extract, Transform, Load) #AWS Glue #AWS (Amazon Web Services) #IICS (Informatica Intelligent Cloud Services) #Informatica #Data Engineering #Scripting
Role description
We are looking for a Lead Data Engineer with strong communication skills and hands-on experience across Snowflake, AWS, Python, PySpark, MongoDB, and IICS. This role requires a technical leader who can guide a small engineering team while also building and optimizing scalable data pipelines in a cloud environment.
Long-term contract
Location: Charlotte, NC (4 days onsite)
Interviews are actively happening. If you are interested in this role, please share your updated resume to proceed further.
Responsibilities
• Lead and mentor a team of data engineers in day-to-day project delivery
• Design, build, and optimize ETL/ELT pipelines using AWS Glue, Python, PySpark, and Snowflake
• Work with business and technical stakeholders, deliver updates, and ensure smooth communication
• Develop and maintain data workflows using IICS (Informatica Intelligent Cloud Services)
• Manage data ingestion from multiple sources, including MongoDB and AWS services
• Perform data modeling, SQL scripting, and performance tuning in Snowflake
• Support deployment, monitoring, and troubleshooting of data pipelines
• Ensure best practices for code quality, documentation, and cloud data architecture