Compunnel Inc.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Charlotte, NC, onsite 3 days a week, on a long-term contract. Key skills include Python, PySpark, AWS services, SQL, and ETL development. Experience in data engineering and Agile methodologies is required.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
Unknown
-
πŸ—“οΈ - Date
November 7, 2025
πŸ•’ - Duration
Unknown
-
🏝️ - Location
On-site
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Spark (Apache Spark) #Databases #Scrum #"ETL (Extract, Transform, Load)" #DevOps #SQL (Structured Query Language) #Unit Testing #Data Profiling #Sybase #Lambda (AWS Lambda) #Jenkins #SQL Server #AWS (Amazon Web Services) #Programming #Agile #Batch #PySpark #Oracle #RDS (Amazon Relational Database Service) #Data Integration #BitBucket #Business Analysis #Data Engineering #Python #REST (Representational State Transfer) #Automation #REST API #Classification #Data Mining #Scala #Data Analysis #S3 (Amazon Simple Storage Service) #Data Manipulation #Informatica #GitHub #DataStage #Triggers #Data Extraction #Data Warehouse #MDM (Master Data Management) #Kafka (Apache Kafka) #Unix
Role description
Job Title: Data Engineer
Location: Charlotte, NC (Onsite, 3 days a week)
Duration: Long Term

Job Description:
• The primary role of the Data Engineer is to function as a critical member of a data team by designing data integration solutions that deliver business value in line with the company's objectives.
• They are responsible for the design and development of data/batch processing, data manipulation, data mining, and data extraction/transformation/loading into large data domains using Python/PySpark and AWS tools.

Responsibilities:
• Provide scoping, estimating, planning, design, development, and support services to a project.
• Identify and develop the technical detail design document.
• Work with developers and business areas to design, configure, deploy, and maintain custom ETL infrastructure to support project initiatives.
• Design and develop data/batch processing, data manipulation, data mining, and data extraction/transformation/loading (ETL pipelines) into large data domains.
• Document and present solution alternatives to clients that support business processes and business objectives.
• Work with business analysts to understand and prioritize user requirements.
• Design, develop, test, and implement application code.
• Follow proper software development lifecycle processes and standards.
• Perform quality analysis of the products; own defect tracking and classification.
• Track progress and intervene as needed to eliminate barriers and ensure delivery.
• Resolve or escalate problems and manage risk for both development and production support.
• Coordinate vendors and contractors for specific projects or systems.
• Maintain deep knowledge and awareness of industry best practices and trends in technology and methodologies.

Skills and Knowledge:
• Developer experience with a specific focus on data engineering.
• Hands-on development experience using Python and PySpark as an ETL tool (see the first sketch after this list).
• Experience with AWS services such as Glue, Lambda, MSK (Kafka), S3, Step Functions, RDS, and EKS (see the second sketch after this list).
• Experience with databases such as Postgres, SQL Server, Oracle, and Sybase.
• Experience with SQL database programming, SQL performance tuning, relational model analysis, queries, stored procedures, views, functions, and triggers.
• Strong technical experience in design (mapping specifications, HLD, LLD) and development (coding, unit testing).
• Knowledge of UNIX scripting and Oracle SQL/PL-SQL.
• Experience with data models, data mining, data analysis, and data profiling.
• Experience working with REST APIs.
• Experience with workload automation tools such as Control-M and AutoSys.
• Good knowledge of CI/CD DevOps processes and tools such as Bitbucket, GitHub, and Jenkins.
• Strong experience with Agile/Scrum methodology.
• Experience with other ETL tools (DataStage, Informatica, Pentaho, etc.).
• Knowledge of MDM, data warehousing, and data analytics.
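
For a sense of the core Python/PySpark skill the posting asks for, here is a minimal ETL sketch: read a raw batch file from S3, apply a couple of cleansing transforms, and load the result into Postgres over JDBC. The bucket, paths, table names, and credentials below are illustrative placeholders, not details from this posting.

```python
# Minimal PySpark ETL sketch: extract raw CSV from S3, transform, load to Postgres.
# All names (bucket, table, credentials) are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")  # placeholder job name
    .getOrCreate()
)

# Extract: raw batch files landed in S3 (requires hadoop-aws on the classpath).
raw = spark.read.csv("s3a://example-raw-bucket/orders/", header=True, inferSchema=True)

# Transform: basic cleansing and derivation, the kind of data manipulation the role describes.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: append to a Postgres table over JDBC (the Postgres JDBC driver must be available).
(clean.write
      .format("jdbc")
      .option("url", "jdbc:postgresql://example-host:5432/warehouse")
      .option("dbtable", "analytics.orders")
      .option("user", "etl_user")      # in practice, pull from a secrets manager
      .option("password", "********")
      .mode("append")
      .save())

spark.stop()
```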
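
And a second sketch for the event-driven side of the AWS stack (Glue + Lambda + S3): a hypothetical Lambda handler that starts a Glue job run via boto3 when a new object lands in S3. The job name and argument key are assumptions for illustration only.

```python
# Hypothetical AWS Lambda handler: trigger a Glue ETL job when a batch file lands in S3.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put events carry the bucket and key of the newly landed object.
    record = event["Records"][0]["s3"]
    input_path = f"s3://{record['bucket']['name']}/{record['object']['key']}"
    run = glue.start_job_run(
        JobName="orders-etl",  # placeholder Glue job name
        Arguments={"--input_path": input_path},
    )
    return {"JobRunId": run["JobRunId"]}
```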