Compunnel Inc.

API Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an API Data Engineer with 14+ years of experience, offering a contract position in Fort Mill, SC, or New York. Key skills include C#, .Net, Python, AWS, ETL, and data pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
November 1, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
New York, United States
🧠 - Skills detailed
#Data Science #Big Data #PySpark #AWS (Amazon Web Services) #Datasets #Scala #ETL (Extract, Transform, Load) #Snowflake #Airflow #API (Application Programming Interface) #Lambda (AWS Lambda) #C# #Spark (Apache Spark) #Data Access #Data Engineering #Data Pipeline #Aurora #Cloud #Python #.Net #SQL (Structured Query Language) #Compliance #Data Architecture
Role description
Job Title: API Data Engineer
Location: Fort Mill, SC, or New York – Hybrid
Type: Contract (C2C, W2)
Experience: 14+ years
As an API Data Engineer, you will collaborate with our users and other data product teams to understand their needs and build impactful data/analytics solutions. You will design and build APIs and data pipelines to support applications and data science projects, following software engineering best practices.
• Build and maintain RESTful APIs using C#, .Net, and Python on AWS services such as EKS and Lambda to enable secure and scalable data access.
• Design and implement secure, scalable RESTful APIs to expose data services for internal and external consumption.
• Build stored procedures using AWS Aurora Postgres.
• Integrate APIs with cloud-native data platforms and ensure performance, reliability, and compliance with enterprise standards.
• Implement robust ETL workflows and optimize data infrastructure using SQL and AWS big data technologies.
• Collaborate with data architects, analytics teams, and other developers to align API and data solutions with enterprise standards and best practices.
• Construct cloud-native data pipelines using Airflow, Glue, and PySpark for efficient data transformation and delivery.
• Design and develop data applications using Snowflake and Spark to ingest, process, and analyze large datasets.
• Ensure high code quality by adding unit tests and following TDD (Test-Driven Development) to achieve strong unit test coverage.