

Data Engineer / Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer/Developer in New York, NY (Hybrid) for 12-24+ months; the pay rate is not specified. Requires 8+ years in data acquisition, SQL, and programming (Python, Java, .NET, C++). Experience with Snowflake, Azure, and DevOps tools is desired.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 14, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: New York, NY
Skills detailed: #Databases #Deployment #Shell Scripting #Data Pipeline #SQL (Structured Query Language) #Data Architecture #Perl #XML (eXtensible Markup Language) #Azure cloud #Python #Linux #GraphQL #Kubernetes #Microservices #SQL Queries #dbt (data build tool) #ML (Machine Learning) #NoSQL #Data Engineering #Snowflake #Cloud #YAML (YAML Ain't Markup Language) #Data Extraction #C++ #Docker #GitLab #Azure #ETL (Extract, Transform, Load) #Scripting #.Net #REST (Representational State Transfer) #AI (Artificial Intelligence) #Terraform #DevOps #API (Application Programming Interface) #REST API #Java #Data Ingestion #Normalization #Data Science #Big Data
Role description
Title: Data Engineer / Developer
Location: New York, NY (Hybrid)
Duration: 12-24+ Months
Key Responsibilities
The primary duties are to work in a team of data engineers and data architects to acquire data from internal and external sources, ingest it into our repositories, and optimize it for consumption by end users (a minimal sketch of such a pipeline follows the list below).
• Implementing methods to improve data reliability and quality.
• Combining raw information from different sources into consistent formats.
• Developing and testing architectures for data extraction and transformation.
• Building and maintaining optimal data pipeline architectures.
• Assembling large, complex data sets using Big Data technologies.
• Ensuring that data is readily available to data scientists, analysts, and other users.
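For illustration only, here is a minimal Python sketch of the extract-transform-load flow these duties describe. The endpoint, table name, and field names are hypothetical assumptions, not details from this posting:

```python
import json
import sqlite3
import urllib.request

# Hypothetical source endpoint and local warehouse; both are
# illustrative assumptions, not details from the posting.
SOURCE_URL = "https://api.example.com/v1/trades"
DB_PATH = "warehouse.db"

def extract(url: str) -> list[dict]:
    """Pull raw records from an upstream REST source."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def transform(records: list[dict]) -> list[tuple]:
    """Normalize heterogeneous raw records into one consistent format."""
    return [
        (str(r.get("id")), r.get("symbol", "").upper(), float(r.get("price", 0.0)))
        for r in records
    ]

def load(rows: list[tuple], db_path: str) -> None:
    """Ingest normalized rows into the repository for end-user consumption."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS trades (id TEXT PRIMARY KEY, symbol TEXT, price REAL)"
        )
        conn.executemany("INSERT OR REPLACE INTO trades VALUES (?, ?, ?)", rows)

if __name__ == "__main__":
    load(transform(extract(SOURCE_URL)), DB_PATH)
```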
Key Skills:
• 8+ years of experience developing data acquisition, data ingestion, data transformation, file handling, encryption, SFTP, ETL, and ELT workloads, both on-premises and in the cloud (see the SFTP sketch after this list).
• 8+ years of experience writing SQL queries and in data warehousing: normalization, query optimization, stored procedures, views, etc.
• 8+ years of experience with Python, Java, .NET, or C++ to build scripts, data movement applications, and integration APIs.
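As a hedged illustration of the file-handling and SFTP acquisition work named above, a minimal Python sketch using the third-party paramiko library; the host, credentials, and paths are placeholder assumptions:

```python
import paramiko

# Placeholder connection details; real values would come from a secrets store.
HOST, PORT = "sftp.example.com", 22
USERNAME, PASSWORD = "svc_ingest", "change-me"
REMOTE_PATH = "/outbound/positions_20250614.csv"
LOCAL_PATH = "landing/positions_20250614.csv"

def fetch_file() -> None:
    """Acquire one file from an external SFTP source into the landing zone."""
    transport = paramiko.Transport((HOST, PORT))
    try:
        transport.connect(username=USERNAME, password=PASSWORD)
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.get(REMOTE_PATH, LOCAL_PATH)  # download remote file to local path
        sftp.close()
    finally:
        transport.close()

if __name__ == "__main__":
    fetch_file()
```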
Desired Qualifications:
• Experience in using:
  • Snowflake.
  • APIs, containerization, and Kubernetes.
  • DevOps: GitLab CI/CD pipelines (both YAML- and UI-based), GitLab runners, and Terraform for CI/CD.
  • SnapLogic.
  • Perl/XML.
  • dbt (Data Build Tool).
  • REST APIs, GraphQL, RPC, gRPC, and webhook protocols.
  • Linux and shell scripting.
  • Designing and implementing event-driven architectures, messaging middleware, and microservices.
  • Azure cloud services for APIs, including Azure Functions, Azure API Management, and Azure App Service (a minimal Azure Functions sketch follows this list).
  • Containerization technologies, such as Docker and Kubernetes, and their deployment options on Azure: Azure Container Apps, Azure App Service, Azure Container Instances, and Azure Kubernetes Service.
  • Best practices for code quality, testing, and performance optimization.
  • NoSQL databases.
  • AI/ML concepts and implementations.
  • GitLab Duo.
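As a hedged sketch of the Azure Functions API work named above, using the Python v2 programming model; the route name and response shape are illustrative assumptions only:

```python
import json

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="positions")
def get_positions(req: func.HttpRequest) -> func.HttpResponse:
    """Illustrative HTTP-triggered API that would front a data repository."""
    symbol = req.params.get("symbol", "ALL")
    payload = {"symbol": symbol, "rows": []}  # real data access omitted
    return func.HttpResponse(
        json.dumps(payload),
        mimetype="application/json",
        status_code=200,
    )
```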