Net2Source Inc.
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 12+ years of experience, specializing in the telecom domain. Located onsite in Denver, CO, the pay rate is not disclosed; the role requires expertise in AWS services, data engineering, and AI integration.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 3, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Denver, CO
-
🧠 - Skills detailed
#Athena #Security #DMS (Data Migration Service) #IAM (Identity and Access Management) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #BI (Business Intelligence) #Spark SQL #Infrastructure as Code (IaC) #Data Engineering #Metadata #Data Modeling #Datasets #SQL (Structured Query Language) #Lambda (AWS Lambda) #Data Quality #Redshift #Python #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #AI (Artificial Intelligence)
Role description
Data Engineer (Agentic AI) - Strong in Telecom Domain
Work site: Denver, CO - Onsite
Note: Please review the job description before sharing profiles.
Experience: 12+ years minimum
Job Description:
Role Overview
The AWS Data Engineer delivers reliable, governed data products for analytics and AI. The role designs robust pipelines, optimizes cost/performance, and codifies data quality, lineage, and security within an industrialized platform.
Key Responsibilities
• Build ELT/ETL with AWS services (e.g., Glue, EMR/Spark, Lambda, Step Functions, Kinesis/MSK, DMS) and store/query with S3/Lake Formation, Redshift, Athena, and modern table formats as applicable.
• Engineer data products with metadata, contracts, and quality rules; automate with IaC and CI/CD.
• Expose data to AI workloads (feature/embedding stores, vector indices) and BI/analytics consumers.
• Monitor reliability, cost, and SLAs; document datasets and promote reuse.
Qualifications
• 4–10+ years in data engineering (Spark, SQL, Python) with AWS analytics stack.
• Strong understanding of data modeling, partitioning, performance optimization, and security (IAM, KMS, networking).
• Experience integrating with analytics/ML teams and productionizing data for AI use cases.
• Preferred certification: AWS Certified Data Engineer – Associate (or equivalent).