

Data Architect
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Architect/Sr. Data Engineer, remote across the USA, with a contract length of over 6 months. Requires 5+ years in data engineering, strong AWS experience, proficiency in Python/SQL, and expertise in IoT/OT within manufacturing or energy sectors.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 30, 2025
Project duration
More than 6 months
Location type
Remote
Contract type
Fixed Term
Security clearance
Unknown
Location detailed
United States
Skills detailed
#Snowflake #Datasets #DevSecOps #Security #Infrastructure as Code (IaC) #Redshift #Batch #Data Quality #AWS (Amazon Web Services) #Computer Science #Data Architecture #Data Pipeline #Data Security #RDS (Amazon Relational Database Service) #S3 (Amazon Simple Storage Service) #DMS (Data Migration Service) #Replication #Scala #Cybersecurity #Lambda (AWS Lambda) #Databricks #Data Engineering #Python #Metadata #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Programming #Logging #Spark (Apache Spark) #Data Lifecycle #VMware #IoT (Internet of Things) #ML (Machine Learning) #AI (Artificial Intelligence) #GitLab #Migration #AWS Glue #Databases #AWS DMS (AWS Database Migration Service) #Airflow #Cloud
Role description
About us:
Intuitive is an innovation-led engineering company delivering business outcomes for hundreds of enterprises globally. With a reputation as a Tiger Team and a trusted partner of enterprise technology leaders, we help solve the most complex digital transformation challenges across the following Intuitive Superpowers:
Modernization & Migration
• Application & Database Modernization
• Platform Engineering (IaC/EaC, DevSecOps & SRE)
• Cloud Native Engineering, Migration to Cloud, VMware Exit
• FinOps
Data & AI/ML
• Data (Cloud Native / Databricks / Snowflake)
• Machine Learning, AI/GenAI
Cybersecurity
• Infrastructure Security
• Application Security
• Data Security
• AI/Model Security
SDx & Digital Workspace (M365, G Suite)
• SDDC, SD-WAN, SDN, NetSec, Wireless/Mobility
• Email, Collaboration, Directory Services, Shared File Services
Intuitive Services:
• Professional and Advisory Services
• Elastic Engineering Services
• Managed Services
• Talent Acquisition & Platform Resell Services
About the job:
Title: Data Architect / Sr. Data Engineer
Start Date: Immediately
# of Positions: 1
Position Type: Full-Time / Contract
Location: Remote across the USA
Must have: Strong experience with IoT/OT/sensors and a background in the manufacturing, oil & gas, or energy industries.
What we do:
We build scalable, auditable, and production-grade data infrastructure using AWS-native services. Our team enables robust data pipelines, real-time integrations, and analytics-ready datasets that power enterprise decision-making.
Responsibilities
As a Data Engineer, you will design, develop, and maintain data pipelines that support enterprise-grade data engineering and management workflows. You'll work primarily within the AWS ecosystem, leveraging services such as Glue, DMS, Lambda, RDS, Redshift, CloudWatch, and IoT Core, with Python as the preferred programming language. Databricks may be used selectively for specific workloads.
Your role as a Data Engineer includes:
• Designing and implementing scalable data pipelines using AWS Glue, Lambda, and Python
• Building ingestion and transformation workflows across structured and semi-structured data sources, including RDS, Redshift, S3, and IoT Core
• Integrating AWS DMS for real-time and batch replication from operational databases
• Establishing robust data quality checks and audit-friendly logging across the data lifecycle
• Documenting data engineering processes, architecture, and configurations
• Collaborating with cross-functional teams to translate business requirements into technical solutions
• Troubleshooting and resolving data issues across AWS-native and hybrid environments
What we are looking for:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
• Minimum of 5 years of experience in data engineering roles, with a focus on AWS-native services
• Proficiency in Python and SQL for data transformation, validation, and orchestration
• Hands-on experience with AWS Glue, Lambda, DMS, RDS, Redshift, S3, CloudWatch, and IoT Core
• Familiarity with Databricks and Spark for metadata-heavy or large-scale processing
• Experience with CI/CD pipelines using ADO, GitLab, or similar tools
• Experience with data pipeline orchestration tools (e.g., Airflow, Step Functions)
• Ability to troubleshoot complex data issues and implement effective solutions
• Strong communication and interpersonal skills
• Ability to work collaboratively in a team-oriented environment
• Proactive in staying updated with industry trends and emerging technologies in data engineering