

DSM-H Consulting
Data Engineer - 155
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "$XX/hour." Required skills include AWS, SQL, Python, and 5-7 years in data management. A degree and experience in cloud environments are essential.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
April 8, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#Data Lifecycle #Metadata #SQS (Simple Queue Service) #DynamoDB #Aurora #AWS (Amazon Web Services) #Data Design #Data Triage #SageMaker #Microservices #BI (Business Intelligence) #Data Quality #Big Data #RDS (Amazon Relational Database Service) #SNS (Simple Notification Service) #Snowflake #Python #Automation #Data Lake #Debugging #Cloud #Data Pipeline #Data Management #Databases #Data Engineering #SQL (Structured Query Language) #IAM (Identity and Access Management) #S3 (Amazon Simple Storage Service) #Monitoring #Data Governance #Lambda (AWS Lambda) #Data Warehouse
Role description
Typical task breakdown:
- Identify, investigate, and obtain resolution commitments for platform and data issues to maintain and improve the quality and performance of assigned digital product data.
- Issue Identification: Reports in all forms from customers, dealers, industry representatives, and subsidiaries.
- Issue Investigation: Statistical analysis, data triage, and infrastructure problem-solving.
- Issue Resolution: Identify root causes, create SageMaker scripts to fix data, and perform break/fix tasks on data pipeline code.
- Develop scripts and automation tools to better detect and correct data issues.
- Develop monitoring and alerting capabilities to proactively detect data issues.
- Work directly on complex application and technical problem identification and resolution, including responding to off-shift and weekend support calls.
- Communicate with end users and internal customers to help direct the development, debugging, and testing of application software for accuracy, integrity, interoperability, and completeness.
- The employee is also responsible for performing other job duties as assigned by CLIENT management from time to time.
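By way of illustration only (this sketch is not part of the role description), the "detect and correct data issues" automation described above might look like the following minimal Python routine. The record shape, the `serial` field, and the triage rules are all hypothetical.

```python
# Hypothetical sketch of a detect-and-correct data-quality pass:
# split a batch of records into clean, auto-fixed, and quarantined buckets.

def triage_records(records):
    """Triage records for data quality.

    A record is auto-fixable if only whitespace padding is wrong;
    it is quarantined if the required field is missing entirely.
    (Field names and rules are illustrative assumptions.)
    """
    clean, fixed, quarantined = [], [], []
    for rec in records:
        if rec.get("serial") is None:
            quarantined.append(rec)                      # missing required field
        elif rec["serial"] != rec["serial"].strip():
            fixed.append({**rec, "serial": rec["serial"].strip()})  # correctable
        else:
            clean.append(rec)
    return clean, fixed, quarantined

batch = [
    {"serial": "A100"},     # clean
    {"serial": " B200 "},   # whitespace padding, auto-fixable
    {"serial": None},       # missing value, quarantine for investigation
]
clean, fixed, quarantined = triage_records(batch)
print(len(clean), len(fixed), len(quarantined))  # 1 1 1
```

In a role like this, the quarantined bucket would typically feed the issue-investigation step, while counts per bucket could be published as monitoring metrics to drive alerting.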
Interaction with team:
- Participate in technical sync-ups and meetings with the internal team, including US and offshore teams
- Liaise with designers, engineers, and support teams to improve data pipeline performance & reliability.
Work environment:
Work independently and collaborate with the internal team and cross-functional teams via Teams meetings, chat, and/or email
Education & Experience Required:
- A 4-year degree and/or Master's degree from an accredited college or university, or equivalent experience
- 5-7 years' experience in data management, data engineering, or data operations (data design, data quality, metadata, governance, etc.)
Technical Skills
(Required)
- 5-7 years' experience in data management, data engineering, or data operations (data design, data quality, metadata, governance, etc.)
- Experience with development and delivery of microservices using serverless AWS services (S3, RDS, Aurora, DynamoDB, Lambda, SNS, SQS, Kinesis, IAM)
- 4 or more years of experience in a cloud environment (AWS, Snowflake)
- Strong SQL and Python
(Desired)
- Strong AWS experience, including SageMaker, S3, RDS, CloudWatch, and related services
- Strong Snowflake experience
- Strong SQL, Python development skillset
- Understanding of logical data domains, primarily the Customer & Equipment domains
- Experience in Data Operations, Tier 2 Support, or comparable Data Engineering Support role
- Strong knowledge of the end-to-end data lifecycle across traditional data warehouses, relational databases, operational data stores, business intelligence reporting, and big data analytics.
- Knowledge of data technology products and components for Big Data and Cloud (AWS, Data Lakes, and similar).
- Ability to work collaboratively in a complex, rapidly changing, and culturally diverse environment.
Soft Skills
(Required)
- Verbal and written communication skills. Ability to clearly communicate complex technical ideas
- Problem solving skills, customer service and interpersonal skills
- Ability to work collaboratively in a complex, rapidly changing, and culturally diverse environment






