

Consultant
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience in US Retail/Manufacturing. The contract length is unspecified, the work is remote within the USA, and the pay rate is "TBD." Key skills include Python, SQL, AWS, and strong RDBMS experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 26, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Informatica #EC2 #Spark (Apache Spark) #Databricks #Luigi #RDBMS (Relational Database Management System) #Consul #Python #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Kafka (Apache Kafka) #Airflow #Hadoop #SQL (Structured Query Language) #Data Science #Scala #Snowflake #Programming #JavaScript #Elasticsearch #Big Data #Data Warehouse #MongoDB #NoSQL #Data Pipeline #AI (Artificial Intelligence) #dbt (data build tool) #Databases #Data Engineering #Redshift #S3 (Amazon Simple Storage Service)
Role description
Position: Data Engineer
Experience: 5+ years
Educational Qualification: B.Tech, B.Sc. (IT), MCA
Domain: US Retail / Manufacturing
Work Location: Remote (USA)
Job Description
Shorthills AI is looking for an experienced Data Engineer to join our Technology Team. In this role, you will use various methods to turn raw data into useful data systems, such as creating algorithms and conducting statistical analysis, and you will strive for efficiency by aligning data systems with business goals. To succeed in this position, you should also be familiar with Python.
Responsibilities:
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies (a minimal sketch follows this list).
Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
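As a rough illustration of the extract-transform-load responsibility above, here is a minimal, hypothetical sketch in Python: it reads a CSV from S3 with boto3, applies a simple pandas transformation, and appends the result to a relational table via SQLAlchemy. The bucket, key, table, column names, and connection string are placeholder assumptions, not details from the posting.

```python
import boto3
import pandas as pd
import sqlalchemy

def extract(bucket: str, key: str) -> pd.DataFrame:
    """Read a raw CSV object from S3 into a DataFrame."""
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=bucket, Key=key)
    return pd.read_csv(obj["Body"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative business rule: keep completed orders and compute revenue."""
    df = df[df["status"] == "completed"].copy()
    df["revenue"] = df["quantity"] * df["unit_price"]
    return df

def load(df: pd.DataFrame, table: str, engine) -> None:
    """Append the transformed rows to a warehouse table."""
    df.to_sql(table, engine, if_exists="append", index=False)

if __name__ == "__main__":
    # Placeholder DSN, bucket, key, and table -- hypothetical examples only.
    engine = sqlalchemy.create_engine("postgresql://user:password@host:5432/warehouse")
    raw = extract("raw-retail-data", "orders/2025-04-26.csv")
    load(transform(raw), "fact_orders", engine)
```

Keeping extract, transform, and load as separate functions mirrors the modular-code requirement below: each step can be tested in isolation before being wired into an orchestrator.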
Must Have Skills
Experience with object-oriented programming in languages such as Python and JavaScript.
Ability to work efficiently with data of high variety and volume.
Strong RDBMS experience.
Ability to write well-crafted, readable, testable, maintainable, and modular code (a small illustration follows this list).
Basic understanding of working within AWS infrastructure (EC2, S3, the AWS console, etc.).
Sound understanding of algorithms and data structures.
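To make the "testable, modular code" requirement concrete, here is a small, hypothetical example: a pure transformation function with no I/O, plus a unit test that pytest would collect. The Order type and the revenue rule are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    quantity: int
    unit_price: float

def order_revenue(order: Order) -> float:
    # Pure function: no I/O or hidden state, so it is trivial to unit-test.
    return order.quantity * order.unit_price

def test_order_revenue():
    # Runnable with pytest; checks the basic happy path.
    assert order_revenue(Order(quantity=3, unit_price=2.5)) == 7.5
```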
Good To Have
Exposure to NoSQL databases such as MongoDB, Cassandra, and Elasticsearch.
Experience with data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow.
Experience with big data tools such as Hadoop, Spark, and Kafka.
Experience with data warehouses such as Snowflake and Redshift.
Hands-on experience building scalable data pipelines with tools like Airflow, Data Factory, Databricks, dbt, or Informatica (a minimal DAG sketch follows this list).
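Since Airflow appears in both skill lists, here is a minimal, hypothetical Airflow 2.x DAG that wires three placeholder ETL tasks in sequence. The DAG id, schedule, and task bodies are assumptions for illustration, not taken from the posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    print("extract raw orders from the source system")

def transform_orders():
    print("apply business rules to the raw orders")

def load_orders():
    print("load transformed orders into the warehouse")

with DAG(
    dag_id="retail_orders_etl",        # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # 'schedule_interval' on Airflow < 2.4
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    # '>>' sets dependencies: extract runs before transform, then load.
    extract >> transform >> load
```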