Hired by Matrix, Inc

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract basis with a hybrid work model. Required qualifications include a Bachelor's degree, 3+ years of data engineering experience, and proficiency in Snowflake, Python, Apache Spark, and SQL.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
416
-
🗓️ - Date
October 10, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Orlando, FL
-
🧠 - Skills detailed
#Hadoop #Data Science #Data Warehouse #Data Extraction #Spark (Apache Spark) #Scala #Python #Scripting #SnowPipe #Data Management #Snowflake #Cloud #Informatica #ETL (Extract, Transform, Load) #.Net #C# #Deployment #Java #Business Analysis #Airflow #Monitoring #Kafka (Apache Kafka) #SQL Server #AWS (Amazon Web Services) #Data Engineering #Data Pipeline #Datasets #Apache Spark #SQL (Structured Query Language) #Strategy #Data Modeling #Databases
Role description
At-a-Glance: Are you ready to build your career by joining a global hospitality company? If so, our client is hiring a Data Engineer.

Position Type:
• Contract
• Hybrid - Monday, Tuesday, and Wednesday in office; Thursday and Friday WFH

Required:
• Bachelor's degree in computer and information science required; Master's degree preferred.
• Snowflake and Python certifications preferred but not required.
• Ability to build data models and manage data warehouses.
• 3+ years of related data engineering/IT experience.
• 1+ years of proven experience with the Apache Spark framework, Hadoop, Java/Scala, Python, and AWS architecture.
• 1+ years of proven experience with Microsoft .Net technologies such as C# and VB.Net, and experience designing, developing, and deploying Windows and web applications.
• 2+ years of experience in data modeling/database development using PL/SQL, SQL Server 2016 or later, and Snowflake.
• 1+ years of proven experience building data pipelines and ETL flows in cloud and on-premise environments using Snowpipe, Informatica, Airflow, Kafka, etc.
• Excellent listening, interpersonal, communication (written and verbal), and problem-solving skills.
• Ability to collect and compile relevant data.
• Extremely organized, with great attention to detail.
• Excellent ability to analyze information and think systematically.
• Strong business analysis skills.
• A strong team player who can also work independently.
• Good understanding of the company's business processes and the industry at large.
• Good working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
• Experience building and optimizing data pipelines and datasets using various scripting languages or ETL tools.
• Ability to perform root-cause analysis on internal and external data processes to answer specific business questions and identify opportunities for improvement.
• Good analytic skills for working with unstructured datasets.
• Ability to build and use APIs to push and pull data from various data systems and platforms.
• Ability to build processes supporting the extraction, transformation, and loading of data into data structures.
• A successful history of manipulating, processing, and extracting value from large, disconnected datasets.

Responsibilities:
• Partner with a wide range of business teams to implement analytical and data solutions that drive business value and customer satisfaction.
• Collect, store, process, analyze, and model large sets of data, and build applications and solutions using that data.
• Focus primarily on building, maintaining, implementing, monitoring, supporting, and integrating analytical and data solutions with the architecture used across the company.
• Maintain and monitor our analytics data warehouses and data platform.
• Design, implement, test, deploy, and maintain stable, secure, and scalable data engineering solutions and pipelines in support of data and analytics projects, including integrating new sources of data into our central data warehouse and moving data out to applications and affiliates.
• Take responsibility for hands-on development, deployment, maintenance, and support of a variety of cloud and on-premise solutions, web service infrastructure, and supporting technologies.
• Produce scalable, replicable code and engineering solutions that help automate repetitive data management tasks.
• Work closely with project managers, business analysts, data scientists, and other groups in the organization to understand and translate functional requirements and processes into technical specifications.
• Collaborate with key stakeholders to ensure our data infrastructure meets our business needs in a scalable way.
• Keep a critical eye on our technical strategy, identify gaps, and come up with creative solutions.

Get in Touch: If you think you'd be a good match, submit your resume and reach out to Karmina at 862-658-6689 to learn more.