Hawk Consulting Group

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position on a hybrid contract in Lansing, MI, offering a day rate of $640 USD. Key skills include Snowflake, Oracle, SQL Server, PL/SQL, Python, and ETL processes, particularly with Salesforce as a destination.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
🗓️ - Date
December 20, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Lansing, MI
-
🧠 - Skills detailed
#Snowflake #StreamSets #ETL (Extract, Transform, Load) #Data Engineering #Programming #Data Pipeline #Scala #Linux #Informatica PowerCenter #EDW (Enterprise Data Warehouse) #SQL (Structured Query Language) #IICS (Informatica Intelligent Cloud Services) #Consulting #BI (Business Intelligence) #Informatica #Cloud #Fivetran #SQL Server #Windows Server #Python #Oracle #Scripting
Role description
Hawk Consulting Group is currently seeking a Data Engineer for a hybrid position based in Lansing, MI.

Description:
The candidate will join a team that creates data pipelines using change data capture (CDC) mechanisms to move data from on-premises sources to cloud-based destinations, then transforms the data to make it available for customers to consume. The Data Engineering Team also performs general extraction, transformation, and load (ETL) work, along with traditional Enterprise Data Warehousing (EDW) work.

Responsibilities:
• Participates in the analysis and development of technical specifications, programming, and testing of Data Engineering components.
• Participates in creating data pipelines and ETL workflows, ensuring that design and enterprise programming standards and guidelines are followed.
• Assists with updating the enterprise standards when gaps are identified.
• Follows technology best practices and standards and escalates any issues as deemed appropriate. Follows architecture and design best practices (as guided by the Lead Data Engineer, BI Architect, and Architecture team).
• Assists in configuration and scripting to implement fully automated data pipelines, stored procedures and functions, and ETL workflows that allow data to flow from on-premises data sources to cloud-based data platforms (e.g., Snowflake) and application platforms (e.g., Salesforce), where data may be consumed by end customers.
• Follows standard change control and configuration management practices.
• Participates in a 24-hour on-call rotation in support of the platform.

Required Skills/Qualifications:
• Database Platforms: Snowflake, Oracle, and SQL Server
• OS Platforms: Red Hat Enterprise Linux and Windows Server
• Languages and Tools: PL/SQL, Python, T-SQL, StreamSets, Snowflake Cloud Data Platform, Informatica PowerCenter, and Informatica IICS or IDMC.
• Experience creating and maintaining ETL processes that use Salesforce as a destination.
• Drive and desire to automate repeatable processes.

Desired Skills/Qualifications:
• Experience creating and maintaining solutions within Snowflake that involve internal file stages, procedures and functions, tasks, and dynamic tables.
• Experience creating and working with near-real-time data pipelines between relational sources and destinations.
• Experience working with StreamSets Data Collector or similar data streaming/pipelining tools (Fivetran, Striim, Airbyte, etc.).