

Jobs via Dice
Data Engineer (Self-Represented W-2 Contract)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a self-represented W-2 contract, hybrid (minimum 2 days per week onsite in East Lansing, MI), with pay listed only as "competitive." It requires 3+ years of experience, proficiency in Snowflake and ETL processes, and strong analytical skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
East Lansing, MI
-
🧠 - Skills detailed
#BI (Business Intelligence) #Informatica PowerCenter #Oracle #SQL (Structured Query Language) #Linux #Data Pipeline #ETL (Extract, Transform, Load) #EDW (Enterprise Data Warehouse) #Windows Server #Cloud #StreamSets #Snowflake #Fivetran #Scripting #IICS (Informatica Intelligent Cloud Services) #Programming #Python #SQL Server #Scala #Informatica #Data Engineering
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, OpTech, is seeking the following. Apply via Dice today!
Why work with us? We are a woman-owned company that values your ideas, encourages your growth, and always has your back. When you work with us, you get health and dental benefits on the first day of employment, plus training opportunities, flexible/remote work options, growth opportunities, a 401(k), and competitive pay. Apply today!
Data Engineer
Location: Hybrid (minimum 2 days per week onsite in East Lansing, MI)
Description:
We are looking for a Data Engineer to join our Data Engineering Team. The ideal candidate has a minimum of 3 years of experience and excellent analytical reasoning and critical thinking skills. The candidate will join a team that creates data pipelines that use change data capture (CDC) mechanisms to move data from on-premises sources to cloud-based destinations, then transform the data to make it available for customers to consume. The Data Engineering Team also does general extract, transform, and load (ETL) work, along with traditional Enterprise Data Warehousing (EDW) work.
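For context only: one common pattern (not necessarily this team's exact design) lands CDC rows in a Snowflake staging table and uses a stream plus a scheduled task to merge the changes into a consumer-facing table. A minimal sketch in Snowflake SQL, with hypothetical table, schema, and warehouse names:

```sql
-- Hypothetical sketch: a stream tracks changes on a CDC landing table;
-- a task merges them into a curated table on a schedule.
CREATE OR REPLACE STREAM raw.customer_changes ON TABLE raw.customer_landing;

CREATE OR REPLACE TASK raw.apply_customer_changes
  WAREHOUSE = etl_wh  -- assumed warehouse name
  SCHEDULE  = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('raw.customer_changes')
AS
MERGE INTO curated.customer AS tgt
USING (
  -- For updates a stream emits a DELETE/INSERT pair; keep only the
  -- INSERT half so each key appears once in the MERGE source.
  SELECT * FROM raw.customer_changes
  WHERE NOT (METADATA$ISUPDATE AND METADATA$ACTION = 'DELETE')
) AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED AND src.METADATA$ACTION = 'DELETE' THEN DELETE
WHEN MATCHED AND src.METADATA$ACTION = 'INSERT' THEN
  UPDATE SET tgt.name = src.name, tgt.updated_at = src.updated_at
WHEN NOT MATCHED AND src.METADATA$ACTION = 'INSERT' THEN
  INSERT (customer_id, name, updated_at)
  VALUES (src.customer_id, src.name, src.updated_at);

ALTER TASK raw.apply_customer_changes RESUME;  -- tasks start suspended
```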
Responsibilities:
• Participates in the analysis and development of technical specifications, programming, and testing of Data Engineering components.
• Participates in creating data pipelines and ETL workflows to ensure that design and enterprise programming standards and guidelines are followed. Assists with updating the enterprise standards when gaps are identified.
• Follows technology best practices and standards and escalates any issues as appropriate. Follows architecture and design best practices (as guided by the Lead Data Engineer, BI Architect, and architecture team).
• Assists with configuration and scripting to implement fully automated data pipelines, stored procedures and functions, and ETL workflows that allow data to flow from on-premises data sources to cloud-based data platforms (e.g., Snowflake) and application platforms (e.g., Salesforce), where data may be consumed by end customers (see the sketch after this list).
• Follows standard change control and configuration management practices.
• Participates in a 24-hour on-call rotation in support of the platform.
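As one illustration of the automation described above (a sketch under assumed object names, not the team's actual code), a Snowflake SQL stored procedure can encapsulate a load step, with a task running it unattended:

```sql
-- Hypothetical sketch: a stored procedure encapsulating a reload step,
-- scheduled by a task so the pipeline runs without manual intervention.
CREATE OR REPLACE PROCEDURE curated.reload_daily_sales()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
  TRUNCATE TABLE curated.daily_sales;
  INSERT INTO curated.daily_sales (sale_date, total_amount)
    SELECT sale_date, SUM(amount)
    FROM raw.sales
    GROUP BY sale_date;
  RETURN 'daily_sales reloaded';
END;
$$;

CREATE OR REPLACE TASK curated.reload_daily_sales_task
  WAREHOUSE = etl_wh  -- assumed warehouse name
  SCHEDULE  = 'USING CRON 0 6 * * * America/Detroit'  -- daily at 06:00
AS
  CALL curated.reload_daily_sales();

ALTER TASK curated.reload_daily_sales_task RESUME;  -- tasks start suspended
```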
Required Skills/Qualifications:
• Database Platforms: Snowflake, Oracle, and SQL Server
• OS Platforms: Red Hat Enterprise Linux and Windows Server
• Languages and Tools: PL/SQL, Python, T-SQL, StreamSets, Snowflake Cloud Data Platform, Informatica PowerCenter, and Informatica IICS or IDMC
• Experience creating and maintaining ETL processes that use Salesforce as a destination.
• Drive and desire to automate repeatable processes.
• Excellent interpersonal and communication skills, and a willingness to collaborate with teams across the organization.
Desired Skills/Qualifications:
• Experience creating and maintaining solutions within Snowflake that involve internal file stages, procedures and functions, tasks, and dynamic tables (a brief sketch follows this list).
• Experience creating and working with near-real-time data pipelines between relational sources and destinations.
• Experience working with StreamSets Data Collector or similar data streaming/pipelining tools (Fivetran, Striim, Airbyte, etc.).
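For the Snowflake features named above, a brief hypothetical sketch (object names assumed) of an internal stage load plus a dynamic table that keeps an aggregate fresh:

```sql
-- Hypothetical sketch: load CSV files from an internal named stage,
-- then maintain an aggregate automatically with a dynamic table.
CREATE OR REPLACE STAGE raw.orders_stage
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

-- Files are uploaded from a client, e.g. via SnowSQL:
--   PUT file:///data/orders.csv @raw.orders_stage;

COPY INTO raw.orders
  FROM @raw.orders_stage;

-- A dynamic table re-materializes automatically within TARGET_LAG.
CREATE OR REPLACE DYNAMIC TABLE curated.order_summary
  TARGET_LAG = '15 minutes'
  WAREHOUSE  = etl_wh  -- assumed warehouse name
AS
SELECT customer_id,
       COUNT(*)    AS order_count,
       SUM(amount) AS total_spend
FROM raw.orders
GROUP BY customer_id;
```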
We are an EOE; all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.





