

4 Corner Resources
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position on a 1-year contract, hybrid (3 days onsite, 2 days remote), paying up to $52/hr. It requires a Bachelor's in Computer Science, 3+ years of experience with Snowflake and Python, strong SQL skills, and data pipeline experience.
Country: United States
Currency: $ USD
Day rate: 416
Date: March 16, 2026
Duration: More than 6 months
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Orlando, FL 32801
Skills detailed: #SQL (Structured Query Language) #Snowflake #Informatica #Scala #Data Warehouse #Data Engineering #Airflow #Automation #SQL Server #Kafka (Apache Kafka) #C# #SnowPipe #Databases #Data Science #Datasets #Data Extraction #Python #Computer Science #Spark (Apache Spark) #Strategy #Data Management #Scripting #AWS (Amazon Web Services) #Cloud #ETL (Extract, Transform, Load) #Data Pipeline #Business Analysis #Data Integration #Apache Spark #Java #.Net #Hadoop
Role description
Data Engineer
Location: Hybrid - 3 days onsite, 2 days remote
Pay Rate: Up to $52/hr (based on experience)
Position Type: 1-year contract
Introduction
4 Corner Resources is seeking a Data Engineer for one of our clients to support the design, development, and maintenance of enterprise data platforms and analytics infrastructure. This role focuses on building scalable data pipelines, integrating new data sources, and ensuring reliable movement of data across applications and systems.
The ideal candidate will have strong experience working with modern data warehouse technologies, cloud platforms, and scripting languages to support data engineering and analytics initiatives. This role works closely with cross-functional teams including project managers, business analysts, and data scientists to translate business requirements into scalable technical solutions.
Required Qualifications
Bachelor's degree in Computer and Information Science or a related field
Minimum 3 years of experience with Snowflake and Python
At least 3 years of related data engineering or IT experience
Strong SQL skills and experience working with relational databases
Experience building and optimizing data pipelines and large-scale datasets
Experience designing and managing data warehouses and data models
Ability to perform root cause analysis on internal and external data processes
Experience working with structured and unstructured datasets
Strong analytical, problem-solving, and communication skills
Ability to work both independently and collaboratively within cross-functional teams
Strong attention to detail and organizational skills
Ability to translate business requirements into technical solutions
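As a rough illustration of the SQL and relational-database work called for above, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a warehouse such as Snowflake (in practice you would use the Snowflake Python connector; the table and column names here are hypothetical):

```python
import sqlite3

# In-memory SQLite database as a stand-in for a warehouse connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 100.0), (2, "west", 250.0), (3, "east", 50.0)],
)

# Aggregate revenue per region -- a typical analytics-ready rollup.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('west', 250.0)]
```

The same pattern (parameterized inserts, aggregation in SQL rather than in application code) carries over directly to warehouse platforms.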
Preferred Qualifications
Master's degree in Computer Science or related field
Experience with Apache Spark, Hadoop, Java/Scala, Python, and AWS architecture
Experience with Microsoft .NET technologies (C#, VB.Net) and development of Windows or web applications
Experience with PL/SQL, SQL Server 2016 or later, and Snowflake
Experience building ETL pipelines using Snowpipe, Informatica, Airflow, Kafka, or similar tools
Experience building and consuming APIs for data integration
Experience working with cloud and on-premise data infrastructure
Day-to-Day Responsibilities
Maintain and monitor analytics data warehouses and data platforms
Design, implement, test, deploy, and maintain scalable data engineering pipelines
Integrate new data sources into the central data warehouse and distribute data to applications and partners
Develop scalable code and automation to streamline repetitive data management tasks
Build processes supporting data extraction, transformation, and loading (ETL)
Collaborate with project managers, business analysts, and data scientists to translate requirements into technical specifications
Develop and support cloud and on-premise data infrastructure solutions
Build and maintain APIs to move data across systems and platforms
Analyze large and disconnected datasets to extract meaningful insights
Monitor technical strategy, identify infrastructure gaps, and propose scalable solutions
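The extract, transform, and load responsibilities above can be sketched as three composable steps. This is a minimal illustration in pure Python with in-memory stand-ins; a real pipeline would pull from an API or a Snowflake stage and bulk-load into a warehouse table, and all record shapes here are hypothetical:

```python
from typing import Iterable

def extract() -> list[dict]:
    # Stand-in for pulling raw records from a source system or API.
    return [
        {"id": "1", "amount": "100.5"},
        {"id": "2", "amount": "not-a-number"},
        {"id": "3", "amount": "42"},
    ]

def transform(records: Iterable[dict]) -> list[dict]:
    # Cast types and drop rows that fail validation.
    clean = []
    for r in records:
        try:
            clean.append({"id": int(r["id"]), "amount": float(r["amount"])})
        except ValueError:
            continue  # in production: route to a dead-letter table instead
    return clean

def load(records: list[dict], sink: list) -> int:
    # Stand-in for a bulk insert into a warehouse table.
    sink.extend(records)
    return len(records)

warehouse: list = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 2 -- one row fails validation and is dropped
```

Keeping the three steps as separate functions makes each one independently testable, which is what orchestration tools like Airflow assume when wiring steps into a DAG.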






