

Ocean Blue Solutions Inc
Data Engineer - Remote
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "$/hour." It requires expertise in data pipeline development, ETL processes, and proficiency in programming languages like Python and Java. Remote work location.
Country
United States
Currency
Unknown
Day rate
Unknown
Date
February 25, 2026
Duration
Unknown
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
Columbus, OH
Skills detailed
#Distributed Computing #NoSQL #Data Access #Documentation #Kafka (Apache Kafka) #Programming #Tableau #Database Systems #Java #Data Storage #Azure Data Factory #SQL (Structured Query Language) #Azure #BI (Business Intelligence) #AWS (Amazon Web Services) #Computer Science #Looker #Web Services #ADF (Azure Data Factory) #Data Science #Cloud #Data Lake #Microsoft Power BI #Storage #Databricks #Snowflake #Data Pipeline #Security #Scala #Data Warehouse #Data Security #PySpark #Data Modeling #ETL (Extract, Transform, Load) #Data Integration #Spark (Apache Spark) #Python #Data Engineering #API (Application Programming Interface) #GCP (Google Cloud Platform) #Informatica #Databases #Hadoop #Oracle #Azure SQL #SQL Server #Big Data
Role description
Data Engineer - Remote
Columbus, Ohio
Data Engineer
Location: Remote
Job Description
Overview:
We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.
Responsibilities
Data Pipeline Development
Design, implement, and optimize end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
Develop robust ETL (Extract, Transform, Load) processes to integrate data from diverse sources into our data ecosystem.
Implement data validation and quality checks to ensure accuracy and consistency.
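To illustrate the kind of work these bullets describe, here is a minimal sketch of an ETL step with validation and quality checks. All record names, fields, and checks are hypothetical, not taken from this posting:

```python
from datetime import date

# Hypothetical raw extract from a source system (string-typed, as CSV/API
# feeds often are). Schema and values are illustrative only.
RAW_ROWS = [
    {"id": "1", "amount": "19.99", "day": "2026-02-25"},
    {"id": "2", "amount": "5.00", "day": "2026-02-26"},
]

def transform(row):
    # Cast string fields from the source into properly typed values.
    return {
        "id": int(row["id"]),
        "amount": float(row["amount"]),
        "day": date.fromisoformat(row["day"]),
    }

def validate(row):
    # Basic quality checks: positive key, non-negative amount.
    return row["id"] > 0 and row["amount"] >= 0

def run_pipeline(raw_rows):
    # Transform every row, then keep only rows that pass validation.
    transformed = [transform(r) for r in raw_rows]
    return [r for r in transformed if validate(r)]
```

In a production pipeline the same extract/transform/validate/load stages would typically run inside an orchestrated framework (e.g. Spark or Azure Data Factory, per the requirements below) rather than plain Python.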
Data Modeling and Architecture
Design and maintain data models, schemas, and database structures to support analytical and operational use cases.
Optimize data storage and retrieval mechanisms for performance and scalability.
Evaluate and implement data storage solutions, including relational databases, NoSQL databases, data lakes, and cloud storage services.
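As a small sketch of the data modeling work above, the following builds a toy dimensional model (one dimension table, one fact table) in an in-memory SQLite database; table and column names are hypothetical:

```python
import sqlite3

# Hypothetical star-schema fragment: a customer dimension and an order fact.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE fact_order (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES dim_customer(customer_id),
        amount      REAL NOT NULL
    );
""")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO fact_order VALUES (100, 1, 250.0)")

# Analytical query joining the fact table to its dimension.
row = conn.execute("""
    SELECT c.name, SUM(f.amount)
    FROM fact_order f JOIN dim_customer c USING (customer_id)
    GROUP BY c.name
""").fetchone()
```

The same fact/dimension split carries over directly to the warehouse platforms named in the requirements (Snowflake, SQL Server, Azure SQL).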
Data Integration and API Development
Build and maintain integrations with internal and external data sources and APIs.
Implement RESTful APIs and web services for data access and consumption.
Ensure compatibility and interoperability between different systems and platforms.
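A minimal, framework-free sketch of a read-only data-access endpoint of the kind described above, written as a plain WSGI application (the path and payload are hypothetical):

```python
import json

# Hypothetical in-memory dataset served by the endpoint.
DATA = {"/api/customers": [{"id": 1, "name": "Acme Corp"}]}

def app(environ, start_response):
    # Route on the request path; return JSON or a 404 error body.
    path = environ.get("PATH_INFO", "")
    if path in DATA:
        body = json.dumps(DATA[path]).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [b'{"error": "not found"}']
```

In practice such an endpoint would usually sit behind a framework (Flask, FastAPI, etc.) with authentication; WSGI is used here only to keep the sketch dependency-free.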
Data Infrastructure Management
Configure and manage data infrastructure components, including databases, data warehouses, data lakes, and distributed computing frameworks.
Monitor system performance, troubleshoot issues, and implement optimizations to enhance reliability and efficiency.
Implement data security controls and access management policies to protect sensitive information.
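The access-management bullet above can be sketched as a simple role-based policy check; roles, resources, and actions here are invented for illustration:

```python
# Hypothetical role-based access policy for a data platform.
POLICY = {
    "analyst": {"sales_mart": {"read"}},
    "engineer": {"sales_mart": {"read", "write"}, "raw_zone": {"read", "write"}},
}

def is_allowed(role, resource, action):
    """Return True if the role may perform the action on the resource."""
    return action in POLICY.get(role, {}).get(resource, set())
```

Real deployments would delegate this to the platform's own access controls (e.g. warehouse grants or cloud IAM) rather than application code.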
Collaboration and Documentation
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver tailored solutions.
Document technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation.
Provide technical guidance and support to team members and stakeholders as needed.
Collaborate with analysts and platform teams; participate in code reviews, sprints, proofs of concept (POCs), and the development of reusable frameworks.
Requirements:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Proven experience in data engineering, software development, or related roles.
Proficiency in programming languages commonly used in data engineering (e.g., Python, PySpark, Java, Scala).
Strong knowledge of database systems, data modeling techniques, and SQL proficiency (e.g., Snowflake, MS Fabric, SQL Server, Oracle, Azure SQL).
Proficiency with ETL tools commonly used in data engineering (e.g., Informatica, Databricks, Azure Data Factory).
Experience with dashboard and reporting tools (e.g., Tableau, Power BI, Looker, etc.).
Experience with big data technologies and frameworks (e.g., Hadoop, Spark, Kafka, etc.).
Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud Platform, etc.).
Excellent problem-solving skills and attention to detail.
Effective communication and collaboration skills in a team-oriented environment.
Ability to adapt to evolving technologies and business requirements.
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.




