

Objectways
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a contract of unspecified length, with an undisclosed pay rate. The position is remote, based in the United States. Key skills include Python/Java, SQL, and experience with data pipelines; preferred qualifications include big data tools and cloud platforms.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: February 28, 2026
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Data Processing #Azure #Java #Debugging #Data Governance #Data Engineering #Big Data #Data Warehouse #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Data Ingestion #Data Quality #Data Storage #Version Control #Cloud #Datasets #Databases #Scala #Data Modeling #Monitoring #AWS (Amazon Web Services) #SQL (Structured Query Language) #ML (Machine Learning) #Data Lake #Batch #Kafka (Apache Kafka) #Data Pipeline #Hadoop #GCP (Google Cloud Platform) #Storage #Python
Role description
Role Overview
We are seeking a Data Engineer to design, build, and optimize scalable data pipelines and distributed data systems. This role involves working with large datasets, real-time and batch processing systems, and production-grade data infrastructure.
You will be responsible for transforming raw data into reliable, structured, and high-quality datasets that power analytics, machine learning, and operational systems.
Key Responsibilities
• Design and implement scalable ETL/ELT pipelines (a minimal Python sketch follows this list)
• Build reliable batch and real-time data processing workflows
• Develop data ingestion systems from multiple sources (APIs, streaming, files, databases)
• Ensure data quality, validation, and monitoring
• Optimize data storage, query performance, and cost efficiency
• Design data models and schemas for analytical and operational use cases
• Maintain data warehouse and/or data lake environments
• Collaborate with backend, ML, and analytics teams
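
To make the pipeline work above concrete, here is a minimal batch ETL sketch in Python. It is an illustration only, not the team's actual stack: the orders.csv file, warehouse.db database, and orders table are hypothetical placeholders, and only the standard library is used.

    # Minimal batch ETL sketch: extract CSV rows, validate/transform, load into SQLite.
    # All file, database, and table names are illustrative placeholders.
    import csv
    import sqlite3
    from datetime import datetime

    def extract(path):
        # Extract: stream raw rows from a CSV source file.
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        # Transform: validate and normalize each row; skip records that fail checks.
        for row in rows:
            try:
                yield {
                    "order_id": int(row["order_id"]),
                    "amount": round(float(row["amount"]), 2),
                    "ordered_at": datetime.fromisoformat(row["ordered_at"]).isoformat(),
                }
            except (KeyError, ValueError):
                continue  # a production pipeline would route bad rows to a dead-letter store

    def load(records, conn):
        # Load: idempotent upsert into the target table.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders ("
            "order_id INTEGER PRIMARY KEY, amount REAL, ordered_at TEXT)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO orders VALUES (:order_id, :amount, :ordered_at)",
            records,
        )
        conn.commit()

    if __name__ == "__main__":
        with sqlite3.connect("warehouse.db") as conn:
            load(transform(extract("orders.csv")), conn)

A real deployment would swap SQLite for a warehouse target and add the monitoring and data-quality checks listed above; the extract-transform-load shape stays the same.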
Required Skills
• Strong proficiency in Python and/or Java
• Solid SQL skills and experience with relational databases
• Experience building production-grade data pipelines
• Understanding of distributed data processing concepts
• Experience with data warehousing and data modeling
• Familiarity with version control and CI/CD practices
• Strong debugging and performance optimization skills
Preferred Qualifications
• Experience with big data tools (Spark, Hadoop, etc.; a PySpark sketch follows this list)
• Experience with streaming systems (Kafka, Kinesis, etc.)
• Experience with cloud platforms (AWS / GCP / Azure)
• Experience with data lakes and modern warehouse platforms
• Exposure to ML data pipelines or feature engineering workflows
• Knowledge of data governance and data quality frameworks
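
For the big data qualifications, the sketch below shows the same extract-validate-aggregate-load pattern at distributed scale using PySpark. It is a rough sketch under stated assumptions, not a prescribed implementation: the S3 paths and column names are hypothetical, and it assumes pyspark is installed and the cluster is configured for S3 access.

    # Minimal PySpark sketch: read raw events, apply a data-quality gate,
    # aggregate, and write partitioned Parquet to a data lake.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

    # Hypothetical raw-event location in a data lake.
    events = spark.read.json("s3://example-bucket/raw/events/")

    daily = (
        events
        .filter(F.col("amount").isNotNull())      # basic data-quality gate
        .withColumn("day", F.to_date("ordered_at"))
        .groupBy("day")
        .agg(
            F.sum("amount").alias("revenue"),
            F.count("*").alias("orders"),
        )
    )

    # Partitioning by day keeps downstream queries cheap and incremental.
    daily.write.mode("overwrite").partitionBy("day").parquet(
        "s3://example-bucket/curated/daily_revenue/"
    )
    spark.stop()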






