Objectways

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a contract Data Engineer role of unspecified duration; the pay rate is not listed, and the listing indicates a remote location. Key skills include Python/Java, SQL, and experience building data pipelines. Preferred qualifications include big data tools and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
February 28, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Processing #Azure #Java #Debugging #Data Governance #Data Engineering #Big Data #Data Warehouse #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Data Ingestion #Data Quality #Data Storage #Version Control #Cloud #Datasets #Databases #Scala #Data Modeling #Monitoring #AWS (Amazon Web Services) #SQL (Structured Query Language) #ML (Machine Learning) #Data Lake #Batch #Kafka (Apache Kafka) #Data Pipeline #Hadoop #GCP (Google Cloud Platform) #Storage #Python
Role description
Role Overview

We are seeking a Data Engineer to design, build, and optimize scalable data pipelines and distributed data systems. This role involves working with large datasets, real-time and batch processing systems, and production-grade data infrastructure. You will be responsible for transforming raw data into reliable, structured, and high-quality datasets that power analytics, machine learning, and operational systems.

Key Responsibilities
• Design and implement scalable ETL/ELT pipelines
• Build reliable batch and real-time data processing workflows
• Develop data ingestion systems from multiple sources (APIs, streaming, files, databases)
• Ensure data quality, validation, and monitoring
• Optimize data storage, query performance, and cost efficiency
• Design data models and schemas for analytical and operational use cases
• Maintain data warehouse and/or data lake environments
• Collaborate with backend, ML, and analytics teams

Required Skills
• Strong proficiency in Python and/or Java
• Solid SQL skills and experience with relational databases
• Experience building production-grade data pipelines
• Understanding of distributed data processing concepts
• Experience with data warehousing and data modeling
• Familiarity with version control and CI/CD practices
• Strong debugging and performance optimization skills

Preferred Qualifications
• Experience with big data tools (Spark, Hadoop, etc.)
• Experience with streaming systems (Kafka, Kinesis, etc.)
• Experience with cloud platforms (AWS / GCP / Azure)
• Experience with data lakes and modern warehouse platforms
• Exposure to ML data pipelines or feature engineering workflows
• Knowledge of data governance and data quality frameworks