Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Programming-Focused Data Engineer; the contract length and pay rate are not listed, and the position is remote. Key skills include Python or Java, Databricks, ETL/ELT, and experience with cloud platforms.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
Unknown
πŸ—“οΈ - Date discovered
August 27, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
Unknown
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Los Angeles Metropolitan Area
🧠 - Skills detailed
#Databases #Big Data #Cloud #Spark (Apache Spark) #Datasets #Databricks #ML (Machine Learning) #Java #Airflow #Data Governance #AWS (Amazon Web Services) #SQL (Structured Query Language) #Data Quality #Data Engineering #Data Modeling #Azure #Data Science #Python #Scala #Delta Lake #ETL (Extract, Transform, Load) #Data Processing #Data Pipeline #Programming #GCP (Google Cloud Platform) #Apache Spark
Role description
We need a Programming-Focused Data Engineer with strong experience in Python or Java and hands-on expertise in Databricks. MCP Server skills are also required.

Job Title: Programming-Focused Data Engineer

About the Role
We are looking for a Data Engineer with strong programming skills and hands-on experience in modern data processing frameworks. This role focuses on building, optimizing, and maintaining scalable data pipelines to support analytics, reporting, and machine learning initiatives.

Key Responsibilities
• Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data.
• Write efficient, maintainable code in Python, Java, or other modern programming languages.
• Work with Apache Spark (preferably Databricks) to process large datasets.
• Integrate data from multiple sources, ensuring data quality and consistency.
• Collaborate with data scientists, analysts, and business stakeholders to deliver reliable datasets.
• Optimize data workflows for performance, scalability, and cost efficiency.
• Troubleshoot and resolve issues in data pipelines and infrastructure.

Required Skills & Experience
• Strong programming skills in Java and/or Python (other languages a plus).
• Solid understanding of data engineering concepts: ETL/ELT, data modeling, and data warehousing.
• Hands-on experience with Apache Spark (preferably Databricks).
• Familiarity with SQL and relational databases.
• Experience with cloud platforms (AWS/Azure/GCP) is a plus.
• Strong problem-solving skills and attention to detail.

Preferred Qualifications
• Experience with Delta Lake or Iceberg.
• Exposure to workflow orchestration tools (Airflow, Dagster, etc.).
• Knowledge of data governance and quality frameworks.
• Background in big data and distributed systems.
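For a sense of the day-to-day work this posting describes, here is a minimal PySpark sketch of an ETL pipeline writing to a Delta table. It is illustrative only: the storage path, column names, and table name are hypothetical, and it assumes a Databricks-style runtime where Delta Lake is available.

```python
# Minimal ETL sketch in PySpark (hypothetical paths, columns, and table
# names; assumes a Databricks-style runtime with Delta Lake available).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw JSON events from cloud storage.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: enforce basic data quality and shape the data for analytics.
clean = (
    raw.dropDuplicates(["event_id"])                     # remove duplicate events
       .filter(F.col("event_ts").isNotNull())            # drop rows missing a timestamp
       .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

# Load: write the curated dataset as a partitioned Delta table.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.events_clean"))
```

In a Databricks environment a pipeline like this would typically run as a scheduled job, with downstream analysts and data scientists reading the resulting table.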
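Since the preferred qualifications mention workflow orchestration, here is a similarly minimal Airflow DAG sketch that could schedule a job like the one above. The DAG id, schedule, and callable are hypothetical, and it assumes Apache Airflow 2.4+ for the `schedule` argument.

```python
# Minimal Airflow DAG sketch (hypothetical DAG id, schedule, and task;
# assumes Apache Airflow 2.4+ with the standard PythonOperator).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_events_etl():
    # In practice this would trigger the Spark/Databricks job above,
    # e.g. via a Databricks job run or spark-submit.
    print("running events ETL")


with DAG(
    dag_id="events_etl_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_events_etl)
```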