Novia Infotech

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote Data Engineer role; the contract length and pay rate are unspecified. Key skills include Databricks, PySpark, Python, and SQL, and experience with ETL processes and distributed data processing is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 24, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Datasets #Data Modeling #GCP (Google Cloud Platform) #Security #Cloud #Data Security #Scripting #Data Engineering #Python #ETL (Extract, Transform, Load) #Azure #Data Science #Automation #Data Pipeline #Data Quality #Data Extraction #PySpark #Databricks #BI (Business Intelligence) #Scala #Data Processing #Databases #Spark (Apache Spark) #SQL Queries #AWS (Amazon Web Services) #SQL (Structured Query Language)
Role description
Role: Data Engineer
Location: Remote

Job Description:
We are looking for a skilled Data Engineer to join our team. The ideal candidate will be responsible for building and managing data pipelines, developing scalable data solutions, and working with large datasets to support data analytics and business intelligence initiatives.

Key Responsibilities:
• Design, develop, and maintain ETL pipelines using Databricks, PySpark, and Python (see the sketch after this description).
• Write efficient SQL queries for data extraction, transformation, and loading from various data sources.
• Work with structured and unstructured data from multiple sources.
• Optimize data workflows for performance and scalability.
• Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements.
• Monitor and troubleshoot data pipeline performance and data quality issues.
• Implement best practices for data security, quality, and governance.

Required Skills:
• Strong experience in Databricks and PySpark for data processing.
• Proficient in Python for scripting and automation.
• Good knowledge of SQL for querying relational databases.
• Experience in building and managing ETL processes.
• Understanding of distributed data processing frameworks.
• Strong problem-solving and analytical skills.

Preferred Qualifications:
• Knowledge of cloud platforms like AWS, Azure, or GCP.
• Familiarity with data modeling and warehousing concepts.
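For candidates gauging the day-to-day work, here is a minimal sketch of the kind of ETL pipeline the responsibilities above describe: extract raw data, transform it with PySpark and SQL, and load a curated result. All paths, table names, and columns are hypothetical placeholders, not details of this role's actual systems.

```python
# Minimal ETL sketch in PySpark. Paths, schemas, and names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw records from a source location (hypothetical path).
raw = spark.read.json("/mnt/raw/orders/")

# Transform: fix types, drop bad records, derive a partition column.
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
    .withColumn("order_date", F.to_date("order_ts"))
)

# SQL step for aggregation, mirroring the SQL-centric transformations
# listed in the responsibilities.
orders.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Load: write the curated result, partitioned for downstream queries.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "/mnt/curated/daily_orders/"
)

spark.stop()
```

Note that on Databricks a SparkSession is provided for you, so the builder and stop() calls would be omitted there; the sketch includes them so it runs standalone.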