

IntegrateUs
Senior Big Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a fully remote contract role for a Senior Big Data Engineer, paying $70.00 - $80.00 per hour. Key skills include Python, R, SQL, Apache Spark, and Azure. Required certifications are the Databricks Certified Associate Developer for Apache Spark and the Azure Data Engineer Associate.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
🗓️ - Date
October 19, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Remote
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Programming #MLflow #Data Processing #Data Science #Cloud #DevOps #Version Control #Data Lake #Storage #Data Governance #Data Catalog #Data Management #Data Quality #TensorFlow #ADF (Azure Data Factory) #Azure #Spark (Apache Spark) #Spark SQL #Data Pipeline #Data Storage #Data Lineage #Data Manipulation #Compliance #Azure cloud #Data Security #Metadata #R #Delta Lake #Databricks #Azure Data Factory #ADLS (Azure Data Lake Storage) #Libraries #Apache Spark #Big Data #ML (Machine Learning) #Scala #Security #Azure ADLS (Azure Data Lake Storage) #Deployment #Data Warehouse #Debugging #Agile #ETL (Extract, Transform, Load) #Data Engineering #Python
Role description
The Worker is responsible for developing, maintaining, and optimizing big data solutions using the Databricks Unified Analytics Platform.
This role supports data engineering, machine learning, and analytics initiatives within an organization that relies on large-scale data processing.
Duties include:
Designing and developing scalable data pipelines
Implementing ETL/ELT workflows
Optimizing Spark jobs
Integrating with Azure Data Factory
Automating deployments
Collaborating with cross-functional teams
Ensuring data quality, governance, and security
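The duties above follow a standard extract-transform-load shape. A minimal sketch in plain Python is below; in this role the same pattern would be implemented as a PySpark job on Databricks, and all record fields and function names here are illustrative, not taken from the posting.

```python
# Minimal ETL sketch in plain Python. In practice this would be a PySpark
# job on Databricks; the field names and rules here are hypothetical.

def extract(rows):
    """Extract: yield raw records (an in-memory stand-in for a source table)."""
    yield from rows

def transform(records):
    """Transform: drop records missing a user id and normalize names."""
    for r in records:
        if r.get("user_id") is None:
            continue  # filter out unusable rows
        yield {"user_id": r["user_id"], "name": r.get("name", "").strip().title()}

def load(records):
    """Load: collect into a target list (a stand-in for a Delta table write)."""
    return list(records)

raw = [
    {"user_id": 1, "name": "  ada lovelace "},
    {"user_id": None, "name": "ghost"},
    {"user_id": 2, "name": "grace HOPPER"},
]

cleaned = load(transform(extract(raw)))
```

The generator pipeline keeps each stage independent, which mirrors how ETL steps are typically separated for testing and reuse.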
Responsibilities:
Implement ETL/ELT workflows for both structured and unstructured data
Automate deployments using CI/CD tools
Collaborate with cross-functional teams including data scientists, analysts, and stakeholders
Design and maintain data models, schemas, and database structures to support analytical and operational use cases
Evaluate and implement appropriate data storage solutions, including data lakes (Azure Data Lake Storage) and data warehouses
Implement data validation and quality checks to ensure accuracy and consistency
Contribute to data governance initiatives, including metadata management, data lineage, and data cataloging
Implement data security measures, including encryption, access controls, and auditing; ensure compliance with regulations and best practices
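The data validation duty above can be sketched as rule-based checks that partition rows into valid records and named failures. In a Databricks pipeline these would more likely be Delta Lake CHECK constraints or pipeline expectations; the check names and fields below are hypothetical.

```python
# Rule-based data quality checks in plain Python. In the role described,
# these would typically be Delta Lake constraints or pipeline expectations;
# check names, fields, and rules here are hypothetical examples.

CHECKS = {
    "amount_non_negative": lambda row: row["amount"] >= 0,
    "currency_known": lambda row: row["currency"] in {"USD", "EUR"},
}

def validate(rows):
    """Return (valid_rows, failures), where failures maps check name -> bad rows."""
    valid = []
    failures = {name: [] for name in CHECKS}
    for row in rows:
        failed = [name for name, check in CHECKS.items() if not check(row)]
        if failed:
            for name in failed:
                failures[name].append(row)
        else:
            valid.append(row)
    return valid, failures

rows = [
    {"amount": 10.0, "currency": "USD"},
    {"amount": -5.0, "currency": "USD"},
    {"amount": 3.0, "currency": "XYZ"},
]
valid, failures = validate(rows)
```

Keeping checks in a named dictionary makes each rule individually reportable, which is what governance tooling generally needs for lineage and audit trails.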
Required skills and experience:
Proficiency in Python and R programming languages
Strong SQL querying and data manipulation skills
Experience with Azure cloud platform
Experience with DevOps, CI/CD pipelines, and version control systems
Experience working in agile, multicultural environments
Strong troubleshooting and debugging capabilities
Design and develop scalable data pipelines using Apache Spark on Databricks
Optimize Spark jobs for performance and cost-efficiency
Integrate Databricks solutions with cloud services (Azure Data Factory)
Ensure data quality, governance, and security using Unity Catalog or Delta Lake
Deep understanding of Apache Spark architecture, RDDs, DataFrames, and Spark SQL
Hands-on experience with Databricks notebooks, clusters, jobs, and Delta Lake
Knowledge of ML libraries (MLflow, Scikit-learn, TensorFlow)
Required certifications:
Databricks Certified Associate Developer for Apache Spark
Azure Data Engineer Associate
Job Type: Contract
Pay: $70.00 - $80.00 per hour
Work Location: Remote