

J RAM IT Consulting
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 7-10 years of experience, focused on designing scalable big data solutions using Databricks and Apache Spark on cloud platforms. Contract length is unspecified, with a competitive pay rate. Key skills include advanced SQL, ETL pipelines, and cloud storage expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Delta Lake #Security #Version Control #BigQuery #Databricks #Spark (Apache Spark) #AWS (Amazon Web Services) #Data Processing #ADLS (Azure Data Lake Storage) #Java #Data Quality #PySpark #SQL (Structured Query Language) #Cloud #GCP (Google Cloud Platform) #Azure #Big Data #Scala #Apache Spark #Data Pipeline #Storage #ETL (Extract, Transform, Load) #Data Engineering #S3 (Amazon Simple Storage Service)
Role description
We are seeking a Data Engineer with 7 to 10 years of experience to design and optimize scalable big data solutions using Databricks and Apache Spark across cloud platforms (Azure, AWS, or GCP). You will build high-performance ETL pipelines, manage large-scale data workflows, and ensure data quality, security, and cost efficiency.
Key Responsibilities
• Design, build, and optimize Databricks ETL pipelines (a PySpark sketch follows this list).
• Develop scalable Spark-based ingestion and transformation workflows.
• Optimize performance, scalability, and cost of big data systems.
• Implement CI/CD practices and version control for data pipelines.
• Collaborate with cross-functional teams to deliver data-driven solutions.
• Enforce security, governance, and access controls.
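To give a concrete picture of the first responsibility above, here is a minimal PySpark sketch of a Databricks-style ETL job: ingest raw files from cloud storage, apply basic cleansing transformations, and write the result as a Delta table. The storage path, table name, and column names (s3://example-bucket/raw/orders/, analytics.orders_clean, order_id, and so on) are hypothetical placeholders, not details from this posting.

# Minimal ETL sketch: ingest raw CSV, transform, write to a Delta table.
# All paths, tables, and columns below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest: read raw files from cloud object storage (placeholder S3 path).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/orders/")
)

# Transform: deduplicate, filter bad records, derive a date column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") >= 0)
       .withColumn("order_date", F.to_date("order_timestamp"))
)

# Load: write a partitioned Delta table (Delta support is assumed, as on
# Databricks runtimes or clusters with the delta-spark package installed).
(
    clean.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_clean")
)

In practice a pipeline like this would also add data-quality checks, logging, and parameterized paths so it can run under the CI/CD and version-control practices listed above.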
Required Skills
• Strong experience with Databricks and Apache Spark (PySpark, Scala, or Java).
• Advanced SQL expertise.
• Experience with cloud platforms and storage solutions (ADLS, S3, BigQuery).
• Hands-on experience with ETL/ELT pipelines and data warehousing.
• Knowledge of Delta Lake, CI/CD, and distributed data processing.
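As a second illustration tied to the Delta Lake requirement, the following is a minimal sketch of an incremental upsert into a Delta table using the DeltaTable MERGE API, a common ELT pattern for large-scale workflows. The table name analytics.orders_clean, the staging path, and the order_id join key are hypothetical, and the example assumes a Databricks runtime or a cluster where the delta-spark package is available.

# Sketch of an incremental upsert (MERGE) into a Delta table.
# Table, path, and key names are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_upsert").getOrCreate()

# Incremental batch of changed records staged as Delta (placeholder path).
updates = spark.read.format("delta").load("/mnt/staging/orders_updates")

# Target table maintained by the ETL job above.
target = DeltaTable.forName(spark, "analytics.orders_clean")

# Upsert: update rows that match on the key, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)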






