

Resource Informatics Group, Inc
Data Engineer (Databricks, SQL)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Databricks, SQL) with a 6+ year background in ETL/data engineering, 4+ years in SQL development, and 3+ years using Databricks/Spark. Remote position; contract length and pay rate unspecified.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: December 2, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed:
#GCP (Google Cloud Platform) #Scala #Data Governance #Databricks #Azure #Data Engineering #Datasets #Spark (Apache Spark) #Data Quality #Python #SQL Queries #AWS (Amazon Web Services) #Complex Queries #Data Warehouse #ETL (Extract, Transform, Load) #Indexing #Visualization #Computer Science #Data Extraction #Compliance #Normalization #Security #Data Pipeline #SQL (Structured Query Language) #Programming #Cloud #Data Framework
Role description
Role: Data Engineer (Databricks, SQL)
No. of positions: 1
Client Location: Onshore, US
Mode of work: Remote
Work Time Zone: EST or CST
Job Summary:
We're looking for a skilled Data Engineer with strong expertise in Databricks and SQL to join our data analytics team. You will work as part of a cross-functional team to design, build, and optimize data pipelines, frameworks, and warehouses that support business-critical analytics and reporting. The role requires hands-on experience with SQL-based transformations, Databricks, and modern data engineering practices.
Experience:
• 6+ years of relevant experience, or equivalent education, in ETL processing/data engineering or a related field.
• 4+ years of experience with SQL development (complex queries, performance tuning, stored procedures).
• 3+ years of experience building data pipelines on Databricks/Spark (Python/Scala).
• Exposure to cloud data platforms (Azure/AWS/GCP) is a plus.
Roles & Responsibilities:
• Design, develop, and maintain data pipelines and ETL processes using Databricks and SQL (illustrated in the sketch after this list).
• Write optimized SQL queries for data extraction, transformation, and loading across large-scale datasets.
• Monitor, validate, and optimize data movement, cleansing, normalization, and updating processes to ensure data quality, consistency, and reliability.
• Collaborate with business and analytics teams to define data models, schemas, and frameworks within the data warehouse.
• Document source-to-target mapping and transformation logic.
• Build data frameworks and visualizations to support analytics and reporting.
• Ensure compliance with data governance, security, and regulatory standards.
• Communicate effectively with internal and external stakeholders to understand and deliver on data needs.
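To make the responsibilities above concrete, here is a minimal, hypothetical sketch of the kind of Databricks pipeline this role describes, written in PySpark: extract raw records, cleanse and normalize them, deduplicate with a SQL window function, and load the result into a Delta table. All table and column names (raw.orders, analytics.orders_clean, order_id, and so on) are invented for illustration and do not come from the posting.

# Minimal illustrative sketch of a Databricks-style ETL pipeline (PySpark).
# All table/column names are hypothetical. On Databricks, `spark` is provided
# by the runtime; the builder below is only needed when running elsewhere
# (where the Delta Lake package must also be installed).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw source records from a (hypothetical) source table.
raw = spark.read.table("raw.orders")

# Transform: cleanse and normalize -- drop rows missing the business key,
# standardize email casing, and parse the event timestamp.
cleaned = (
    raw
    .where(F.col("order_id").isNotNull())
    .withColumn("customer_email", F.lower(F.trim("customer_email")))
    .withColumn("order_ts", F.to_timestamp("order_ts"))
)
cleaned.createOrReplaceTempView("orders_cleaned")

# SQL-based transformation: deduplicate on the business key, keeping the
# latest record. A window function usually beats a correlated subquery
# on large datasets.
deduped = spark.sql("""
    SELECT order_id, customer_email, order_ts, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY order_id
                                  ORDER BY order_ts DESC) AS rn
        FROM orders_cleaned
    )
    WHERE rn = 1
""")

# Load: overwrite the curated warehouse table as Delta.
deduped.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")

In practice a pipeline of this shape would run as a scheduled Databricks job, and the source-to-target mapping for each column would be documented alongside it, per the responsibilities above.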
Qualifications & Experience:
• Bachelor's degree in Computer Science, Data Engineering, or a related field.
• 6+ years of hands-on experience in ETL/data engineering.
• Proficiency in the Python programming language.
• Strong SQL development experience (query optimization, indexing strategies, stored procedures).
• 3+ years of experience in Spark.
• 3+ years of Databricks experience with Python/Scala.
• Experience with cloud platforms (Azure/AWS/GCP) preferred.
• Databricks or cloud certifications (SQL, Databricks, Azure Data Engineer) are a strong plus.