

E-Solutions
Senior SAS Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior SAS Data Engineer, offering a remote contract for an experienced professional with 8+ years in Data Engineering. Key skills include Databricks, Spark, Python, and SAS, with a focus on ETL pipelines and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 22, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#AWS S3 (Amazon Simple Storage Service) #Batch #Monitoring #ETL (Extract, Transform, Load) #Macros #PySpark #SQL (Structured Query Language) #Scala #Delta Lake #Data Pipeline #Azure #Data Engineering #Data Governance #SAS #Apache Spark #ADF (Azure Data Factory) #ADLS (Azure Data Lake Storage) #Databricks #Linux #AWS (Amazon Web Services) #Shell Scripting #Python #Documentation #Data Lake #Airflow #Migration #Logging #Snowflake #Apache Iceberg #Unix #Datasets
Role description
Job Description: Senior SAS Data Engineer
Remote
Role Overview
We are looking for a Senior Data Engineer to design, build, and support large-scale data pipelines and analytics solutions. The role involves supporting SAS-based analytics, modernizing data platforms using Databricks and Spark, and delivering governed, scalable data solutions aligned with enterprise standards.
Key Responsibilities
• Design, develop, test, and deploy ETL/ELT pipelines using ADF, Databricks, PySpark, Python, and SAS
• Build and optimize batch and incremental data pipelines for large-scale structured and semi-structured data
• Support and modernize SAS analytics pipelines, including migration to Spark/Databricks
• Develop data models and datasets using Delta Lake / Apache Iceberg
• Optimize data lakes on ADLS / AWS S3
• Integrate pipelines with Snowflake for analytics and reporting
• Implement data governance, access control, and lineage using Unity Catalog
• Provide production support, monitoring, root cause analysis, and issue resolution
• Collaborate with business users, architects, and analytics teams; create detailed design and SOP documentation
• Act as Databricks SME and mentor junior engineers
Required Skills & Experience
• 8+ years of experience in Data Engineering / ETL
• Strong hands-on experience with Databricks, Apache Spark, PySpark, Python
• 6+ years with Azure Data Factory (ADF) and enterprise data pipelines
• Experience with Snowflake, ADLS/AWS S3, Delta Lake, Apache Iceberg
• Strong SQL and SAS skills (DATA step, macros, PROC SQL)
• Experience with Airflow, Unix/Linux, and shell scripting
• Knowledge of CI/CD, monitoring, logging, and governance frameworks
Nice to Have
• Experience migrating SAS pipelines to Spark/Databricks
• Healthcare domain experience
• Strong documentation and mentoring experience
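As a rough illustration of the SAS-to-Spark migration work this role involves, consider how a simple SAS DATA step maps onto DataFrame operations. The dataset and column names below are invented for the example; the PySpark form is shown in comments, and the same row-level semantics are expressed in plain Python so the sketch runs anywhere:

```python
# Hypothetical example of SAS DATA step logic of the kind migrated to
# Spark/Databricks. Dataset and column names are invented.
#
# Original SAS:
#   data work.high_value;
#       set work.claims;
#       if amount > 1000;          /* subsetting IF */
#       amount_k = amount / 1000;  /* derived variable */
#   run;
#
# A PySpark equivalent (requires a SparkSession; shown for reference):
#   from pyspark.sql import functions as F
#   high_value = (claims_df
#                 .filter(F.col("amount") > 1000)
#                 .withColumn("amount_k", F.col("amount") / 1000))
#
# The same transformation in plain Python, to make the per-row
# semantics explicit without a Spark cluster:

claims = [
    {"claim_id": 1, "amount": 500.0},
    {"claim_id": 2, "amount": 2500.0},
    {"claim_id": 3, "amount": 1200.0},
]

# Subsetting IF -> filter; assignment statement -> derived column.
high_value = [
    {**row, "amount_k": row["amount"] / 1000}
    for row in claims
    if row["amount"] > 1000
]

print([r["claim_id"] for r in high_value])  # claims 2 and 3 survive
```

The mapping is mechanical for row-wise steps like this; migrations get harder where SAS code relies on implicit loop state (RETAIN, FIRST./LAST. processing) or macro-generated logic, which is where the hands-on experience listed above matters.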



