

TalentBurst, an Inc 5000 company
Senior SQL and ETL Engineer | Remote
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior SQL and ETL Engineer, fully remote, on a 12+ month contract at a competitive pay rate. Key skills include expertise in SQL, ETL tools, data modeling, and Python/PySpark; experience with data warehouse architecture and big data technologies is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
368
-
🗓️ - Date
January 6, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
California
-
🧠 - Skills detailed
#Metadata #Teradata #MySQL #Datasets #EDW (Enterprise Data Warehouse) #Spark (Apache Spark) #Python #Apache Spark #Data Security #DevOps #Deployment #Scala #PostgreSQL #Snowflake #XML (eXtensible Markup Language) #Data Modeling #Data Lake #Stored Procedure Optimization #Cloud #Oracle #Security #JSON (JavaScript Object Notation) #Data Mart #PySpark #Perl #ETL (Extract, Transform, Load) #Data Integration #Hadoop #SQL (Structured Query Language) #REST API #ADF (Azure Data Factory) #Data Extraction #Azure Data Factory #Azure DevOps #Azure Synapse Analytics #SQL Server #Data Quality #Compliance #GitHub #Synapse #Azure #Automation #SSIS (SQL Server Integration Services) #Big Data #REST (Representational State Transfer) #GIT #Indexing #Data Warehouse
Role description
SQL and ETL Engineer | Fully REMOTE | 12+ Months (Possible Ext.)
Skills Required
1. Strong expertise in SQL, PL/SQL, and T-SQL, with advanced query tuning, stored procedure optimization, and relational data modeling across Oracle, SQL Server, PostgreSQL, and MySQL.
2. Proficiency in modern ETL/ELT tools, including Azure Synapse Analytics, Azure Data Factory, and SSIS, with the ability to design scalable ingestion, transformation, and loading workflows.
3. Ability to design and implement data warehouse data models (star schema, snowflake, dimensional hierarchies) and optimize them for analytics and large-scale reporting.
4. Strong understanding of data integration, data validation, cleansing, profiling, and end-to-end data quality processes to ensure accuracy and consistency across systems.
5. Knowledge of enterprise data warehouse architecture, including staging layers, data marts, data lakes, and cloud-based ingestion frameworks.
6. Experience applying best practices for scalable, maintainable ETL engineering, including metadata-driven design and automation (a minimal sketch of this pattern appears after this list).
7. Proficiency in Python and PySpark (and familiarity with Shell/Perl) for automating ETL pipelines, handling semi-structured data, and transforming large datasets.
8. Experience handling structured and semi-structured data formats (CSV, JSON, XML, Parquet) and consuming REST APIs for ingestion (see the PySpark ingestion sketch after this list).
9. Knowledge of data security and compliance practices, including credential management, encryption, and governance in Azure.
10. Expertise in optimizing ETL and data warehouse performance through indexing, partitioning, caching strategies, and pipeline optimization.
11. Familiarity with CI/CD workflows using Git/GitHub Actions for ETL deployment across Dev, QA, and Production environments.
12. Ability to collaborate with analysts and business stakeholders, translating complex requirements into actionable datasets, KPIs, and reporting structures.
Experience developing and optimizing SQL, PL/SQL, and T-SQL logic, including stored procedures, functions, performance tuning, and advanced relational modeling across Oracle and SQL Server.
Experience working with mainframe systems, including data extraction, mapping, and conversion into modern ETL/ELT pipelines.
Experience designing, orchestrating, and deploying ETL/ELT pipelines using Azure Synapse Analytics, Azure Data Factory, SSIS, and Azure DevOps CI/CD workflows.
Experience building and maintaining enterprise data warehouses using Oracle, SQL Server, Teradata, or cloud data platforms.
Experience working with big data technologies such as Apache Spark, PySpark, or Hadoop for large-scale data transformation.
Experience integrating structured and semi-structured data (CSV, XML, JSON, Parquet) and consuming APIs using Python/PySpark.
Experience supporting production ETL operations, troubleshooting pipeline failures, conducting root cause analysis, and meeting SLAs for daily, monthly, or regulatory reporting workloads.
#TB_EN #ZR
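The metadata-driven design called out in item 6 is easiest to picture with a small example. The sketch below is illustrative only and assumes a PySpark environment; the pipeline entries, paths, and the run_pipeline helper are hypothetical stand-ins for what would normally live in a control table or configuration store, not details taken from this posting.

```python
# Minimal, hypothetical sketch of a metadata-driven loader (illustrative only).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata_driven_load").getOrCreate()

# Hypothetical pipeline metadata: in practice this would come from a control
# table or config file rather than being hard-coded.
PIPELINES = [
    {"name": "orders",    "format": "json", "source": "/landing/orders/",    "target": "/staging/orders"},
    {"name": "customers", "format": "csv",  "source": "/landing/customers/", "target": "/staging/customers"},
]

def run_pipeline(meta: dict) -> None:
    """Load one feed described by a metadata row and land it as Parquet."""
    reader = spark.read.format(meta["format"])
    if meta["format"] == "csv":
        reader = reader.option("header", "true")  # assume CSV feeds carry headers
    df = reader.load(meta["source"])
    df.write.mode("overwrite").parquet(meta["target"])

for meta in PIPELINES:
    run_pipeline(meta)
```

The point of the pattern is that onboarding a new feed means adding a metadata row, not writing a new pipeline.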
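Items 7 and 8 concern handling semi-structured data with Python/PySpark. A minimal ingestion sketch is below; the file paths, column names (order_id, event_ts), and partitioning choice are assumptions made for illustration, not requirements from the posting.

```python
# Illustrative only: paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("semi_structured_ingest").getOrCreate()

# Read semi-structured JSON; a production pipeline would usually pin an
# explicit schema instead of relying on inference.
raw = spark.read.json("/landing/orders/*.json")

# Basic cleansing/validation: drop rows missing the business key and derive
# a partition column from the event timestamp.
clean = (
    raw.dropna(subset=["order_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Land the result as Parquet in the staging layer, partitioned for pruning
# (the partitioning concern from item 10 shows up here as partitionBy).
clean.write.mode("overwrite").partitionBy("event_date").parquet("/staging/orders")
```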






