Senior Azure / Data Engineer (ETL / Data Warehouse Background)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Azure/Data Engineer on a long-term contract, requiring 10+ years of experience, strong SQL skills, and expertise in Azure and ETL tools. Location: Fremont, CA; Austin, TX; or Tualatin, OR; hybrid work model.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
June 6, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Fremont, CA
-
🧠 - Skills detailed
#ADF (Azure Data Factory) #SQL (Structured Query Language) #AWS (Amazon Web Services) #Schema Design #Cloud #Programming #Azure DevOps #Azure SQL #Data Warehouse #Databricks #PySpark #Spark SQL #Deployment #Amazon Redshift #ETL (Extract, Transform, Load) #Data Engineering #Informatica #IoT (Internet of Things) #Azure #Synapse #Azure Stream Analytics #Data Lake #Microsoft Power BI #SAP #Data Modeling #Spark (Apache Spark) #Snowflake #Migration #BI (Business Intelligence) #Talend #DevOps #GCP (Google Cloud Platform) #Big Data #Redshift
Role description
Role: Senior Azure / Data Engineer (ETL / Data Warehouse background)
Location: Fremont, CA; Austin, TX; or Tualatin, OR
Duration: Long-term contract
Hybrid role; onsite 2 days a week
Requires 10+ years of experience

Must-Have Skills:
• Minimum 5 years of experience with modern data engineering / data warehousing / data lake technologies on cloud platforms such as Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
• 10+ years of proven experience with SQL, schema design, and dimensional data modeling
• Solid knowledge of data warehouse best practices, development standards, and methodologies
• Experience with ETL/ELT tools such as ADF, Informatica, Talend, etc., and data warehousing technologies such as Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
• Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL
• An independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced, dynamic environment
• Excellent communication and teamwork abilities

Nice-to-Have Skills:
• Knowledge of Event Hubs, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB
• Knowledge of SAP ECC / S/4HANA
• Intermediate knowledge of Power BI
• Azure DevOps and CI/CD deployments; cloud migration methodologies and processes

Best Regards,
Santosh Cherukuri
Email: scherukuri@bayonesolutions.com