

Galaxy i technologies Inc
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Azure Databricks) in Seattle, WA, on a W2 contract for over 6 months, offering competitive pay. Requires 10+ years of experience, expertise in Python/PySpark, SQL, Databricks, and Azure cloud architecture.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 8, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Seattle, WA
-
🧠 - Skills detailed
#Azure #Programming #Spark (Apache Spark) #PySpark #"ETL (Extract, Transform, Load)" #Data Extraction #DevOps #Storage #Azure Databricks #Cloud #Data Lake #Python #Data Quality #Delta Lake #Data Integration #Strategy #Azure DevOps #SQL (Structured Query Language) #Data Engineering #Data Pipeline #Data Architecture #Databricks #Databases #Scala #ADF (Azure Data Factory) #NoSQL #Data Governance #Azure Data Factory #Data Storage #Spark SQL
Role description
Hi, everyone,
• W2 CONTRACT ONLY
• 100% closure and a long-term project; immediate interview
Job Title: Senior Data Engineer (Azure Databricks)
Location: Seattle, WA
Contract: W2 contract
Note: 10+ years of experience
Job Details:
Responsibilities:
• Design, build, and deploy data extraction, transformation, and loading (ETL) processes and pipelines from various sources, including databases, APIs, and data files
• Develop and support data pipelines within a cloud data platform such as Databricks
• Build data models that reflect domain expertise, meet current business needs, and remain flexible as strategy evolves
• Monitor and optimize Databricks cluster performance, ensuring cost-effective scaling and resource utilization
• Communicate technical concepts to non-technical audiences, both in writing and verbally
• Apply a strong grasp of coding and programming concepts to build data pipelines (e.g., data transformation, data quality, data integration)
• Apply a strong understanding of data storage concepts (data lakes, relational databases, NoSQL, graph databases, data warehousing)
• Implement and maintain Delta Lake for optimized data storage, ensuring data reliability, performance, and versioning
• Automate CI/CD pipelines for data workflows using Azure DevOps
• Collaborate with cross-functional teams to support data governance using Databricks Unity Catalog
Qualifications:
• 7+ years of experience in data engineering or a related field
• Expertise with programming languages such as Python/PySpark, SQL, or Scala
• Experience working in a cloud environment (Azure preferred), with a strong understanding of cloud data architecture
• Hands-on experience with the Databricks cloud data platform (required)
• Experience with workflow orchestration, e.g., Databricks Jobs or Azure Data Factory pipelines (required)
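For candidates sizing up the role, the extract-transform-load and data-quality work described above can be sketched in plain Python. This is only an illustration of the pattern, not the team's actual pipeline: there is no Spark dependency, and the sample data and function names below are hypothetical.

```python
import csv
import io

# Hypothetical raw extract: CSV rows from one of the "various sources"
# mentioned in the responsibilities (a database dump, API export, or file).
RAW_CSV = """order_id,amount,region
1,19.99,us-west
2,,us-east
3,5.00,us-west
"""

def extract(raw):
    """Parse CSV text into a list of dict records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(records):
    """Apply a data-quality rule (drop rows missing an amount)
    and a type conversion, as a pipeline stage might."""
    clean = []
    for row in records:
        if not row["amount"]:
            continue  # data-quality filter: reject incomplete rows
        clean.append({**row, "amount": float(row["amount"])})
    return clean

def load(records):
    """Stand-in for writing to a Delta table or warehouse:
    here, aggregate revenue per region instead."""
    totals = {}
    for row in records:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

print(load(transform(extract(RAW_CSV))))  # {'us-west': 24.99}
```

In a real Databricks pipeline the same three stages would typically be expressed as PySpark DataFrame transformations writing to a Delta Lake table, with the orchestration handled by Databricks Jobs or Azure Data Factory.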
Thanks,
Mahesh
vmahesh@galaxyitech.com
NOTE: Please send your updated resume to vmahesh@galaxyitech.com, or reach me at 480-407-6915.






