

Intellectt Inc
Big Data Architect-W2 Only
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Architect/Data Pipeline Engineer, remote for 7 months on W2. It requires 7+ years of enterprise data engineering experience, with expertise in Databricks, Azure SQL, and healthcare data integration; skills in ETL/ELT design and performance tuning are essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 18, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Scala #Data Architecture #Normalization #Spark (Apache Spark) #PySpark #Azure #SQL Server #Data Integration #Kubernetes #Azure SQL #Python #Data Quality #Databricks #Security #Data Pipeline #Big Data #SQL (Structured Query Language) #Docker #Data Modeling #Storage #Data Engineering #ETL (Extract, Transform, Load)
Role description
Big Data Architect / Data Pipeline Engineer
Location: Remote
Duration: 7 months on W2
Required Skills
• Databricks (Python, SQL, PySpark) – a minimal pipeline sketch follows this list
• Azure SQL (Managed Instance, Azure SQL Database, SQL Server on Azure VMs)
• Data Modeling and ETL/ELT design patterns
• Data Engineering (enterprise-scale)
• Performance tuning and scalability
• Docker and Azure Kubernetes Service (AKS) – preferred
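To make the Databricks expectation concrete, here is a minimal PySpark sketch of the read-transform-write ETL pattern the role centers on. The paths, column names, and schema are hypothetical placeholders, not details from this posting; on Databricks the spark session already exists, so the builder call matters only when running locally.

```python
# Minimal ETL sketch, assuming hypothetical paths and columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks `spark` is provided; this builder is for local runs.
spark = SparkSession.builder.appName("claims-etl-sketch").getOrCreate()

# Extract: read raw claims from a hypothetical landing zone.
raw = spark.read.json("/mnt/raw/claims/")

# Transform: standardize types, drop obviously bad rows, stamp a load date.
clean = (
    raw
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(12,2)"))
    .filter(F.col("member_id").isNotNull())
    .withColumn("load_date", F.current_date())
)

# Load: append as Delta (built into Databricks; needs delta-spark locally),
# partitioned by load date so date-bounded reads can prune files.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("load_date")
      .save("/mnt/curated/claims/"))
```

Partitioning by load date is one common choice for keeping downstream reads cheap; the right partition key depends on the actual query patterns.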
Experience
• 7+ years of enterprise data engineering experience
• Strong experience with large-scale data pipelines and architectures
• Healthcare data integration experience is a must
• Data harmonization and normalization at scale (sketched below)
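Since harmonization and normalization at scale are called out, the sketch below shows one way to map two hypothetical source feeds onto a single canonical member schema in PySpark. All feed names, column names, and code mappings are illustrative assumptions, not a real payer's layout.

```python
# Harmonization sketch: two hypothetical feeds -> one canonical schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("harmonization-sketch").getOrCreate()

# Two toy source feeds with divergent names and code sets.
source_a = spark.createDataFrame(
    [("A123", "M", "1980-01-02")], ["MemberID", "Sex", "DOB"])
source_b = spark.createDataFrame(
    [("B456", "female", "1975-06-30")], ["patient_id", "gender", "birth_date"])

# Normalize each feed to the same canonical columns and gender code set.
canon_a = source_a.select(
    F.col("MemberID").alias("member_id"),
    F.when(F.col("Sex") == "M", "male")
     .when(F.col("Sex") == "F", "female")
     .otherwise("unknown").alias("gender"),
    F.to_date("DOB").alias("birth_date"),
)
canon_b = source_b.select(
    F.col("patient_id").alias("member_id"),
    F.lower("gender").alias("gender"),
    F.to_date("birth_date").alias("birth_date"),
)

# Union by column name so ordering differences cannot corrupt the merge.
harmonized = canon_a.unionByName(canon_b)
harmonized.show()
```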
Responsibilities
• Design and build scalable data pipelines and big data architectures
• Develop and optimize Databricks-based data solutions
• Implement data integration, processing, and storage solutions
• Ensure data quality, security, and governance standards
• Optimize solutions for performance, scalability, and cost (see the tuning sketch after this list)
• Collaborate with stakeholders to translate business needs into technical solutions
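For the performance, scalability, and cost responsibility, a few standard Spark-side levers are usually the first stop. The settings below are real Spark SQL configuration keys, but the values are placeholders to profile against actual workloads rather than recommendations.

```python
# Common Spark tuning levers; values are starting points, not prescriptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    # Let adaptive query execution coalesce small shuffle partitions.
    .config("spark.sql.adaptive.enabled", "true")
    # Split skewed join partitions automatically.
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    # Baseline shuffle parallelism; tune from stage-level job metrics.
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)
```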