

Saransh Inc
Data Architect - PySpark
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Architect - PySpark; the contract length and pay rate are unspecified. The position calls for 14+ years in IT, expert-level PySpark, Kafka, and Snowflake, and experience with AWS in hybrid environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 11, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Reading, PA
-
🧠 - Skills detailed
#Tableau #Python #MDM (Master Data Management) #Data Warehouse #Scrum #QlikView #Data Framework #Data Ingestion #Batch #PySpark #Agile #Spark (Apache Spark) #ERWin #Data Mart #Data Architecture #Data Engineering #Scala #ETL (Extract, Transform, Load) #Talend #Data Lake #Metadata #Data Governance #SQL (Structured Query Language) #AWS (Amazon Web Services) #Data Pipeline #Kafka (Apache Kafka) #Data Strategy #Cloud #NoSQL #Qlik #Strategy #Snowflake
Role description
Job Summary
Seeking a Principal Data Engineer with strong hands-on expertise in PySpark, Kafka, and Snowflake to design, build, and lead large-scale enterprise data platforms. The role involves driving data engineering architecture, delivering high-volume data solutions, and acting as a senior technical lead across hybrid (on-premise and cloud, preferably AWS) environments.
Key Responsibilities
• Design and develop scalable batch and streaming data pipelines using PySpark, Kafka, and Snowflake (see the sketch after this list).
• Define and implement data & analytics architecture across on-premise and cloud platforms.
• Build and manage data lakes, data warehouses, and data marts.
• Lead data ingestion, integration, transformation, and orchestration processes.
• Implement data governance, lineage, metadata, MDM, and reference data frameworks.
• Design and validate data models using Erwin or ER/Studio for SQL and NoSQL systems.
• Act as primary technical lead for critical programs and integrations.
• Collaborate with Agile teams and senior stakeholders to align data strategy with business goals.
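To make the streaming responsibility concrete, here is a minimal illustrative sketch of the kind of pipeline this role describes: a PySpark structured-streaming job that reads events from Kafka and lands micro-batches in Snowflake via the Snowflake Spark connector. This is not the employer's actual codebase; the broker address, topic name, event schema, target table, checkpoint path, and Snowflake connection options are all hypothetical placeholders, and authentication is omitted.

from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-to-snowflake-sketch").getOrCreate()

# Hypothetical schema for the JSON payload on the Kafka topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw stream from Kafka and parse the JSON value column.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Snowflake connection options -- all values are placeholders; credentials
# (e.g. sfUser/sfPassword or key-pair auth) are deliberately left out.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "RAW",
    "sfWarehouse": "LOAD_WH",
}

def write_to_snowflake(batch_df, batch_id):
    # The Snowflake Spark connector performs batch writes, so the streaming
    # sink is wired through foreachBatch: each micro-batch is appended.
    (batch_df.write
        .format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .option("dbtable", "EVENTS")  # placeholder target table
        .mode("append")
        .save())

query = (
    events.writeStream
    .foreachBatch(write_to_snowflake)
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
    .start()
)
query.awaitTermination()

The foreachBatch pattern is used because the Snowflake Spark connector writes in batches; each micro-batch is appended to the target table, while the checkpoint location gives the stream restart/recovery semantics.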
Required Skills
• 14+ years in IT with 12+ years of hands-on data engineering experience.
• Expert-level PySpark, Kafka (streaming), and Snowflake.
• Strong experience with Python, SQL, NoSQL, Talend, and SAP BODS (BusinessObjects Data Services).
• Proven delivery of large-scale, high-volume data platforms.
• Experience with hybrid cloud architectures (AWS preferred).
• 7+ years working in Agile/Scrum environments.
Nice to Have
• Experience with QlikView, Qlik Sense, Tableau.
• Enterprise data governance and analytics platform exposure.





