Snowflake Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Snowflake Data Engineer in Richmond, Virginia, with a contract length of "unknown" and a pay rate of "unknown." It requires 5-10 years of data engineering experience, strong programming skills in PySpark, Scala, and Python, and familiarity with cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
September 5, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Richmond, VA
🧠 - Skills detailed
#Data Pipeline #Computer Science #Datasets #Python #Data Science #Azure #Agile #Kafka (Apache Kafka) #GCP (Google Cloud Platform) #Spark (Apache Spark) #Consulting #Snowflake #Data Engineering #SQL (Structured Query Language) #PySpark #Hadoop #Big Data #Programming #Cloud #ETL (Extract, Transform, Load) #S3 (Amazon Simple Storage Service) #Scala #Data Ingestion #Version Control #GIT #Kubernetes #AWS (Amazon Web Services) #Data Governance #Distributed Computing #Docker #Code Reviews #Security #Data Modeling #Data Processing #Data Quality #Databricks #Scrum
Role description
Who We Are
Artmac Soft is a technology consulting and service-oriented IT company dedicated to providing innovative technology solutions and services to customers.

Job Description
Job Title: Snowflake Data Engineer
Job Type: W2
Experience: 5-10 Years
Location: Richmond, Virginia

Requirements
• A minimum of 5 years of experience in data engineering.
• Strong programming experience with PySpark, Scala, and Python.
• Hands-on experience in Spark-based data processing on large datasets.
• Experience with distributed computing frameworks (e.g., Hadoop, Spark).
• Solid understanding of data warehousing concepts, data modeling, and ETL development.
• Experience with SQL and performance tuning for large-scale datasets.
• Familiarity with cloud platforms (AWS, Azure, GCP) and services such as S3, EMR, Databricks, or equivalent.
• Good knowledge of version control systems like Git and of CI/CD pipelines.
• Knowledge of Docker, Kubernetes, or similar container technologies.
• Experience working in Agile/Scrum environments.

Responsibilities
• Design, develop, and maintain scalable ETL/ELT data pipelines using PySpark, Scala, and Python.
• Work with big data technologies and frameworks such as Hadoop, Spark, Hive, and Kafka.
• Develop, deploy, and maintain efficient data ingestion and transformation logic for structured and unstructured data.
• Optimize the performance of large-scale data processing applications.
• Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
• Ensure data quality, data governance, and security standards are adhered to.
• Participate in code reviews and provide constructive feedback.
• Monitor pipeline health and performance, and proactively address issues.

Education
• Bachelor's degree in Computer Science or a related field.
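For illustration, here is a minimal sketch of the kind of PySpark-to-Snowflake ETL pipeline the responsibilities describe, assuming the Snowflake Spark connector is available and using placeholder S3 paths, credentials, and table names (none of these come from the posting itself):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative connection options -- every value here is a placeholder.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "TRANSFORM_WH",
}

spark = SparkSession.builder.appName("orders-daily-load").getOrCreate()

# Ingest raw, semi-structured events from S3 (hypothetical bucket and path).
raw = spark.read.json("s3://example-bucket/raw/orders/2025-09-05/")

# Basic cleansing and transformation: drop malformed rows, normalize the
# timestamp, and derive a revenue column.
orders = (
    raw.dropna(subset=["order_id", "customer_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
)

# Load the curated dataset into Snowflake via the Spark connector
# (hypothetical target table).
(orders.write
       .format("net.snowflake.spark.snowflake")
       .options(**sf_options)
       .option("dbtable", "ORDERS_CURATED")
       .mode("append")
       .save())
```

In practice a pipeline like this would also handle schema enforcement, data-quality checks, and monitoring, which the responsibilities above call out separately.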