American Unit, Inc

Data Engineer - Snowflake + PySpark - Only W2

⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a 12-month W2 contract for a Data Engineer, fully remote with a preference for candidates based in Atlanta, Dallas, or Miami. It requires 4+ years of experience; strong skills in Python, PySpark, SQL, and Snowflake; and a relevant Master's degree or Azure certifications.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
November 20, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Azure Data Factory #Programming #Documentation #Snowflake #Scala #"ETL (Extract, Transform, Load)" #PySpark #Computer Science #Python #Azure #Data Processing #ADF (Azure Data Factory) #API (Application Programming Interface) #Data Engineering #Statistics #Azure cloud #SQL (Structured Query Language) #DevOps #Airflow #Datasets #Spark (Apache Spark) #Agile #AWS Glue #Redshift #Snowpark #SQL Queries #Cloud #Data Pipeline #AWS (Amazon Web Services)
Role description
Data Engineer
Location: Remote (strong preference for candidates based in Atlanta, Dallas, or Miami)
Duration: 12 months, with eligibility for full-time conversion at 6 months

About the Role
We're looking for a Data Engineer to join a multidisciplinary Agile team supporting large-scale data initiatives for Verizon. This role focuses on designing, building, and maintaining the data systems and pipelines that enable advanced analytics and data-driven decision-making across the organization.

What You'll Do
• Partner with software engineers, business stakeholders, and subject matter experts to translate requirements into scalable data solutions.
• Develop, implement, and deploy ETL pipelines and workflows.
• Preprocess and analyze large datasets to uncover meaningful insights.
• Validate, refine, and optimize data models for performance and reliability.
• Monitor and maintain data pipelines in production, identifying improvements and refining workflows.
• Document development processes, workflows, and best practices to support team knowledge sharing.

What We're Looking For

Education & Experience
• Master's degree in Computer Science, Engineering, Statistics, or a related field, OR associate-level Azure certifications paired with strong hands-on experience.
• Minimum of 4 years of data engineering experience, ideally within large or enterprise environments.

Technical Skills
• Strong programming proficiency in Python, PySpark, and SQL.
• Experience building and optimizing ETL workflows using tools such as Spark, Snowflake, Airflow, Azure Data Factory, AWS Glue, or Redshift.
• Ability to craft and optimize complex SQL queries and stored procedures.
• Experience developing and maintaining scalable, high-performing data models.
• Hands-on expertise with Snowflake, including Snowpark for data processing.
• Exposure to API integrations to support data workflows.
• Experience implementing CI/CD pipelines through DevOps platforms.
• Solid understanding of Azure cloud infrastructure and services.

Ideal Candidate Profile
• Strong ETL developer with hands-on Snowflake + PySpark experience (see the sketch below for the kind of workflow involved).
• Skilled in building production-grade, scalable data pipelines.
• Comfortable collaborating across technical and business teams in an Agile environment.
• Detail-oriented with strong documentation and communication skills.
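For candidates gauging fit, here is a minimal sketch of the Snowflake + PySpark ETL pattern the role emphasizes: extract raw data with PySpark, apply a transform, and load the result into Snowflake via the Snowflake Spark connector. The table names, file path, and connection options are illustrative placeholders, not details from this posting.

```python
# Illustrative only: minimal PySpark -> Snowflake load.
# Assumes the Snowflake Spark connector and JDBC driver are on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Hypothetical connection options; real values come from your Snowflake account.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "COMPUTE_WH",
}

# Extract: read raw CSV data (path is a placeholder).
raw = spark.read.option("header", True).csv("s3://bucket/raw/orders.csv")

# Transform: a trivial cleanup step standing in for real business logic.
cleaned = raw.dropDuplicates(["order_id"]).na.drop(subset=["order_id"])

# Load: write the result to a Snowflake table.
(
    cleaned.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS_CLEAN")
    .mode("overwrite")
    .save()
)
```

In practice a pipeline like this would be scheduled and monitored with an orchestrator such as Airflow or Azure Data Factory, and secrets would be supplied by the platform rather than hard-coded.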