Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 9-month W2 contract, working remotely during MST hours. Key skills include PySpark, Python, and SQL, along with cloud experience (Azure). Healthcare domain experience is preferred. Strong data engineering and architectural capabilities are essential.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
616
πŸ—“οΈ - Date discovered
September 30, 2025
πŸ•’ - Project duration
More than 6 months
🏝️ - Location type
Remote
πŸ“„ - Contract type
W2 Contractor
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
United States
🧠 - Skills detailed
#ADF (Azure Data Factory) #Snowflake #Azure Blob Storage #Security #Data Quality #Computer Science #Data Architecture #Data Pipeline #Storage #Scala #Azure Data Factory #Data Engineering #PySpark #Data Encryption #Python #SQL (Structured Query Language) #Programming #Spark (Apache Spark) #Documentation #Version Control #Data Integration #Database Design #Azure #Migration #Compliance #GIT #Data Modeling #Cloud
Role description
Data Engineer
Remote - MST Hours
9-month contract (W2)
• Please apply only if you can work on a W2 contract basis
• Must be able to work Mountain Standard Time hours

Position Overview:
Our healthcare client is seeking a seasoned Data Engineer with strong experience in architecture and programming, particularly using PySpark. This role is a backfill for a high-performing consultant and will focus on building scalable data solutions within a healthcare context. While Snowflake experience is valuable, the emphasis is on engineering and architectural capabilities.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
• 5–10 years of IT experience, with a strong background in data engineering and architecture.
• Healthcare domain experience is highly preferred.
• Proficiency in PySpark and Python programming.
• Strong SQL skills and experience with data modeling and database design.
• Experience with cloud platforms, especially Azure.
• Familiarity with Snowflake, Azure Data Factory, Azure Blob Storage, and CI/CD processes.
• Experience with data integration tools and technologies.
• Ability to reverse engineer existing processes and convert them into efficient data pipelines.
• Strong communication and collaboration skills.

Responsibilities:
• Architect and develop scalable data pipelines using PySpark and other modern data engineering tools.
• Collaborate with data architects to design robust data models and schemas.
• Optimize data integration processes for performance and reliability.
• Lead efforts in cloud migration and modernization, especially from on-prem to cloud platforms.
• Write efficient, maintainable code in Python, PySpark, and SQL.
• Implement data quality checks and validation processes.
• Ensure security and compliance through role-based access control (RBAC) and data encryption.
• Maintain technical documentation and version control using Git.
• Work closely with cross-functional teams to understand data requirements and deliver solutions.

EEO Statement:
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty, or status as a covered veteran, in accordance with applicable federal, state, and local laws.
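
For candidates gauging the day-to-day work, here is a minimal sketch of the kind of PySpark pipeline with data quality checks that the responsibilities above describe. All account, path, table, and column names are hypothetical illustrations, not details from the posting:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch only: storage account, container, and column names
# below are hypothetical, not taken from the posting.
spark = SparkSession.builder.appName("healthcare_pipeline_sketch").getOrCreate()

# Ingest raw claims data from Azure Blob Storage.
raw = spark.read.parquet("wasbs://raw@exampleaccount.blob.core.windows.net/claims/")

# Data quality checks: require key fields and non-negative billed amounts.
required = ["claim_id", "member_id", "service_date"]
clean = raw.dropna(subset=required).filter(F.col("billed_amount") >= 0)
print(f"rows rejected by quality checks: {raw.count() - clean.count()}")

# Compliance step: pseudonymize the member identifier before the curated
# zone (a real deployment would pair this with RBAC and encryption at rest).
curated = clean.withColumn("member_id", F.sha2(F.col("member_id").cast("string"), 256))

# Write curated output partitioned for downstream loads (e.g., into Snowflake).
curated.write.mode("overwrite").partitionBy("service_date").parquet(
    "wasbs://curated@exampleaccount.blob.core.windows.net/claims/"
)

In practice, a job like this would be orchestrated through Azure Data Factory with CI/CD and Git-based version control, matching the tooling named in the qualifications.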