Element Technologies Inc

Alteryx Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an "Alteryx Data Engineer" in Chicago, IL, onsite 5 days per week. Contract length and pay rate are unspecified. Requires 12+ years of experience and strong skills in Dataiku, SQL, Python, PySpark, and AWS.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 19, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Data Science #Lambda (AWS Lambda) #Scala #Data Processing #Data Quality #Data Engineering #Dataiku #S3 (Amazon Simple Storage Service) #Python #Compliance #Data Pipeline #Cloud #PySpark #Documentation #Redshift #ETL (Extract, Transform, Load) #Spark (Apache Spark) #AWS (Amazon Web Services) #SQL (Structured Query Language) #Monitoring #Alteryx
Role description
Data Engineer – Alteryx, Dataiku, PySpark & AWS
Chicago, IL | Onsite - 5 days
Experience: minimum 12 years

Role Overview
The Data Engineer will be responsible for designing, developing and maintaining scalable, high-performance data solutions using the Dataiku platform and modern big-data technologies. This role requires strong hands-on experience with SQL, Python, and PySpark, along with a solid understanding of AWS cloud services. You will collaborate closely with cross-functional teams to translate business requirements into robust data pipelines and ensure data quality, reliability and accessibility across the organization.

Key Responsibilities
- Design, build and deploy scalable data pipelines using the Dataiku platform, ensuring efficiency, reliability and long-term maintainability.
- Develop and optimize data processing workflows using SQL, Python and PySpark, including large-scale transformations and distributed data processing (see the illustrative sketch at the end of this description).
- Leverage AWS services (e.g., S3, Lambda, Glue, EMR, Redshift) to build cloud-native data solutions and support pipeline orchestration.
- Collaborate with cross-functional teams, including data scientists, analysts and business stakeholders, to gather requirements and translate them into technical specifications.
- Prepare clear and comprehensive documentation for data pipelines, workflows and best practices to support operational excellence.
- Implement data quality checks, validation rules and monitoring to ensure accuracy, consistency and compliance with governance standards.
- Communicate effectively with both technical and non-technical audiences to explain data concepts, project updates, and solution designs.

Required Skills & Qualifications
- Strong hands-on experience with Dataiku for building and managing end-to-end data workflows.
- Proficiency in SQL and Python with the ability to write clean, efficient and reusable code.
- Solid experience with PySpark for distributed data processing and large-scale transformations.
- Working knowledge of AWS cloud services and their application in data engineering.
- Strong communication and collaboration skills, with the ability to work across teams and clarify requirements.
- Ability to create and maintain high-quality technical documentation.
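To illustrate the kind of PySpark and AWS work described above, here is a minimal sketch: reading a hypothetical orders dataset from S3, applying a daily aggregation, running a simple data quality check, and writing the curated result back. The bucket paths, column names and thresholds are assumptions for the example only, not details from this posting.

```python
# Minimal PySpark pipeline sketch (illustrative; paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_pipeline").getOrCreate()

# Read raw order events from a hypothetical S3 location.
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: keep completed orders and aggregate revenue per customer per day.
daily = (
    raw.filter(F.col("order_status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("customer_id", "order_date")
       .agg(F.sum("amount").alias("daily_revenue"),
            F.count("*").alias("order_count"))
)

# Data quality check: fail the run if any key fields are null.
null_keys = daily.filter(
    F.col("customer_id").isNull() | F.col("order_date").isNull()
).count()
if null_keys > 0:
    raise ValueError(f"Data quality check failed: {null_keys} rows with null keys")

# Write the curated output, partitioned by date, to a hypothetical target path.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders_daily/"
)
```

In practice a workflow like this would typically run as a Dataiku recipe or an AWS Glue/EMR job, with the data quality check and monitoring wired into the orchestration layer rather than a hard failure in the script.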