Saxon Global

Senior Data Engineer & Test

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer & Test in Phoenix, AZ, with a contract length of "unknown" and a pay rate of "unknown." It requires 10+ years of IT experience, 5+ years in data engineering with Python and PySpark, and expertise in Airflow and CI/CD processes.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 9, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#Data Pipeline #Scala #Deployment #Spark (Apache Spark) #Batch #Data Architecture #Docker #PySpark #Python #GIT #Automated Testing #Containers #Documentation #Data Ingestion #SQL (Structured Query Language) #Cloud #Airflow #Version Control #Programming #Linux #Code Reviews #Data Engineering #Computer Science #Unix #Agile #GitLab #GitHub #GCP (Google Cloud Platform)
Role description
The Senior Data Engineer & Test in Phoenix, AZ 85029 will play a pivotal role in delivering major data engineering initiatives within the Data & Advanced Analytics space. This position requires hands-on expertise in building, deploying, and maintaining robust data pipelines using Python, PySpark, and Airflow, as well as designing and implementing CI/CD processes for data engineering projects. Illustrative sketches of the kind of pipeline, orchestration, and test code the role involves follow the requirements lists below.

Key Responsibilities
1. Data Engineering: Design, develop, and optimize scalable data pipelines using Python and PySpark for batch and streaming workloads.
2. Workflow Orchestration: Build, schedule, and monitor complex workflows using Airflow, ensuring reliability and maintainability.
3. CI/CD Pipeline Development: Architect and implement CI/CD pipelines for data engineering projects using GitHub, Docker, and cloud-native solutions.
4. Testing & Quality: Apply test-driven development (TDD) practices and automate unit/integration tests for data pipelines.
5. Secure Development: Implement secure coding best practices and design patterns throughout the development lifecycle.
6. Collaboration: Work closely with Data Architects, QA teams, and business stakeholders to translate requirements into technical solutions.
7. Documentation: Create and maintain technical documentation, including process/data flow diagrams and system design artifacts.
8. Mentorship: Lead and mentor junior engineers, providing guidance on coding, testing, and deployment best practices.
9. Troubleshooting: Analyze and resolve technical issues across the data stack, including pipeline failures and performance bottlenecks.
10. Cross-Team Knowledge Sharing: Cross-train team members outside the project team (e.g., operations support) for full knowledge coverage.

General Requirements
Includes all of the above skills, plus the following:
• Minimum of 10+ years of overall IT experience
• Experienced in waterfall, iterative, and agile methodologies

Technical Requirements
1. Hands-on Data Engineering: Minimum 5+ years of practical experience building production-grade data pipelines using Python and PySpark.
2. Airflow Expertise: Proven track record of designing, deploying, and managing Airflow DAGs in enterprise environments.
3. CI/CD for Data Projects: Ability to build and maintain CI/CD pipelines for data engineering workflows, including automated testing and deployment.
4. Cloud & Containers: Experience with containerization (Docker) and cloud platforms (GCP) for data engineering workloads; appreciation for twelve-factor design principles.
5. Python Fluency: Ability to write object-oriented Python code, manage dependencies, and follow industry best practices.
6. Version Control: Proficiency with Git for source code management and collaboration (commits, branching, merging, GitHub/GitLab workflows).
7. Unix/Linux: Strong command-line skills in Unix-like environments.
8. SQL: Solid understanding of SQL for data ingestion and analysis.
9. Collaborative Development: Comfortable with code reviews, pair programming, and using remote collaboration tools effectively.
10. Engineering Mindset: Writes code with an eye for maintainability and testability; excited to build production-grade software.
11. Education: Bachelor's or graduate degree in Computer Science, Data Analytics, or a related field, or equivalent work experience.
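As a rough sketch of the kind of production-grade PySpark batch pipeline this role builds (every table, column, and path name below is hypothetical, not taken from the posting):

```python
# Sketch only: a minimal PySpark batch job of the kind described above.
# Table, column, and path names are hypothetical.
from pyspark.sql import SparkSession, functions as F

def daily_order_totals(spark, source_path, target_path):
    """Aggregate raw order events into one row of totals per day."""
    orders = spark.read.parquet(source_path)
    totals = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date")
        .agg(
            F.sum("order_total").alias("total_revenue"),
            F.count("*").alias("order_count"),
        )
    )
    totals.write.mode("overwrite").parquet(target_path)

if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily_order_totals").getOrCreate()
    daily_order_totals(spark, "s3://raw/orders/", "s3://curated/daily_order_totals/")
    spark.stop()
```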
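A similarly minimal Airflow DAG sketch showing how such a job might be scheduled; the DAG id, schedule, and task are assumptions, and a real deployment would submit the Spark job rather than print:

```python
# Sketch only: a minimal Airflow 2.x DAG (schedule= requires 2.4+).
# The DAG id, schedule, and task are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_daily_order_totals(**context):
    # In a real deployment this would submit the PySpark job,
    # e.g. via SparkSubmitOperator or a managed-cluster operator.
    print(f"Running daily_order_totals for {context['ds']}")

with DAG(
    dag_id="daily_order_totals",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="aggregate_orders",
        python_callable=run_daily_order_totals,
    )
```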
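Finally, a pytest-style unit test in the TDD spirit the posting asks for; it assumes the hypothetical daily_order_totals function above lives in a module named daily_order_totals_job and runs against a local SparkSession in a CI test stage:

```python
# Sketch only: a pytest unit test for the hypothetical transform above.
import pytest
from pyspark.sql import SparkSession

from daily_order_totals_job import daily_order_totals  # hypothetical module name

@pytest.fixture(scope="session")
def spark():
    # Local single-threaded SparkSession for fast, isolated tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_daily_totals_aggregates_by_date(spark, tmp_path):
    source = str(tmp_path / "orders")
    target = str(tmp_path / "totals")
    spark.createDataFrame(
        [("2025-12-01 10:00:00", 10.0), ("2025-12-01 11:00:00", 5.0)],
        ["order_ts", "order_total"],
    ).write.parquet(source)

    daily_order_totals(spark, source, target)

    rows = spark.read.parquet(target).collect()
    assert len(rows) == 1
    assert rows[0]["total_revenue"] == 15.0
    assert rows[0]["order_count"] == 2
```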