Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Phoenix, AZ, lasting 6-12 months, with an unspecified pay rate. Key skills include master proficiency in Google BigQuery and data pipelines, along with expert proficiency in PySpark and project management.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 12, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Data Quality #Data Processing #Spark (Apache Spark) #Data Access #Data Pipeline #BigQuery #PySpark #Data Engineering #Project Management
Role description
Title: Data Engineer
Location: Phoenix, AZ
Duration: 6-12 Months
Responsibilities:
As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your day-to-day activities will include creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You are expected to be a subject matter expert, to collaborate with and manage the team so it performs effectively, to be responsible for team decisions, to engage with multiple teams, and to contribute to key decisions. Additionally, you are expected to provide solutions to problems that span multiple teams.
Master proficiency in Google BigQuery is required. Expert proficiency in PySpark is recommended. Master proficiency in Data Pipelines, Expert proficiency in Data Processing, and Expert proficiency in Program/Project Management are suggested.
• Develop innovative data solutions that enhance data accessibility and usability.
• Collaborate with cross-functional teams to identify and implement data-driven strategies.
• Monitor and optimize data pipeline performance to ensure efficiency and reliability.
• Lead initiatives to improve data quality and integrity across all systems.
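A minimal sketch of the kind of PySpark-to-BigQuery ETL pipeline this role describes, assuming a Spark environment with the spark-bigquery connector on the classpath; the bucket, path, project, dataset, and table names below are placeholders, not real resources:

```python
# Illustrative sketch only: extract raw records, apply basic data-quality
# rules, and load the result into Google BigQuery.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl-pipeline")  # hypothetical job name
    .getOrCreate()
)

# Extract: read raw CSV records (path is a placeholder).
raw = spark.read.option("header", True).csv("gs://example-bucket/raw/orders/*.csv")

# Transform: drop rows missing the key, cast amounts to numeric,
# and deduplicate on the order id.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# Load: write to BigQuery via the spark-bigquery connector
# (table and temporary bucket are placeholders).
(
    clean.write.format("bigquery")
         .option("table", "example_project.analytics.orders")
         .option("temporaryGcsBucket", "example-temp-bucket")
         .mode("append")
         .save()
)
```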