Data Engineer (Asset Management) ($50/hr)

⭐ - Featured Role | Apply direct with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date discovered
September 9, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Irvine, CA
-
🧠 - Skills detailed
#SQL (Structured Query Language) #AWS (Amazon Web Services) #Data Warehouse #Spark (Apache Spark) #Scala #Python #Big Data #Databricks #Cloud #Data Pipeline #Airflow #ETL (Extract, Transform, Load) #Version Control #PySpark #Redshift #Computer Science #Data Engineering #GitHub #BitBucket #Amazon Redshift
Role description
We are seeking an experienced Data Engineer to design, develop, and maintain scalable data pipelines and platforms. The ideal candidate will have a strong background in big data technologies, data warehousing concepts, and cloud platforms, with proven experience delivering enterprise-grade data solutions. This role requires strong technical expertise along with excellent communication and collaboration skills, especially in a global team environment.
Key Responsibilities
• Design, build, and optimize data pipelines using PySpark, Hive, SQL, and Python (illustrative sketches follow at the end of this listing).
• Develop and maintain scalable solutions on AWS Cloud with exposure to Databricks.
• Implement and manage workflows and scheduling using Airflow (or similar orchestration tools).
• Work with MPP data warehouses such as SQL Data Warehouse (SQLDW) or Amazon Redshift.
• Apply strong data warehousing concepts when building and optimizing ETL/ELT solutions.
• Collaborate with cross-functional teams to gather requirements and deliver high-quality data solutions.
• Use version control tools such as GitHub or Bitbucket for collaborative development.
• Partner with onshore and offshore teams to ensure seamless coordination and delivery.
• Engage with clients and stakeholders to provide technical guidance and support.
Must-Have Skills
• 5+ years of professional experience in data engineering.
• Strong hands-on expertise with PySpark, Hive, SQL, and Python.
• Experience with AWS Cloud and Databricks.
• Proficiency with Airflow or similar workflow orchestration tools.
• Exposure to MPP data warehouses (SQLDW, Redshift, or equivalent).
• Solid understanding of data warehousing concepts and ETL best practices.
• Experience with version control tools (GitHub, Bitbucket).
• Strong communication skills with proven experience in client engagement.
• Ability to work effectively with offshore teams and distributed stakeholders.
Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Track record of delivering data solutions in enterprise or cloud-based environments.
• Excellent problem-solving and analytical skills with attention to detail.
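To give a feel for the pipeline work described above, here is a minimal PySpark ETL sketch. It is illustrative only: the Hive table, column names, and S3 output path are hypothetical placeholders, not details from this posting, and a real pipeline would follow the client's actual schemas and environment.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch only: table, column, and path names are hypothetical placeholders.
spark = (
    SparkSession.builder
    .appName("daily_positions_etl")
    .enableHiveSupport()  # allow reading Hive-managed tables
    .getOrCreate()
)

# Extract: read raw holdings from a Hive table
raw = spark.table("raw.portfolio_holdings")

# Transform: basic cleansing plus a daily per-portfolio aggregate
daily_positions = (
    raw.filter(F.col("quantity").isNotNull())
       .withColumn("market_value", F.col("quantity") * F.col("price"))
       .groupBy("as_of_date", "portfolio_id")
       .agg(F.sum("market_value").alias("total_market_value"))
)

# Load: write partitioned Parquet for downstream warehouse loads (e.g. a Redshift COPY)
(
    daily_positions.write
    .mode("overwrite")
    .partitionBy("as_of_date")
    .parquet("s3://example-bucket/curated/daily_positions/")
)
```

And a minimal sketch of how such a job might be scheduled with Airflow, assuming Airflow 2.x and the BashOperator; the DAG id, cron expression, and script paths are again placeholders rather than requirements of the role.

```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal sketch only: DAG id, schedule, and script paths are hypothetical.
default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="daily_positions_pipeline",
    default_args=default_args,
    start_date=datetime(2025, 1, 1),
    schedule_interval="0 6 * * *",  # daily at 06:00 UTC
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_pyspark_etl",
        bash_command="spark-submit /opt/jobs/daily_positions_etl.py",
    )
    load_warehouse = BashOperator(
        task_id="load_to_redshift",
        bash_command="python /opt/jobs/copy_to_redshift.py",
    )
    run_etl >> load_warehouse  # warehouse load runs only after the ETL succeeds
```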