Data Engineer (GCP, SQL, BI Reporting, PySpark)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a 6-month W2 contract for a Data Engineer (GCP, SQL, BI Reporting, PySpark) in Sunnyvale, CA, paying $55-$62/hr. It requires 7+ years of experience, a big tech background, and proficiency in SQL, Python, and data pipeline development.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
496
πŸ—“οΈ - Date discovered
June 7, 2025
πŸ•’ - Project duration
More than 6 months
🏝️ - Location type
Hybrid
πŸ“„ - Contract type
W2 Contractor
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
San Jose, CA
🧠 - Skills detailed
#BI (Business Intelligence) #Data Analysis #PostgreSQL #AWS (Amazon Web Services) #Hadoop #Datasets #Database Management #Data Quality #SQL (Structured Query Language) #Programming #Databases #Security #Azure #Snowflake #Documentation #Big Data #GCP (Google Cloud Platform) #Scala #Data Engineering #Spark (Apache Spark) #Python #Data Security #Data Science #PySpark #Data Integration #Java #MySQL #Redshift #ETL (Extract, Transform, Load) #Computer Science #Data Pipeline #Cloud
Role description
FinTech company | W2 contract, 6 months with possible extension
Data Engineer - Sunnyvale, CA
Pay Rate: $55/hr-$62/hr

• 7+ years of experience as a Data Analyst or Data Engineer
• Must have worked with GCP, SQL, BI Reporting, and PySpark
• Big tech background

Job Description:
We are seeking a talented and motivated Data Engineer to join our dynamic team in San Jose, CA. As a Data Engineer, you will play a crucial role in designing, building, and maintaining our data infrastructure to support our innovative financial technology solutions.

Key Responsibilities:
• Data Pipeline Development: Design, develop, and maintain scalable data pipelines to process and analyze large datasets (see the illustrative PySpark sketch after this description).
• Data Integration: Integrate data from various sources, ensuring data quality and consistency.
• Database Management: Optimize and manage databases, ensuring high performance and availability.
• ETL Processes: Develop and maintain ETL (Extract, Transform, Load) processes to support data warehousing and analytics.
• Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
• Data Security: Implement and maintain data security measures to protect sensitive information.
• Documentation: Create and maintain comprehensive documentation for data processes and systems.

Qualifications:
• Education: Bachelor's degree in Computer Science, Engineering, or a related field.
• Experience: 7+ years of experience in data engineering or a related role.
• Technical Skills:
  • Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
  • Strong programming skills in Python or Java.
  • Experience with big data technologies (e.g., Hadoop, Spark).
  • Familiarity with cloud platforms (e.g., AWS, GCP, Azure).
  • Knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift).
• Soft Skills:
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration abilities.
  • Ability to work in a fast-paced, dynamic environment.

Benefits:
• Comprehensive health, dental, and vision insurance.
• 401(k) plan with company match.
• Flexible work hours and remote work options.
• Professional development opportunities.