IntraEdge

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 8+ years of experience in data management, specializing in Google BigQuery, Cloud Storage, Dataflow, Python, and SQL. Contract length and pay rate are unspecified; location is remote.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Storage #Apache Beam #"ETL (Extract, Transform, Load)" #Cloud #Data Integrity #GCP (Google Cloud Platform) #Google Cloud Storage #Airflow #Dataflow #Data Management #Security #Data Security #SQL (Structured Query Language) #BigQuery #Python #Automation #Data Pipeline #Visualization #Data Architecture #Data Governance #Data Analysis #Scala #Data Extraction #Data Quality #Clustering #Data Engineering #Data Processing #Agile
Role description
The Data Engineer will work closely with cross-functional teams to ensure data integrity, reliability, and scalability. Expertise in Google BigQuery, Google Cloud Storage, Dataflow, Cloud Composer, Python, and SQL will be crucial in developing effective data solutions that support our fraud data analytics and reporting efforts.
• 8+ years of hands-on experience in data management: gathering data from multiple sources, consolidating it into a single centralized location, and transforming it with business logic into a consumable form for visualization and data analysis.
• Strong expertise in Google BigQuery, Google Cloud Storage, Dataflow, Pub/Sub, Cloud Composer, and related Google Cloud Platform (GCP) services.
• Proficiency in Python and SQL for data processing and automation.
• Experience with ETL processes and data pipeline design.
• Design, build, and maintain scalable data pipelines using Google Cloud Platform tools such as BigQuery, Cloud Storage, Dataflow (Apache Beam), Cloud Composer (Airflow), and Pub/Sub (see the Beam and Airflow sketches after this list).
• Write high-performance, production-grade Python and SQL, optimizing queries to support extract, transform, and load (ETL) processes.
• Implement complex data models in BigQuery, utilizing partitioning, clustering, and materialized views for optimal performance (see the BigQuery sketch after this list).
• Collaborate with cross-functional teams, including business customers and Subject Matter Experts, to understand data requirements and deliver effective solutions.
• Implement best practices for data quality, data governance, and data security.
• Monitor and troubleshoot data pipeline issues, ensuring high availability and performance.
• Contribute to data architecture decisions and provide recommendations for improving the data pipeline.
• Stay up to date with emerging trends and technologies in cloud-based data engineering and cybersecurity.
• Exceptional communication skills, including the ability to gather relevant data and information, listen actively, dialogue freely, and verbalize ideas effectively.
• Ability to work in an Agile environment, delivering incremental value to customers by managing and prioritizing tasks.
• Proactively lead investigation and resolution efforts when data issues are identified, taking ownership to resolve them in a timely manner.
• Ability to interpret and document processes and procedures for producing metrics.
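For illustration, a minimal Dataflow (Apache Beam) sketch of the kind of streaming pipeline described above: it reads fraud events from Pub/Sub and appends them to BigQuery. The project, subscription, and table names (my-project, fraud-events, fraud.events) are hypothetical placeholders, not part of the listing.

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Streaming pipeline: Pub/Sub JSON messages -> parsed rows -> BigQuery append.
options = PipelineOptions(streaming=True, project="my-project")  # hypothetical project

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/fraud-events")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:fraud.events",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    )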
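The partitioning, clustering, and materialized-view techniques called out above can be sketched with the google-cloud-bigquery client. The fraud.transactions schema and fraud.daily_totals view below are illustrative assumptions only.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Partition on the event date and cluster on a frequent filter key so that
# typical fraud-analytics queries prune partitions and scan fewer blocks.
client.query(
    """
    CREATE TABLE IF NOT EXISTS fraud.transactions (
      transaction_id STRING,
      account_id STRING,
      amount NUMERIC,
      event_ts TIMESTAMP
    )
    PARTITION BY DATE(event_ts)
    CLUSTER BY account_id
    """
).result()

# A materialized view keeps a daily aggregate fresh automatically, so
# reporting queries read the precomputed result instead of the base table.
client.query(
    """
    CREATE MATERIALIZED VIEW IF NOT EXISTS fraud.daily_totals AS
    SELECT DATE(event_ts) AS day, account_id, SUM(amount) AS total_amount
    FROM fraud.transactions
    GROUP BY day, account_id
    """
).result()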
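Cloud Composer (Airflow) would typically orchestrate such pipelines on a schedule. A minimal DAG sketch follows, assuming Airflow 2.4+ with the Google provider installed; the DAG id and the placeholder query are hypothetical.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="fraud_daily_load",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run the daily transform-and-load step as a BigQuery job.
    load_daily_totals = BigQueryInsertJobOperator(
        task_id="load_daily_totals",
        configuration={
            "query": {
                "query": "SELECT 1",  # placeholder for the real ETL SQL
                "useLegacySql": False,
            }
        },
    )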