IntraEdge

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 8+ years of experience in data management and expertise in Google BigQuery, Cloud Storage, Dataflow, Python, and SQL. Contract length, pay rate, and work arrangement are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Data Processing #Dataflow #GCP (Google Cloud Platform) #Apache Beam #Data Quality #Data Engineering #Scala #Security #Data Architecture #Storage #Airflow #Google Cloud Storage #Data Management #Data Governance #Clustering #Cloud #Automation #Visualization #Data Analysis #Data Integrity #Agile #Python #Data Extraction #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Security #BigQuery #Data Pipeline
Role description
The Data Engineer will work closely with cross-functional teams to ensure data integrity, reliability, and scalability. Expertise in Google BigQuery, Google Cloud Storage, Dataflow, Cloud Composer, Python, and SQL will be crucial in developing effective data solutions that support our fraud data analytics and reporting efforts.
• 8+ years of hands-on experience with data management: gathering data from multiple sources, consolidating it into a single centralized location, and transforming it with business logic into a form consumable for visualization and data analysis.
• Strong expertise in Google BigQuery, Google Cloud Storage, Dataflow, Pub/Sub, Cloud Composer, and related Google Cloud Platform (GCP) services.
• Proficiency in Python and SQL for data processing and automation.
• Experience with ETL processes and data pipeline design.
• Design, build, and maintain scalable data pipelines using Google Cloud Platform tools such as BigQuery, Cloud Storage, Dataflow (Apache Beam), Cloud Composer (Airflow), and Pub/Sub.
• Write high-performance, production-grade Python and SQL, optimizing queries to support data extraction, transformation, and loading (ETL) processes.
• Implement complex data models in BigQuery, utilizing partitioning, clustering, and materialized views for optimal performance.
• Collaborate with cross-functional teams, including business customers and Subject Matter Experts, to understand data requirements and deliver effective solutions.
• Implement best practices for data quality, data governance, and data security.
• Monitor and troubleshoot data pipeline issues, ensuring high availability and performance.
• Contribute to data architecture decisions and provide recommendations for improving the data pipeline.
• Stay up to date with emerging trends and technologies in cloud-based data engineering and cybersecurity.
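The consolidate-then-transform pattern described above can be sketched in plain Python. The source names, record fields, and the high-risk threshold rule below are hypothetical illustrations, not part of this posting; in the role itself this logic would typically run as BigQuery SQL or a Dataflow (Apache Beam) job rather than in-memory Python.

```python
# Minimal sketch of consolidating multiple sources into one dataset and
# applying business logic so the result is consumable for analysis.
# Source names, fields, and the fraud-flag rule are hypothetical.

def consolidate(*sources):
    """Merge records from several source systems into one list,
    tagging each record with the system it came from."""
    combined = []
    for name, records in sources:
        for rec in records:
            combined.append({**rec, "source": name})
    return combined

def apply_business_logic(records, high_risk_threshold=10_000):
    """Illustrative transform: flag transactions at or above a
    threshold for downstream fraud analytics and reporting."""
    return [
        {**rec, "high_risk": rec["amount"] >= high_risk_threshold}
        for rec in records
    ]

payments = ("payments_db", [{"id": 1, "amount": 12_500}])
cards = ("card_api", [{"id": 2, "amount": 80}])

curated = apply_business_logic(consolidate(payments, cards))
```

The sketch only shows the shape of the logic: one consolidation step, one transform step, and a curated output ready for visualization.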
• Exceptional communication skills, including the ability to gather relevant data and information, actively listen, dialogue freely, and verbalize ideas effectively.
• Ability to work in an Agile environment, delivering incremental value to customers by managing and prioritizing tasks.
• Proactively lead investigation and resolution efforts when data issues are identified, taking ownership to resolve them in a timely manner.
• Ability to interpret and document processes and procedures for producing metrics.
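The data-quality and issue-resolution expectations above can be illustrated with a small validation gate of the kind a pipeline might run before loading records downstream. The rules and field names are hypothetical, not taken from the posting.

```python
# Hypothetical data-quality check: split a batch into valid records and
# flagged issues so failures can be investigated rather than silently
# loaded. Required fields and the sample batch are illustrative.

def validate_records(records, required_fields=("id", "amount")):
    """Return (valid, issues); each issue notes the missing fields."""
    valid, issues = [], []
    for rec in records:
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            issues.append({"record": rec, "missing": missing})
        else:
            valid.append(rec)
    return valid, issues

batch = [{"id": 1, "amount": 42}, {"id": 2, "amount": None}]
valid, issues = validate_records(batch)
```

Surfacing issues as structured output, rather than dropping bad rows, is what makes the "take ownership and resolve in a timely manner" expectation practical.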