

Insight Global
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of $65/hr to $75/hr. It requires 5+ years of experience, advanced SQL and Python skills, and proficiency in Google BigQuery. Remote work is available.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
January 8, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Modeling #BigQuery #Data Lineage #ETL (Extract, Transform, Load) #Storage #Data Warehouse #Cloud #HTML (Hypertext Markup Language) #API (Application Programming Interface) #Python #Data Quality #Snowflake #Strategy #SQL (Structured Query Language) #Data Engineering #Airflow #Monitoring #Security #Documentation #Scala #Deployment #Datasets #Tableau #Clustering
Role description
JOB DESCRIPTION
Insight Global is seeking a hands‑on Data Engineer. This person will work fully remote on East Coast hours for our client in New York City and will be responsible for building and owning the data foundation behind revenue, pricing, promotions, and inventory decisions for a ticketing program. You’ll design the data warehouse and pipelines that convert messy scheduled reports and HTML/CSV files into clean, reliable datasets that power pricing strategy, sales analytics, and inventory visibility across shows and venues. This person will use analytics to turn raw sales signals into automated, trustworthy tables used by Tableau dashboards and decision models.
Responsibilities:
• Architect & implement the data platform: Stand up cloud data warehousing (preferably Google BigQuery) and define storage, partitioning, and modeling standards for sales, promotions, and inventory tables (e.g., star/snowflake schemas).
• Build ingestion & transformation pipelines: Create robust, scheduled ETL/ELT jobs that ingest data from Google Drive/CSV/HTML and other sources; normalize and enrich datasets; and publish curated marts for analytics and pricing (a minimal ingestion sketch follows this list).
• Automate manual processes: Replace ad‑hoc, manual pulls with reliable, monitored pipelines; implement job orchestration, alerting, and data‑quality checks (e.g., freshness, completeness, referential integrity).
• Enable pricing & promo strategy: Provide fast, accurate tables that support dynamic pricing, discounting, and campaign outcomes; surface inventory positions by show/date/section to guide strategy.
• Partner with analytics & business users: Collaborate with revenue leaders and analysts using Tableau/Excel to define SLAs, data contracts, and semantic layers; deliver well‑documented datasets that are easy to consume.
• Productionize & operate: Own deployment, monitoring, and incident response for pipelines; optimize SQL and storage costs in BigQuery; continuously improve performance and reliability.
• Security & governance: Implement access controls, data lineage, and audit trails; establish naming conventions and versioning for transformations.
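As a rough illustration of the ingestion and modeling standards described above (the project, dataset, table, and column names below are hypothetical, not the client's actual schema), a scheduled HTML or CSV report could be normalized and appended to a date‑partitioned, clustered BigQuery fact table along these lines:

import pandas as pd
from google.cloud import bigquery

# Hypothetical destination: a sales fact table partitioned by sale date and
# clustered by show/venue so per-show queries stay cheap.
PROJECT = "ticketing-analytics"  # assumption: illustrative project id
TABLE_ID = f"{PROJECT}.sales.fact_ticket_sales"

client = bigquery.Client(project=PROJECT)

schema = [
    bigquery.SchemaField("sale_date", "DATE"),
    bigquery.SchemaField("show_id", "STRING"),
    bigquery.SchemaField("venue_id", "STRING"),
    bigquery.SchemaField("section", "STRING"),
    bigquery.SchemaField("tickets_sold", "INTEGER"),
    bigquery.SchemaField("gross_revenue", "FLOAT64"),
]

table = bigquery.Table(TABLE_ID, schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="sale_date")
table.clustering_fields = ["show_id", "venue_id"]
client.create_table(table, exists_ok=True)  # no-op if the table already exists

def load_report(path: str) -> None:
    """Parse one scheduled report (HTML or CSV) and append it to the fact table."""
    if path.endswith(".html"):
        raw = pd.read_html(path)[0]  # first table in the HTML report
    else:
        raw = pd.read_csv(path)

    # Normalize column names and types before publishing.
    raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
    raw["sale_date"] = pd.to_datetime(raw["sale_date"]).dt.date

    job = client.load_table_from_dataframe(
        raw,
        TABLE_ID,
        job_config=bigquery.LoadJobConfig(write_disposition="WRITE_APPEND"),
    )
    job.result()  # wait for the load job to finish

In practice a job like this would also carry retries and logging; the point is that every report lands in one curated table instead of an ad‑hoc spreadsheet pull.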
Compensation:
$65/hr to $75/hr. Exact compensation may vary based on several factors, including skills, experience, and education.
Employees in this role will enjoy a comprehensive benefits package starting on day one of employment, including options for medical, dental, and vision insurance. Eligibility to enroll in the 401(k) retirement plan begins after 90 days of employment. Additionally, employees in this role will have access to paid sick leave and other paid time off benefits as required under the applicable law of the worksite location.
REQUIRED SKILLS AND EXPERIENCE
• 5+ years in data engineering building production pipelines and warehouses at scale.
• Advanced SQL (window functions, CTEs, performance tuning) and practical Python (ETL/ELT, parsing HTML/CSV, API/file handling, testing); an illustrative query appears after this list.
• Proven experience with Google BigQuery (or equivalent columnar cloud warehouse) including partitioning, clustering, and cost/performance optimization.
• Experience ingesting non‑API data sources (scheduled reports, HTML/CSV files) and turning them into clean, reliable tables.
• Strong understanding of data modeling (star/snowflake), data quality (validation, reconciliation), and orchestration (e.g., Airflow/Cloud Composer or similar); see the orchestration sketch after this list.
• Ability to translate pricing/inventory business needs into scalable dataset designs; excellent documentation and stakeholder communication.
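To make the window‑function and CTE expectation concrete, here is a minimal example of the kind of query these curated marts would serve, written against the hypothetical fact table sketched above:

from google.cloud import bigquery

client = bigquery.Client()

# Daily gross revenue per show with a trailing 7-day average, the kind of
# curated result a Tableau pricing dashboard would read directly.
ROLLING_REVENUE_SQL = """
WITH daily AS (
  SELECT show_id, sale_date, SUM(gross_revenue) AS revenue
  FROM `ticketing-analytics.sales.fact_ticket_sales`
  GROUP BY show_id, sale_date
)
SELECT
  show_id,
  sale_date,
  revenue,
  AVG(revenue) OVER (
    PARTITION BY show_id
    ORDER BY sale_date
    ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
  ) AS revenue_7d_avg
FROM daily
ORDER BY show_id, sale_date
"""

for row in client.query(ROLLING_REVENUE_SQL).result():
    print(row.show_id, row.sale_date, row.revenue_7d_avg)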
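And as a sketch of the orchestration and data‑quality side (the DAG id, schedule, and freshness rule are assumptions for illustration; Cloud Composer runs standard Airflow 2.x DAGs like this one), the ingestion above could be scheduled with a check that fails loudly when the latest sales never arrive:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery

TABLE_ID = "ticketing-analytics.sales.fact_ticket_sales"  # hypothetical table

def ingest_reports() -> None:
    """Placeholder: fetch the day's scheduled reports and load them (see the ingestion sketch above)."""
    ...

def check_freshness() -> None:
    """Fail the task, and therefore alert, if no rows landed for the last day of sales."""
    client = bigquery.Client()
    sql = f"""
        SELECT COUNT(*) AS n
        FROM `{TABLE_ID}`
        WHERE sale_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
    """
    n = list(client.query(sql).result())[0].n
    if n == 0:
        raise ValueError(f"No rows newer than one day in {TABLE_ID}")

with DAG(
    dag_id="ticket_sales_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="0 6 * * *",  # daily, after the overnight reports land
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    ingest = PythonOperator(task_id="ingest_reports", python_callable=ingest_reports)
    freshness = PythonOperator(task_id="check_freshness", python_callable=check_freshness)
    ingest >> freshness

Completeness and referential‑integrity checks would slot in as additional tasks in the same DAG.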






