Strategic Staffing Solutions

Data Engineer/Architect

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer/Architect in Columbus, OH, on a 12-month contract at $70/hr W2. Key skills include enterprise-scale data lake architecture, cloud storage design, and hands-on work with columnar formats; Google Cloud Platform experience is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
560
🗓️ - Date
January 28, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Columbus, OH
🧠 - Skills detailed
#Data Quality #GCP (Google Cloud Platform) #BigQuery #BI (Business Intelligence) #Clustering #Data Architecture #Cloud #Data Migration #Data Lifecycle #Documentation #ETL (Extract, Transform, Load) #Data Storage #Migration #Data Engineering #Storage #Data Governance #Data Science #Datasets #Data Ingestion #Scala #Data Lake #Data Management #Metadata
Role description
Data Engineer/Architect
Location: Columbus, OH (Hybrid)
Employment Type: 12-Month Contract
Pay: $70/hr W2 ONLY, NO C2C

Overview
We are seeking a Senior Data Engineer/Architect to design and build scalable, high-performance data lake environments that support enterprise analytics, reporting, and data science initiatives. This role will lead the architectural design of cloud-based data storage, layered data models, and curated datasets optimized for business intelligence and downstream consumption. The ideal candidate brings deep experience in modern data lake patterns, cloud storage design, and large-scale data organization.

Key Responsibilities
• Design and implement layered data lake architectures (e.g., Bronze/Silver/Gold or similar multi-zone models).
• Define and maintain cloud storage (GCS or equivalent) standards, including bucket layout, naming conventions, lifecycle management, and access controls (see the first sketch after this posting).
• Establish best practices for columnar data formats such as Parquet, Avro, and ORC, including compression and performance optimization.
• Design and implement partitioning strategies to optimize query performance and cost efficiency (see the second sketch after this posting).
• Plan and manage large-scale backfills and historical data migrations.
• Develop curated data models optimized for analytics, BI tools, and downstream data consumers.
• Partner with data engineering, analytics, and governance teams to ensure alignment with enterprise data standards.
• Evaluate data growth patterns and implement scalable data organization strategies.
• Contribute to data architecture documentation, standards, and reference designs.

Required Skills & Experience
• Proven experience designing and implementing enterprise-scale data lake architectures.
• Strong expertise in cloud storage design, including structure, governance, and access management.
• Hands-on experience with columnar data formats (Parquet, Avro, ORC) and compression techniques.
• Deep understanding of partitioning, clustering, and file layout strategies for large datasets.
• Experience designing analytics-ready and BI-optimized data models.
• Strong knowledge of data lifecycle management and cost optimization strategies.
• Ability to translate business and analytics requirements into scalable data architecture solutions.

Preferred Qualifications
• Experience with Google Cloud Platform (GCS, BigQuery, Dataplex) or similar cloud data ecosystems.
• Familiarity with data ingestion and transformation frameworks.
• Knowledge of data governance, metadata management, and data quality practices.
• Experience supporting data science and advanced analytics use cases.

Soft Skills
• Strong communication and stakeholder collaboration skills.
• Ability to lead architecture discussions and influence technical direction.
• Analytical mindset with attention to detail.
• Comfortable working in fast-paced, evolving data environments.
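
First sketch: the lifecycle-management responsibility above can be made concrete with a few lines of Python using the google-cloud-storage client. This is a minimal sketch, not the team's actual setup; the bucket name and the age thresholds are illustrative assumptions, not details from the posting.

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-lake-bronze")  # illustrative bucket name

# Tier raw objects to cheaper storage after 30 days; delete after a year.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()  # push the updated lifecycle rules to the bucket

Rules like these keep a raw/Bronze zone from accumulating storage cost indefinitely, which is the usual motivation behind the lifecycle and cost-optimization standards the role calls for.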
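Second sketch: the layered-architecture, columnar-format, and partitioning responsibilities can likewise be sketched in PySpark. The paths, column names, and Bronze-to-Silver flow here are hypothetical, chosen only to show the pattern of promoting raw data into a partitioned, columnar Silver zone.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("silver-promotion").getOrCreate()

# Bronze: raw landed data, read as-is (path is illustrative).
bronze = spark.read.json("gs://example-lake-bronze/orders/")

# Silver: typed, deduplicated, analytics-ready.
silver = (
    bronze
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
)

# Columnar storage plus date partitioning lets query engines prune files
# on order_date filters; snappy trades a little size for fast scans.
(silver.write
    .mode("overwrite")
    .option("compression", "snappy")
    .partitionBy("order_date")
    .parquet("gs://example-lake-silver/orders/"))

Partitioning by a column that downstream queries filter on is the core of the "query performance and cost efficiency" goal: engines such as BigQuery or Spark read only the matching partition directories instead of scanning the full dataset.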