IPrime Info Solutions Inc.

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Cincinnati, OH, requiring 12+ years of experience. It is a W2-only contract open to H1B visa holders and calls for expertise in Python, SQL, ETL tools, cloud platforms, and big data technologies.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cincinnati, OH
-
🧠 - Skills detailed
#Databases #Data Science #Python #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Airflow #Data Architecture #Security #Synapse #Batch #Compliance #Data Lake #Data Quality #BigQuery #Talend #Azure #Snowflake #NoSQL #Big Data #dbt (data build tool) #Data Management #Metadata #Scala #Data Modeling #Data Governance #Data Engineering #Data Processing #Programming #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Spark (Apache Spark) #Informatica #Redshift #Cloud #Data Integration #Kafka (Apache Kafka) #Data Warehouse #Hadoop #Data Pipeline
Role description
Job Title: Senior Data Engineer
Location: Cincinnati, OH
Employment Type: W2 Only
Visa Status: H1B Only
Experience: 12+ Years

Job Summary:
We are seeking a highly experienced Senior Data Engineer with 12+ years of strong hands-on experience in designing, building, and maintaining scalable data platforms and pipelines. The ideal candidate will have deep expertise in data architecture, ETL/ELT frameworks, cloud data ecosystems, and big data technologies, and will play a key role in enabling data-driven decision-making across the organization.

Key Responsibilities:
- Design, develop, and maintain robust, scalable, and high-performance data pipelines
- Build and optimize ETL/ELT processes for structured and unstructured data
- Architect and manage large-scale data warehouses and data lakes
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements
- Ensure data quality, data governance, security, and compliance standards
- Optimize data processing for performance, scalability, and cost efficiency
- Lead and mentor junior data engineers and review code for best practices
- Support real-time and batch data processing solutions
- Troubleshoot and resolve data pipeline and production issues

Required Skills & Qualifications:
- 12+ years of overall IT experience with a strong focus on data engineering
- Strong programming experience in Python, SQL, and/or Scala
- Hands-on experience with ETL tools (Informatica, Talend, Airflow, dbt, etc.)
- Expertise in data warehousing concepts and tools (Snowflake, Redshift, BigQuery, Synapse)
- Strong experience with big data technologies (Hadoop, Spark, Kafka)
- Proficiency in cloud platforms (AWS / Azure / GCP)
- Experience with relational and NoSQL databases
- Solid understanding of data modeling, data integration, and metadata management