

IPrime Info Solutions Inc.
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Phoenix, AZ, requiring 12+ years of experience with a focus on data engineering and architecture. Employment is W2 only, and the role requires strong expertise in SQL, Python, cloud platforms (AWS/Azure/GCP), and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#Data Warehouse #PySpark #Data Migration #ETL (Extract, Transform, Load) #Migration #Redshift #Compliance #Cloud #Hadoop #ADF (Azure Data Factory) #GCP (Google Cloud Platform) #Big Data #Airflow #DevOps #S3 (Amazon Simple Storage Service) #Dataflow #Java #Security #Strategy #Spark (Apache Spark) #Data Security #SQL (Structured Query Language) #Synapse #BigQuery #Python #Data Pipeline #Data Science #AWS (Amazon Web Services) #Data Strategy #Leadership #Data Architecture #Data Governance #Scala #Snowflake #Data Lake #Data Quality #Data Engineering #Databricks #Data Processing #BI (Business Intelligence) #Azure #Kafka (Apache Kafka)
Role description
Job Title: Senior Data Engineer
Location: Phoenix, AZ
Visa Type: H1B Only
Employment Type: W2 Only
Experience: 12+ Years
Job Summary:
We are seeking a highly experienced Senior Data Engineer with 12+ years of overall IT experience and strong expertise in data engineering, data architecture, and enterprise data solutions. The ideal candidate will have hands-on experience designing and implementing scalable data pipelines, data lakes, and modern cloud-based data platforms. This role requires strong technical leadership, stakeholder collaboration, and the ability to drive data strategy initiatives across the organization.
Key Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines using modern data engineering tools.
Build and optimize data warehouses, data lakes, and lakehouse architectures.
Develop high-performance data processing solutions using Python, SQL, Spark, or Scala.
Work with structured and unstructured data from multiple enterprise sources.
Implement data quality, data governance, and data security best practices.
Collaborate with Data Scientists, BI Developers, and Business Teams to deliver reliable data solutions.
Optimize data models and improve query performance.
Lead data migration and modernization initiatives to cloud platforms (AWS/Azure/GCP).
Mentor junior data engineers and provide technical guidance.
Ensure compliance with enterprise data standards and best practices.
Required Skills & Qualifications:
12+ years of IT experience with at least 8 years in Data Engineering.
Strong expertise in SQL and advanced query optimization.
Hands-on experience with Python, PySpark, Scala, or Java.
Extensive experience with Big Data technologies (Hadoop, Spark, Hive).
Experience with cloud platforms such as AWS (Redshift, S3, Glue), Azure (ADF, Synapse, Databricks), or GCP (BigQuery, Dataflow).
Strong knowledge of data warehousing concepts (Star Schema, Snowflake Schema).
Experience with orchestration tools like Airflow.
Familiarity with CI/CD pipelines and DevOps practices.
Experience with Kafka or other streaming technologies is a plus.
Strong understanding of data governance and data security frameworks.