

KPG99 INC
Data Engineer (Onsite Interview and W2 Only)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Irving, TX (Hybrid) on a contract basis. Required skills include strong SQL, Python/Scala, ETL development, and experience with Spark, Hadoop, or Databricks. Banking or finance industry experience is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
May 9, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Texas, United States
🧠 - Skills detailed
#Data Integration #Data Pipeline #Datasets #Scala #ETL (Extract, Transform, Load) #Spark (Apache Spark) #GCP (Google Cloud Platform) #Kafka (Apache Kafka) #SQL (Structured Query Language) #Cloud #AWS (Amazon Web Services) #Hadoop #Python #Programming #Data Engineering #Data Architecture #Azure #Databricks
Role description
Job Title: Data Engineer
Location: Irving, TX (Hybrid) (Locals Only)
Type: Contract
MOI (Mode of Interview): 2nd round onsite
Key Responsibilities:
• Build scalable data pipelines and ETL workflows (see the sketch after this list)
• Design and optimize enterprise data architectures
• Work with large datasets across cloud and distributed systems
• Support data integration, transformation, and analytics initiatives
• Collaborate with business and engineering stakeholders
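To give a concrete flavor of the pipeline work described above, here is a minimal batch ETL sketch in PySpark, matching the posting's Spark/Python stack. All paths, table names, and column names below are hypothetical placeholders, not details from this role:

```python
# Minimal batch ETL sketch in PySpark. The bucket paths and the
# raw_transactions schema (txn_ts, account_id, amount) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw data from a (hypothetical) landing zone.
raw = spark.read.parquet("s3://landing-zone/raw_transactions/")

# Transform: basic cleansing plus a per-account daily aggregate.
daily = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("txn_date", F.to_date("txn_ts"))
       .groupBy("account_id", "txn_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Load: write the curated dataset, partitioned by date.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://curated-zone/daily_account_totals/"
)
```

An equivalent pipeline could be written in Scala or run on Databricks; the extract-transform-load shape stays the same.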
Required Skills:
• Strong SQL and Python/Scala programming skills
• Experience with Spark, Hadoop, or Databricks
• ETL pipeline development experience
• Experience with cloud platforms (AWS/Azure/GCP)
• Knowledge of data warehousing concepts
• Banking, finance, or enterprise-scale data experience
• Real-time streaming experience with Kafka (see the streaming sketch below)
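For the Kafka requirement, here is a minimal Spark Structured Streaming sketch in the same vein. The broker address, topic name, and output paths are again hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster:

```python
# Minimal Spark Structured Streaming sketch reading from Kafka.
# Requires the spark-sql-kafka connector package; all endpoints,
# topic names, and paths here are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Read a stream of events from a (hypothetical) Kafka topic.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "transactions")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to a string.
# A real job would parse JSON or Avro against a defined schema here.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

# Write the stream out with a checkpoint for restart bookkeeping.
query = (
    parsed.writeStream.format("parquet")
          .option("path", "s3://curated-zone/stream_out/")
          .option("checkpointLocation", "s3://curated-zone/_checkpoints/stream_out/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```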





