

Net2Source Inc.
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Phoenix, AZ, on a contract basis. Key skills include SQL, Python, data ingestion, and experience with tools such as Apache Airflow and Spark. A financial services background is preferred, and a face-to-face interview is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
🗓️ - Date
January 29, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #Data Security #GIT #Data Extraction #Oracle #SQL (Structured Query Language) #Agile #Data Management #PostgreSQL #Scripting #Cloud #Data Governance #Data Integration #Data Engineering #Data Quality #Python #Airflow #Security #Automation #Apache Airflow #Data Pipeline #Batch #Schema Design #SQL Queries #Data Processing #GCP (Google Cloud Platform) #Data Modeling #Data Science #Data Cleansing #ETL (Extract, Transform, Load) #Shell Scripting #Datasets #Databases #Metadata #Scala #Spark (Apache Spark) #Scrum #Data Accuracy #Data Integrity #MySQL #Data Ingestion #Version Control #Data Profiling
Role description
Role: Data Engineer
Location: Phoenix, AZ (local candidates only; face-to-face interview required)
Term: Contract
We are looking for a highly motivated Data Engineer to join our dynamic data and analytics team. In this role, you will be responsible for building and maintaining scalable data pipelines, supporting ingestion from multiple sources, and ensuring data integrity and availability across various systems. You'll work closely with data scientists, analysts, and engineering teams to enable real-time and batch data processing.
Key Responsibilities
• Design, develop, and maintain robust data pipelines to ingest, transform, and deliver data across internal and external platforms.
• Write and optimize complex SQL queries for data extraction, transformation, and loading (ETL/ELT).
• Implement data ingestion frameworks using batch and streaming technologies (a minimal pipeline sketch follows this list).
• Develop data integration workflows and scripts using Python, Shell scripting, or other scripting languages.
• Ensure high performance, reliability, and data quality across all stages of the pipeline.
• Collaborate with cross-functional teams (data science, analytics, product) to understand data needs and deliver scalable solutions.
• Monitor data jobs, identify bottlenecks, and troubleshoot issues in real time.
• Handle large and intricate datasets, perform data profiling, and ensure conformance to data quality standards.
• Apply problem-solving skills to identify root causes of data issues and suggest long-term fixes or enhancements.
• Work in Agile/Scrum environments, participating in planning, reviews, and delivery cycles.
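To make the ingestion responsibilities above concrete, here is a minimal sketch of a daily batch pipeline, assuming Apache Airflow 2.4+ and pandas; the file paths, column names, and cleansing rules are invented for illustration and are not part of this role's actual stack.

from datetime import datetime, timedelta

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_clean():
    # Hypothetical flat-file source; the path and columns are placeholders.
    df = pd.read_csv("/data/incoming/transactions.csv")
    df = df.dropna(subset=["transaction_id"])             # basic cleansing
    df["amount"] = df["amount"].astype(float)             # enforce types
    df.to_parquet("/data/staging/transactions.parquet")   # hand off to the load step


def load_to_warehouse():
    # A real pipeline would load into PostgreSQL, Oracle, MySQL, or BigQuery.
    df = pd.read_parquet("/data/staging/transactions.parquet")
    print(f"would load {len(df)} rows")


with DAG(
    dag_id="daily_transaction_ingest",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_and_clean", python_callable=extract_and_clean)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load

Streaming ingestion (for example, Kafka consumers) follows the same extract-validate-load shape, just triggered per event rather than per scheduled run.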
Required Skills & Experience
• Strong hands-on experience with SQL (writing complex joins, window functions, CTEs, aggregations, etc.); see the query sketch after this list.
• Proven experience with data ingestion, integration, and pipeline design across multiple data sources.
• Proficiency in Python, Shell, or other scripting languages for automation and orchestration tasks.
• Familiarity with data processing tools and frameworks such as Apache Airflow, Spark, Kafka, or similar.
• Experience working with relational databases (PostgreSQL, Oracle, MySQL).
• Experience with cloud data platforms (GCP BigQuery) is a plus.
• Ability to work with complex and messy data: cleansing, validating, and transforming to ensure consistency.
• Strong analytical and problem-solving skills with attention to detail and data accuracy.
• Exposure to CI/CD practices and version control (Git).
• Knowledge of data modeling principles and schema design is a plus.
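As a self-contained illustration of the SQL depth asked for above (CTEs, window functions, aggregations), the sketch below runs a ranked, running-total query through Python's built-in sqlite3 driver. The table and columns are invented for the example; the same query pattern applies to PostgreSQL, Oracle, or MySQL, and a SQLite build of 3.25+ is assumed for window-function support.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL, ordered_at TEXT);
    INSERT INTO orders VALUES
        (1, 101, 50.0, '2026-01-01'),
        (2, 101, 75.0, '2026-01-03'),
        (3, 202, 20.0, '2026-01-02');
""")

# CTE + window functions: latest order per customer plus a per-customer running total.
query = """
WITH ranked AS (
    SELECT
        customer_id,
        order_id,
        amount,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY ordered_at DESC) AS rn,
        SUM(amount)  OVER (PARTITION BY customer_id ORDER BY ordered_at)      AS running_total
    FROM orders
)
SELECT customer_id, order_id, amount, running_total
FROM ranked
WHERE rn = 1;
"""

for row in conn.execute(query):
    print(row)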
Plus Qualifications
• Experience in handling data from APIs, flat files, and event streams.
• Background in financial services, payments, or customer analytics.
• Familiarity with data governance practices, metadata management, and PII data handling.
• Understanding of data security, encryption, and masking techniques (a minimal masking sketch follows this list).
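As a rough sketch of field-level masking, the snippet below pseudonymises a PII column with a salted SHA-256 hash. The field names and salt handling are illustrative only; a production setup would source keys from a secrets manager and follow the organisation's data-governance policy.

import hashlib

def mask_pii(value: str, salt: str = "example-salt") -> str:
    # Irreversibly pseudonymise a PII field; the salt here is a placeholder.
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

record = {"customer_id": 101, "email": "jane.doe@example.com", "amount": 75.0}
masked = {**record, "email": mask_pii(record["email"])}
print(masked)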






