

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Irving, TX, requiring 12+ years of experience; the pay rate is listed as "unknown." It requires 4-5 years of Cloudera and Airflow experience, expertise in PySpark, and knowledge of ServiceNow. On-site work is mandatory.
Country
United States
Currency
$ USD
-
Day rate
624
-
Date discovered
September 11, 2025
Project duration
Unknown
-
Location type
On-site
-
Contract type
Unknown
-
Security clearance
Unknown
-
Location detailed
Irving, TX
-
Skills detailed
#Python #Cloud #Quality Assurance #Data Pipeline #Data Ingestion #Documentation #Data Lake #Jira #Airflow #Data Engineering #Spark (Apache Spark) #Data Governance #ETL (Extract, Transform, Load) #PySpark #Security #Data Accuracy #Data Quality #Cloudera #Data Security #IAM (Identity and Access Management) #GCP (Google Cloud Platform) #Compliance
Role description
Location: Irving, TX
Exp: 12+ Years
Top 3 things required:
Cloudera 4-5 years
Data Cloning
Airflow 4-5 years
One of our top financial clients is seeking a highly skilled Data Engineer to join their data delivery team. This person will play a crucial part in expanding the client's data lake to accommodate new applications and data sources, including IAM and ServiceNow, among others. They will ensure the security, accessibility, and integrity of this critical data infrastructure.
Day-to-Day Responsibilities:
Data Ingestion and Transformation: Develop and implement robust data ingestion pipelines to extract, transform, and load data from various sources into the data lake.
Data Security and Access Control: Design and implement robust security measures to protect sensitive data, including access controls, encryption, and data masking.
Data Governance and Compliance: Collaborate with data governance teams to establish and enforce data standards, policies, and procedures.
Data Quality Assurance: Monitor data quality and implement data validation and cleansing processes to ensure data accuracy and consistency.
Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and queries.
Collaboration with Stakeholders: Work closely with analysts, developers, and business users to understand their data needs and translate them into technical requirements.
Documentation: Maintain clear and concise documentation of data pipelines, processes, and best practices.
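As a rough illustration of the data quality assurance responsibility above, a minimal validate-and-cleanse pass might look like the sketch below. Note this is plain Python for readability, not the PySpark/Airflow stack the role actually uses, and the field names (user_id, created_at) are hypothetical placeholders, not taken from this posting.

```python
# Hedged sketch: row-level validation and cleansing for ingested records.
# Field names below are hypothetical examples, not from the job posting.
from datetime import datetime

REQUIRED_FIELDS = ("user_id", "created_at")

def validate_row(row: dict) -> list:
    """Return a list of validation errors for one ingested record."""
    errors = []
    for field in REQUIRED_FIELDS:
        if not row.get(field):
            errors.append(f"missing required field: {field}")
    ts = row.get("created_at")
    if ts:
        try:
            datetime.strptime(ts, "%Y-%m-%d")  # enforce ISO-like date format
        except ValueError:
            errors.append(f"bad date format: {ts!r}")
    return errors

def cleanse(rows):
    """Trim string fields, then split records into clean and rejected."""
    clean, rejected = [], []
    for row in rows:
        row = {k: v.strip() if isinstance(v, str) else v
               for k, v in row.items()}
        (clean if not validate_row(row) else rejected).append(row)
    return clean, rejected
```

In a real Cloudera/PySpark pipeline the same checks would typically be expressed as DataFrame filters or column expressions and scheduled as an Airflow task, with rejected records routed to a quarantine table for review.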
Must-Have Skills:
7-10 years of experience in data engineering
Experience in Cloudera
Expertise in PySpark
Experience in Python or other relevant languages
In-depth knowledge of ServiceNow
Experience with JIRA and Confluence
Nice-to-Have Skills:
GCP experience