

Jobs via Dice
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a long-term contract, located in Texas, Arizona, or North Carolina. Key skills include data pipeline development, data warehousing, and expertise in AWS services. Certifications in relevant technologies are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 18, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Texas City, TX
-
🧠 - Skills detailed
#GitHub #Logging #SnowPipe #Data Security #Monitoring #Observability #SQL (Structured Query Language) #Metadata #SQL Server #Data Integrity #Data Governance #Disaster Recovery #Lambda (AWS Lambda) #Scala #Batch #Data Integration #Data Modeling #Data Warehouse #ETL (Extract, Transform, Load) #GitLab #SQS (Simple Queue Service) #Airflow #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Data Access #Qlik #Data Storage #Data Orchestration #RDS (Amazon Relational Database Service) #Cloud #JSON (JavaScript Object Notation) #Automation #Data Quality #Agile #Snowflake #Data Science #Data Engineering #Security #SNS (Simple Notification Service) #Database Management #Data Pipeline #PySpark #XML (eXtensible Markup Language) #Storage #Documentation #Compliance #Spark (Apache Spark) #Data Processing #Oracle #Deployment #dbt (data build tool) #Python
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Engineering Square, is seeking the following. Apply via Dice today!
Title: Senior Data Engineer
Location: Texas / Arizona / North Carolina
Mode of work: Hybrid (3 days in office, 2 days WFH)
Duration: Long-term contract
Note: Candidates who do not require visa sponsorship are encouraged to apply.
Job Description:
The position is responsible for designing, developing, and maintaining data pipelines and infrastructure. This role implements data integration, processing, and analytics solutions in an agile environment while collaborating with cross-functional teams to deliver efficient and scalable data systems.
Responsibilities
Data Pipelines - Design, develop, and maintain data pipelines for efficient real-time and batch data processing and integration. Implement and optimize ETL processes to extract, transform, load, and integrate data from various sources. Enhance data flows and storage solutions for improved performance (an illustrative sketch follows this list).
Data Warehousing - Design and implement data warehouse structures. Ensure data quality and consistency within the data warehouse. Apply data modeling techniques for efficient data storage and retrieval.
Data Governance, Quality & Compliance - Implement data governance policies and procedures. Implement data quality frameworks, standards, and documentation. Ensure compliance with relevant data regulations and standards.
Security & Access Management - Implement data security measures and access controls. Maintain data protection protocols.
Performance Optimization and Troubleshooting - Analyze and optimize system performance for large-scale data operations. Troubleshoot data issues and implement robust solutions.
Testing & Automation - Write unit test cases, validate data integrity and consistency requirements, and automate data pipelines using GitLab, GitHub, and CI/CD tools.
Code Deployment & Release Management - Follow release management processes to promote code to various environments, including production and disaster recovery, and support release activities.
Cross-Functional Collaboration - Collaborate with data scientists, analysts, and other teams to understand and meet data requirements. Participate in cross-functional projects to support data-driven initiatives.
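Purely for illustration (not part of the client's requirements): a minimal PySpark batch ETL sketch in the spirit of the pipeline responsibilities above. The bucket paths, dataset, and column names are hypothetical placeholders.

```python
# Illustrative sketch only. Assumes a Spark runtime and hypothetical S3 buckets/columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

# Extract: raw CSV files landed in a hypothetical S3 raw zone
raw = (
    spark.read
    .option("header", True)
    .csv("s3://example-raw-bucket/orders/2026-02-18/")
)

# Transform: basic typing, de-duplication, and filtering
clean = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
)

# Load: write partitioned Parquet to a curated zone for downstream warehousing
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

A pipeline of this kind would typically load the curated output into the warehouse (for example via Snowpipe or an external stage) and be scheduled by an orchestrator such as Airflow.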
Qualifications:
Bachelor's degree and 2 years of experience in data engineering, database management, or a related field; OR a high school diploma or GED and 6 years of experience in data engineering, database management, or a related field.
Preferred Experience/Technical/Business Skills:
• Hands-on experience building robust, metadata-driven, automated data pipeline solutions leveraging modern cloud-based data technologies and tools for large data platforms.
• Hands-on experience applying data security and governance methodologies to meet data compliance requirements.
• Experience building automated ELT data pipelines and Snowpipe frameworks leveraging Qlik Replicate, dbt Cloud, and Snowflake with CI/CD.
• Hands-on experience building data integrity solutions across multiple data sources and targets such as SQL Server, Oracle, mainframe DB2, flat files, and Snowflake.
• Experience working with various structured and semi-structured data formats - CSV, fixed-width, JSON, XML, Excel, and mainframe VSAM.
• Experience using AWS services such as S3, Lambda, SQS, SNS, Glue, and RDS.
• Proficiency in Python, PySpark, and advanced SQL for ingestion frameworks and automation.
• Hands-on data orchestration experience using dbt Cloud and Astronomer Airflow.
• Experience in implementing logging, monitoring, alerting, observability, and performance tuning techniques.
• Experience implementing and maintaining sensitive data protection strategies: tokenization, Snowflake data masking policies, dynamic and conditional masking, and role-based masking rules (see the illustrative sketch after this list).
• Strong experience designing and implementing RBAC and data access controls, and adopting governance standards across Snowflake and supporting systems.
• Strong experience adopting release management guidelines, deploying code to various environments, implementing disaster recovery strategies, and leading production activities.
• Experience implementing schema drift detection and schema evolution patterns.
• Must have one or more certifications in the relevant technology fields.
• Financial/banking industry experience is a plus.
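As a rough illustration of the role-based masking requirement above (not part of the client's requirements): a minimal sketch that creates and attaches a Snowflake dynamic masking policy from Python using snowflake-connector-python. The account, role, warehouse, database, schema, and table names are hypothetical.

```python
# Illustrative sketch only. Assumes the snowflake-connector-python package
# and hypothetical account, role, database, and table names.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="SECURITYADMIN",      # hypothetical admin role with policy privileges
    warehouse="ADMIN_WH",
    database="ANALYTICS",
    schema="CUSTOMER",
)

cur = conn.cursor()
try:
    # Role-based dynamic masking: only PII_READER sees the raw value
    cur.execute("""
        CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING)
        RETURNS STRING ->
        CASE
            WHEN CURRENT_ROLE() IN ('PII_READER') THEN val
            ELSE '***MASKED***'
        END
    """)

    # Attach the policy to a sensitive column
    cur.execute("""
        ALTER TABLE CUSTOMERS
        MODIFY COLUMN EMAIL SET MASKING POLICY email_mask
    """)
finally:
    cur.close()
    conn.close()
```

In practice such policies would usually be promoted through governed CI/CD change control rather than ad hoc scripts, and mapped to the organization's RBAC model.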
Functional Skills:
• Team Player: Support peers, team, and department management.
• Communication: Excellent verbal, written, and interpersonal communication skills.
• Problem Solving: Excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality.
• Partnership and Collaboration: Develop and maintain partnerships with business and IT stakeholders.
• Attention to Detail: Ensure accuracy and thoroughness in all tasks.





