Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Dallas, TX, offering a long-term contract. Requires 10+ years of IT experience, expertise in SQL, Python, AWS, and data migration (Teradata to Redshift preferred). On-site presence is mandatory.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 27, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
On-site
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Informatica #Cloud #Spark (Apache Spark) #Hadoop #Unix #Alteryx #REST API #Regression #EC2 #SSIS (SQL Server Integration Services) #Migration #Teradata SQL #Automation #Data Migration #Scripting #ML (Machine Learning) #Storage #Data Architecture #HiveQL #Java #SQL Server #Linux #MongoDB #R #Agile #SageMaker #DevOps #GitLab #AWS (Amazon Web Services) #Kafka (Apache Kafka) #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Docker #Data Quality #Data Engineering #Pig #Zookeeper #REST (Representational State Transfer) #Strategy #Jenkins #Data Modeling #HDFS (Hadoop Distributed File System) #Deployment #JSON (JavaScript Object Notation) #Oracle #Athena #Data Integration #System Testing #Data Science #Python #Scala #ETL (Extract, Transform, Load) #SQL Queries #Redis #DataStage #Data Lake #Programming #Data Warehouse #Teradata #Integration Testing #API (Application Programming Interface) #Redshift #Sybase
Role description
JOB DESCRIPTION
Title: Data Quality Engineer
Location: Dallas, TX (local candidates only; one face-to-face interview round is required)
Duration: Long term
Interview mode: one Teams call, one face-to-face interview
Must have 10+ years of overall IT experience.
Data Engineer (SDET) Responsibilities:
• Work with business stakeholders, Business Systems Analysts, and developers to ensure quality delivery of software.
• Interact with key business functions to confirm data quality policies and governed attributes.
• Follow quality management best practices and processes to bring consistency and completeness to integration service testing.
• Design and manage AWS test environments for data workflows during development and deployment of data products.
• Assist the team with test estimation and test planning.
• Design and develop reports and dashboards.
• Analyze and evaluate data sources, data volume, and business rules.
• Proficiency with SQL; familiarity with Python, Scala, Athena, EMR, Redshift, and AWS.
• NoSQL and unstructured data experience.
• Extensive experience with programming tools from MapReduce to HiveQL.
• Experience with data science platforms such as SageMaker, Machine Learning Studio, or H2O.
• Well versed in data flow and test strategy for cloud and on-prem ETL testing.
• Interpret and analyze data from various source systems to support data integration and data reporting needs.
• Experience testing database applications to validate source-to-destination data movement and transformation (a sketch follows this list).
• Work with team leads to prioritize business and information needs.
• Develop complex SQL scripts (primarily advanced SQL) for cloud and on-prem ETL.
• Develop and summarize data quality analyses and dashboards.
• Knowledge of data modeling and data warehousing concepts, with emphasis on cloud and on-prem ETL.
• Execute testing of data analytics and data integration on time and within budget.
• Troubleshoot data issues and anomalies and determine the best resolution.
• Experience in functional, regression, system, integration, and end-to-end testing.
• Deep understanding of data architecture and data modeling best practices and guidelines for different data and analytics platforms.
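For illustration, the source-to-destination validation described above often starts by reconciling row counts between the legacy and target platforms. Below is a minimal sketch assuming the `teradatasql` driver for Teradata and `psycopg2` for Redshift (Redshift speaks the Postgres wire protocol); all hosts, credentials, and table names are hypothetical placeholders.

```python
# Minimal source-to-target reconciliation sketch for a Teradata -> Redshift
# migration. Hosts, credentials, and table names are placeholders.
import teradatasql   # Teradata's DB-API 2.0 driver
import psycopg2      # Postgres driver; Redshift is wire-compatible

TABLES = ["sales.orders", "sales.order_items"]  # hypothetical tables

def row_count(cursor, table: str) -> int:
    """Return the row count of a table via a DB-API cursor."""
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]

def main() -> None:
    # Placeholder connection details; real ones would come from a secrets store.
    src = teradatasql.connect(host="td-prod.example.com", user="qa", password="***")
    tgt = psycopg2.connect(host="dw.example.com", port=5439,
                           dbname="dw", user="qa", password="***")
    with src.cursor() as s_cur, tgt.cursor() as t_cur:
        for table in TABLES:
            src_n, tgt_n = row_count(s_cur, table), row_count(t_cur, table)
            status = "OK" if src_n == tgt_n else "MISMATCH"
            print(f"{table}: source={src_n} target={tgt_n} {status}")

if __name__ == "__main__":
    main()
```

In practice a harness like this grows column-level checks (SUM/MIN/MAX per governed attribute, checksums) once counts reconcile.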
Requirements:
• Extensive experience in data migration is a must (Teradata to Redshift preferred).
• Extensive testing experience with SQL/Unix/Linux scripting is a must.
• Extensive experience testing cloud and on-prem ETL (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
• Extensive experience with DBMSs such as Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
• Extensive experience with Python scripting, AWS, and cloud technologies.
• Extensive experience using Athena, EMR, Redshift, and other AWS services (a sketch follows this list).
• Experienced in large-scale application development testing: cloud and on-prem data warehouse, data lake, and data science.
• Experience with multi-year, large-scale projects.
• Expert technical skills with hands-on testing experience using SQL queries.
• Extensive experience with both data migration and data transformation testing.
• API/REST Assured automation, building reusable frameworks, and strong technical expertise/acumen.
• Java/JavaScript: implement core Java, integration, and APIs.
• Functional/UI testing with Selenium: BDD/Cucumber, SpecFlow, data validation, Kafka, big data; automation experience using Cypress.
• AWS/cloud: Jenkins, GitLab, EC2, S3, and building Jenkins CI/CD pipelines; Sauce Labs.
• API/REST: REST APIs and microservices using JSON; SoapUI.
• Extensive experience in the DevOps/DataOps space.
• Strong experience working with DevOps and build pipelines.
• Strong experience with AWS data services including Redshift, Glue, Kinesis, Kafka (MSK), EMR/Spark, SageMaker, etc.
• Experience with technologies such as Kubeflow, EKS, and Docker.
• Extensive experience with NoSQL and unstructured data stores such as MongoDB, Cassandra, Redis, and ZooKeeper.
• Extensive experience with MapReduce using tools such as Hadoop, Hive, Pig, Kafka, S4, and MapR.
• Experience using Jenkins and GitLab.
• Experience using both Waterfall and Agile methodologies.
• Experience testing storage tools such as S3 and HDFS.
• Experience with one or more industry-standard defect or test case management tools.
• Great communication skills (regularly interacts with cross-functional team members).
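As an illustration of the Athena-based checks implied above, here is a minimal data-quality assertion using boto3 (the AWS SDK for Python); the database, table, column, and S3 output location are hypothetical placeholders.

```python
# Minimal Athena data-quality check sketch: run a NULL-count query and assert
# on the result. Database, table, column, and S3 output path are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

def run_query(sql: str, database: str, output: str) -> list:
    """Start an Athena query, poll until it finishes, and return result rows."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(
            QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {qid} ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

# Row 0 of an Athena result set is the header row; row 1 holds the count.
rows = run_query(
    "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL",  # hypothetical
    database="data_lake",
    output="s3://qa-athena-results/",
)
null_count = int(rows[1]["Data"][0]["VarCharValue"])
assert null_count == 0, f"found {null_count} NULL customer_id rows"
```

Wrapped in a test runner such as pytest, checks like this slot into the Jenkins/GitLab CI/CD pipelines the role calls for.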