ShrinQ Consulting Group Inc

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a 6-month Data Engineer contract at a pay rate of "X" per hour. Requirements include 5+ years of data engineering experience, strong SQL, hands-on ETL tooling, and programming in Python, Java, or Scala; cloud platform experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 16, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Synapse #Data Quality #Compliance #GCP (Google Cloud Platform) #Linux #Data Pipeline #Security #Data Security #Documentation #Java #Kafka (Apache Kafka) #Programming #Data Engineering #Data Warehouse #NoSQL #ETL (Extract, Transform, Load) #Deployment #Python #Scala #Data Processing #Terraform #BigQuery #SQL (Structured Query Language) #AWS (Amazon Web Services) #GIT #Databases #Spark (Apache Spark) #Azure #Version Control #Computer Science #DevOps #Data Analysis #Redshift #Hadoop #Airflow #Data Lake #Data Modeling #Data Science #Cloud #Snowflake
Role description
Job Summary

We are looking for a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure that support analytics, reporting, and data-driven decision-making. The ideal candidate has strong experience with data systems, cloud platforms, and ETL processes, and works closely with data analysts, data scientists, and product teams.

Key Responsibilities
• Design, develop, and maintain robust data pipelines (ETL/ELT)
• Build and optimize data warehouses and data lakes
• Ensure data quality, integrity, reliability, and performance
• Integrate data from multiple sources (APIs, databases, files, streams)
• Collaborate with analytics and business teams to support reporting needs
• Monitor, troubleshoot, and optimize data workflows
• Implement data security, governance, and compliance best practices
• Automate data processing and deployments using CI/CD pipelines
• Maintain clear technical documentation

Required Skills & Qualifications
• Bachelor's degree in Computer Science, Engineering, or a related field
• 5+ years of experience in Data Engineering
• Strong SQL skills and experience with relational & NoSQL databases
• Experience with ETL tools and frameworks
• Programming experience in Python / Java / Scala
• Knowledge of data modeling concepts (star/snowflake schemas)
• Familiarity with Linux and version control (Git)

Preferred / Nice to Have
• Experience with cloud platforms (AWS / Azure / GCP)
• Hands-on experience with Spark, Hadoop, Kafka
• Experience with data warehouses (Snowflake, Redshift, BigQuery, Synapse)
• Knowledge of orchestration tools (Airflow, Prefect, Dagster)
• Understanding of DevOps and Infrastructure-as-Code (Terraform)
• Exposure to real-time / streaming data pipelines
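To make the ETL/ELT responsibility above concrete, here is a minimal sketch of an extract-transform-load job in plain Python, with SQLite standing in for a real warehouse. The source file, table, and columns (orders.csv, fact_orders, order_id/customer_id/amount) are illustrative assumptions, not details of this role.

```python
import csv
import sqlite3
from pathlib import Path

RAW_FILE = Path("orders.csv")   # hypothetical source extract
DB_FILE = Path("warehouse.db")  # stands in for a real warehouse

def extract(path: Path) -> list[dict]:
    """Read raw rows from the source file."""
    with path.open(newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Apply basic cleaning and type coercion; drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append((
                row["order_id"],
                row["customer_id"].strip().upper(),
                float(row["amount"]),
            ))
        except (KeyError, ValueError):
            continue  # in production, route bad rows to a quarantine table instead
    return clean

def load(rows: list[tuple], db: Path) -> None:
    """Idempotently upsert into the target table so reruns are safe."""
    with sqlite3.connect(db) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS fact_orders (
                   order_id    TEXT PRIMARY KEY,
                   customer_id TEXT NOT NULL,
                   amount      REAL NOT NULL
               )"""
        )
        conn.executemany("INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?)", rows)

if __name__ == "__main__":
    load(transform(extract(RAW_FILE)), DB_FILE)
```

In a real engagement, logic like this would typically run inside an orchestrator such as Airflow, with monitoring and a quarantine path for rejected rows rather than silently skipping them.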
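The requirements also mention star/snowflake schemas. The sketch below shows a tiny star schema (one fact table referencing two dimension tables), again in SQLite purely for illustration; all table and column names are hypothetical.

```python
# Illustrative only: a minimal star schema of the kind the
# "data modeling concepts" requirement refers to.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_id  TEXT UNIQUE NOT NULL,
    region       TEXT
);
CREATE TABLE IF NOT EXISTS dim_date (
    date_key  INTEGER PRIMARY KEY,  -- e.g. 20260116
    full_date TEXT NOT NULL,
    month     INTEGER NOT NULL,
    year      INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    amount       REAL NOT NULL
);
"""

with sqlite3.connect("warehouse.db") as conn:
    conn.executescript(DDL)
```

The design idea: fact rows carry foreign keys into the dimensions, so analytical queries group by dimension attributes (region, month) while aggregating fact measures (amount).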