Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Houston, TX, on a contract basis, requiring 12+ years of experience. Key skills include SQL, Python, big data technologies, and cloud platforms. A degree in Computer Science or a related field is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 24, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
On-site
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Houston, TX
-
🧠 - Skills detailed
#Programming #Airflow #ETL (Extract, Transform, Load) #DevOps #Cloud #Big Data #Data Lake #Data Pipeline #Redshift #Kafka (Apache Kafka) #Python #Snowflake #GIT #Computer Science #SQL (Structured Query Language) #Data Engineering #Azure #Data Quality #Data Science #Version Control #Spark (Apache Spark) #Luigi #Data Modeling #Hadoop #Security #Datasets #Data Warehouse #Databricks #Java #Scala #AWS (Amazon Web Services) #BigQuery #GCP (Google Cloud Platform)
Role description
• Job Title: Senior Data Engineer
• Location: Houston, TX (On Site)
• Employment Type: Contract
• Experience: 12+ years

Job summary:
We are looking for an experienced Senior Data Engineer with 12+ years of experience to design, develop, and optimize large-scale data pipelines and solutions. The ideal candidate will have extensive experience building robust data systems, mentoring junior engineers, and collaborating with cross-functional teams to enable data-driven decision-making.

Key Responsibilities
• Design, build, and maintain scalable, efficient, and secure data pipelines and ETL processes.
• Architect, implement, and optimize data lake and data warehouse solutions.
• Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets and reporting solutions.
• Ensure data quality, reliability, and security across systems.
• Evaluate and implement new tools, technologies, and best practices for data engineering.
• Troubleshoot and resolve issues related to data workflows, performance, and availability.
• Mentor and guide junior data engineers, promoting a culture of excellence and continuous improvement.
• Document data flows, system architectures, and operational procedures.

Required Qualifications
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• 12+ years of experience in data engineering or related roles.
• Strong proficiency in SQL and programming languages such as Python, Java, or Scala.
• Hands-on experience with big data technologies (e.g., Hadoop, Spark, Hive, Kafka).
• Expertise with cloud platforms (AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake, Databricks).
• Strong understanding of ETL/ELT frameworks, data modeling, and data warehousing concepts.
• Knowledge of workflow orchestration tools (e.g., Airflow, Luigi).
• Familiarity with DevOps practices, CI/CD pipelines, and version control (Git).
• Excellent problem-solving, analytical, and communication skills.
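For candidates weighing whether their background fits, here is a minimal sketch of the kind of orchestrated extract-transform-load work the responsibilities above describe. It is not taken from the posting: it assumes Airflow 2.x's TaskFlow API (the `schedule` argument requires 2.4+), and the DAG name, schedule, sample records, and print-based load step are all hypothetical placeholders.

```python
"""Illustrative sketch only: a minimal Airflow 2.x TaskFlow DAG showing an
extract -> transform -> load pipeline. Names and data are hypothetical."""
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False, tags=["example"])
def daily_sales_etl():
    @task
    def extract() -> list[dict]:
        # A real pipeline would pull from a source database, API, or Kafka topic;
        # a small in-memory sample keeps this sketch self-contained.
        return [{"order_id": 1, "amount": "19.99"}, {"order_id": 2, "amount": "bad"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Basic data-quality step: cast amounts to float and drop malformed rows.
        cleaned = []
        for row in rows:
            try:
                cleaned.append({"order_id": row["order_id"], "amount": float(row["amount"])})
            except (KeyError, ValueError):
                continue  # skip records that fail validation
        return cleaned

    @task
    def load(rows: list[dict]) -> None:
        # In practice this would write to a warehouse such as Snowflake or Redshift
        # via a provider hook; printing keeps the example runnable anywhere.
        print(f"Loading {len(rows)} clean rows")

    load(transform(extract()))


daily_sales_etl()
```

In a production setting the same structure would typically swap the in-memory sample for Spark or warehouse reads, add data-quality checks and alerting, and be deployed through the CI/CD and Git workflow the qualifications list calls out.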