

Principal Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Principal Data Engineer in Houston, Texas, for 6 months at $80-$85/hour. Requires 10+ years in data engineering, expertise in AWS/Azure, Python, SQL, and experience with data pipelines, ETL/ELT, and data modeling.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
680
🗓️ - Date discovered
August 7, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
1099 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Houston, TX
🧠 - Skills detailed
#Automation #Lambda (AWS Lambda) #GCP (Google Cloud Platform) #REST API #Spark (Apache Spark) #SQL (Structured Query Language) #SQS (Simple Queue Service) #Data Modeling #Shell Scripting #API (Application Programming Interface) #Cloud #GitHub #Batch #S3 (Amazon Simple Storage Service) #dbt (data build tool) #Metadata #Data Management #Data Integration #Python #AWS (Amazon Web Services) #SNS (Simple Notification Service) #Physical Data Model #Data Catalog #RDS (Amazon Relational Database Service) #Data Pipeline #Data Engineering #AI (Artificial Intelligence) #Azure #Data Conversion #ETL (Extract, Transform, Load) #Programming #Data Processing #Data Extraction #Snowflake #Scripting #Computer Science #REST (Representational State Transfer) #Data Transformations
Role description
Job Details:
Role: Principal Data Engineer
Location: Houston, Texas 77002; on-site four days per week (Monday-Thursday)
Duration: 6 months (potential to extend or convert)
Pay rate: $80-$85/hour
Start date: end of July / beginning of August (tentative)
Interview process: 2 rounds
The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, a 401(k) retirement plan, life insurance, long-term disability insurance, short-term disability insurance, paid parking/public transportation, and, as applicable, paid time off, paid sick and safe time, paid vacation time, paid parental leave, and paid holidays.
Job Description:
Responsibilities:
• Translate business requirements into technical specifications; establish and define details, definitions, and requirements of applications, components, and enhancements.
• Participate in project planning; identify milestones, deliverables, and resource requirements; track activities and task execution.
• Generate design, development, and test plans, detailed functional specification documents, user interface designs, and process flow charts to guide programming work.
• Develop data pipelines and APIs using Python, SQL, and potentially Spark, on AWS, Azure (including Fabric), or GCP.
• Use an analytical, data-driven approach to drive a deep understanding of a fast-changing business.
• Build large-scale batch and real-time data pipelines with data processing frameworks in AWS, Snowflake, and dbt (a minimal sketch follows this list).
• Migrate data from on-premises systems to the cloud and perform cloud data conversions.
• Implement large-scale data ecosystems including data management, governance, and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms.
• Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions.
• Drive operational efficiency by maintaining data ecosystems, sourcing analytics expertise, and providing as-a-service offerings for continuous insights and improvements.
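For context on the pipeline responsibilities above, the sketch below shows one minimal way an S3-triggered batch step might look in Python on AWS Lambda. It is illustrative only: the bucket name, object layout, and field names are hypothetical assumptions, not details taken from this role, and a production pipeline would add error handling, schema validation, and orchestration.

```python
# Illustrative sketch only: a minimal S3-triggered AWS Lambda batch step.
# Bucket names, object layout, and field names are hypothetical assumptions.
import json

import boto3

s3 = boto3.client("s3")

CURATED_BUCKET = "example-curated-bucket"  # hypothetical target bucket


def handler(event, context):
    # Standard S3 put-event notification: locate the newly landed raw object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    # Trivial transformation over JSON-lines input: keep only the fields
    # downstream models are assumed to need.
    rows = [json.loads(line) for line in body.splitlines() if line.strip()]
    curated = [
        {"id": r.get("id"), "amount": r.get("amount"), "event_ts": r.get("event_ts")}
        for r in rows
    ]

    # Write the curated output; in practice this would land in a location
    # read by Snowflake external stages or dbt models.
    s3.put_object(
        Bucket=CURATED_BUCKET,
        Key=f"curated/{key}",
        Body="\n".join(json.dumps(r) for r in curated).encode("utf-8"),
    )
    return {"rows_processed": len(curated)}
```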
Qualifications:
• 10+ years of experience in data engineering with an emphasis on data analytics and reporting.
• 6+ years of experience with the AWS and Azure cloud platforms.
• 2+ years of experience building data pipelines to support AI models; strong understanding of LLMs, prompt engineering, and AI-assisted ("vibe") coding with GitHub Copilot or similar tools; able to help the team build faster using enterprise-approved AI tools.
• 10+ years of experience in SQL, data transformations, and ETL/ELT, including experience with the Snowflake, Fabric, and S3 data platforms.
• 10+ years of experience designing and building data extraction, transformation, and loading processes by writing custom data pipelines.
• 6+ years of experience with one or more of the following scripting languages: Python, SQL, or shell scripting.
• 6+ years of experience designing and building real-time data pipelines using cloud services such as S3, Kinesis, RDS, Snowflake, Lambda, Glue, API Gateway, SQS, SNS, CloudWatch, CloudFormation, and dbt.
• 5+ years of experience with REST API data integrations, both pulling data into Snowflake and posting data from Snowflake, including an understanding of webhooks and implementations using AWS API Gateway (a minimal sketch follows this list).
• Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or related IT discipline, or equivalent experience.
• 4+ years of experience in data modeling to support descriptive analytics in Power BI.
• 4+ years of experience in data cataloging, empowering business and digital partners to use data catalogs and metadata for self-service.
• Strong understanding of data management principles and best practices.
• Understanding of data modeling, including conceptual, logical, and physical data models.
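As an illustration of the REST API and Snowflake integration qualification above, the sketch below shows one way such a pull into Snowflake might look in Python using the requests and snowflake-connector-python libraries. The endpoint, response shape, table, and connection details are hypothetical placeholders; real credentials would come from a secrets manager, and larger loads would typically go through a stage and COPY INTO rather than row inserts.

```python
# Illustrative sketch only: pull records from a hypothetical REST endpoint and
# land them in a Snowflake table. Endpoint, schema, and credentials are placeholders.
import requests
import snowflake.connector

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint
API_TOKEN = "..."  # in practice, retrieved from a secrets manager


def load_orders():
    resp = requests.get(
        API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}, timeout=30
    )
    resp.raise_for_status()
    # Assumed response shape: a JSON list of {"id", "amount", "created_at"} objects.
    orders = resp.json()

    conn = snowflake.connector.connect(
        account="example_account",  # placeholder connection details
        user="example_user",
        password="example_password",
        warehouse="EXAMPLE_WH",
        database="EXAMPLE_DB",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # Simple row inserts for illustration; bulk loads would normally use a
        # stage plus COPY INTO for anything beyond small volumes.
        cur.executemany(
            "INSERT INTO raw_orders (id, amount, created_at) VALUES (%s, %s, %s)",
            [(o["id"], o["amount"], o["created_at"]) for o in orders],
        )
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    load_orders()
```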