

Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Staff Database Engineer in Austin, TX, lasting 5 months at a pay rate of $70 to $90/hr. It requires 7–10+ years in data engineering and expertise in SQL, SSIS, ADF, and cloud platforms (Azure, AWS, GCP).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
720
🗓️ - Date discovered
August 1, 2025
🕒 - Project duration
3 to 6 months
🏝️ - Location type
On-site
📄 - Contract type
1099 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Austin, TX
🧠 - Skills detailed
#Data Lake #REST (Representational State Transfer) #SSIS (SQL Server Integration Services) #SQL (Structured Query Language) #Deployment #ADF (Azure Data Factory) #Java #Data Architecture #Data Processing #Logging #Compliance #Terraform #Data Science #Kafka (Apache Kafka) #.Net #AWS (Amazon Web Services) #Data Design #Python #Batch #Data Engineering #Scala #REST API #SNS (Simple Notification Service) #Azure #Big Data #ETL (Extract, Transform, Load) #Computer Science #Kubernetes #GCP (Google Cloud Platform) #Programming #Database Systems #Monitoring #Cloud #Debugging #Documentation
Role description
Job title: Staff Database Engineer
Location: Austin, TX
Duration: 5 months
Pay rate: $70 to $90/hr
Description:
Overview
We are seeking an experienced Staff Database Engineer (contractor) to design, build, and optimize complex data systems. This senior-level contractor will work across multiple domains including data architecture, pipeline development, and system operations.
Key Responsibilities
Design and implement scalable and reliable data architectures that support large-scale data processing, transformation, and analysis.
Develop, maintain, and optimize ETL/ELT pipelines using modern tools and frameworks to move and transform data from diverse sources (flat files, streaming systems, REST APIs, EHRs, etc.); a brief pipeline sketch follows this list.
Build and support high-performance, cloud-based systems for real-time and batch processing (e.g., data lakes, warehouses, and mesh architectures).
Collaborate with stakeholders across engineering, data science, and product teams to gather requirements and deliver actionable data solutions.
Interface with Electronic Health Records (EHR) and healthcare data formats to ensure integration accuracy and compliance.
Own operational excellence for data systems including logging, monitoring, alerting, and incident response.
Utilize advanced programming skills (.NET, Java, or similar) and SQL to engineer robust data services.
Contribute to architecture frameworks and documentation to guide team standards and best practices.
Act as a subject matter expert (SME), mentoring junior engineers and promoting engineering excellence across the organization.
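For a concrete flavor of the pipeline work above, here is a minimal sketch, expressed in Python for brevity even though the posting's tooling is SSIS/ADF. Every name in it (file path, table, connection string, key column) is a hypothetical example, not a detail from this role:

```python
# Minimal ETL sketch: extract a flat file, apply light transforms, load to a
# staging table. All names here (file, table, connection string) are
# hypothetical illustrations, not details from this posting.
import pandas as pd
from sqlalchemy import create_engine

def run_pipeline(csv_path: str, conn_str: str) -> int:
    # Extract: read the source flat file.
    df = pd.read_csv(csv_path)

    # Transform: normalize column names and drop rows missing the key field.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna(subset=["record_id"])

    # Load: append into a staging table for downstream processing.
    engine = create_engine(conn_str)
    with engine.begin() as conn:
        df.to_sql("stg_source_records", conn, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    n = run_pipeline(
        "source_extract.csv",
        "mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server",
    )
    print(f"Loaded {n} rows")
```

In practice the transform step would carry the EHR-specific mapping and validation logic this role calls out, along with the logging and alerting expected for operational excellence.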
Qualifications
7–10+ years of professional experience in data engineering, software development, or database systems.
Proven experience with SSIS and ADF.
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience.
Expertise in SQL, database systems, and modern data processing tools and frameworks.
Strong proficiency in at least one programming language (.NET, Java, Python, etc.).
Demonstrated experience with modern cloud platforms (Azure, AWS, or GCP).
Familiarity with data streaming and queuing technologies (Kafka, SNS, RabbitMQ, etc.).
Understanding of CI/CD pipelines, infrastructure-as-code (Terraform), and containerized deployments (e.g., Kubernetes).
Comfortable with production system support, debugging, and performance optimization.
Strong problem-solving, communication, and collaboration skills.
High-level understanding of big data design patterns and architectural principles (e.g., data lake vs. warehouse vs. mesh).
Experience with RESTful APIs and integrating external data sources into internal systems.
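To put that last point in concrete terms, a hedged Python sketch of pulling a paginated external REST source into an internal staging store follows. The endpoint, auth scheme, and response shape ("results"/"next") are assumptions made for the example:

```python
# Sketch: walk a paginated REST API and stage raw records locally. The
# endpoint, token, and response shape are assumptions for illustration,
# not details from this posting.
import json
import sqlite3
import requests

API_URL = "https://api.example.com/v1/records"  # hypothetical endpoint

def fetch_records(token: str):
    # Follow "next" links until the API signals the last page.
    url, headers = API_URL, {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        yield from page["results"]
        url = page.get("next")  # None ends the loop

def stage(records, db_path: str = "staging.db") -> None:
    # Land raw payloads keyed by id; downstream jobs parse and validate.
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS raw_records (id TEXT PRIMARY KEY, payload TEXT)"
    )
    con.executemany(
        "INSERT OR REPLACE INTO raw_records VALUES (?, ?)",
        ((r["id"], json.dumps(r)) for r in records),
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    stage(fetch_records(token="..."))
```

A production version would swap sqlite3 for the actual warehouse target and add the retry, monitoring, and incident-response hooks the responsibilities above emphasize.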