Aptino, Inc.

SQL Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a SQL Developer/Data Engineer on a 12-month remote contract based in Redmond, WA (PST hours). It requires 5+ years in database development, expertise in SQL and PySpark, and experience with Azure Synapse or Databricks.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
February 17, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Washington, United States
-
🧠 - Skills detailed
#Data Analysis #Spark (Apache Spark) #Indexing #Complex Queries #Database Indexing #Databricks #Scala #Computer Science #SQL Queries #Triggers #Data Pipeline #Data Engineering #Synapse #Azure Synapse Analytics #Deployment #PySpark #Big Data #Databases #ETL (Extract, Transform, Load) #Data Quality #Azure #Database Performance #Data Modeling #Data Processing #Cloud #SQL (Structured Query Language)
Role description
Role Name: Data Engineer
Location: Redmond, WA (Remote – PST hours)
Duration: 12 months

We are seeking a skilled Data Engineer with strong hands-on experience in database development and big data processing. The ideal candidate will have expertise in writing complex SQL queries, optimizing database performance, and working with PySpark-based data processing frameworks. The role requires strong reverse engineering skills to understand, enhance, and modernize existing data pipelines and database structures.

Key Responsibilities:
• Design, develop, and maintain complex SQL queries, stored procedures, functions, triggers, and SQL jobs.
• Perform database indexing, query tuning, and performance optimization to ensure high system efficiency.
• Develop and manage scalable data pipelines using PySpark.
• Work with cloud-based data platforms such as Azure Synapse Analytics, Microsoft Fabric, or Databricks.
• Reverse engineer legacy databases, ETL processes, and existing code to improve performance and scalability.
• Analyze and optimize data models, schemas, and transformations.
• Collaborate with cross-functional teams including Data Analysts, Architects, and Business stakeholders.
• Ensure data quality, integrity, and governance standards are maintained.
• Support production deployments and troubleshoot data-related issues.

Required Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 5+ years of experience in database development and data engineering.
• Strong hands-on experience with SQL (complex queries, stored procedures, triggers, indexing, performance tuning).
• Solid experience with PySpark for large-scale data processing.
• Strong understanding of data modeling and ETL processes.
• Experience with Azure Synapse, Microsoft Fabric, or Databricks.
• Strong reverse engineering and analytical skills.