

Aptino, Inc.
SQL Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a SQL Developer/Data Engineer on a 12-month contract, remote and aligned to Redmond, WA (PST hours). It requires 5+ years of database development experience, expertise in SQL and PySpark, and experience with Azure Synapse or Databricks.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
February 17, 2026
Duration
More than 6 months
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Washington, United States
-
Skills detailed
#Data Analysis #Spark (Apache Spark) #Indexing #Complex Queries #Database Indexing #Databricks #Scala #Computer Science #SQL Queries #Triggers #Data Pipeline #Data Engineering #Synapse #Azure Synapse Analytics #Deployment #PySpark #Big Data #Databases #ETL (Extract, Transform, Load) #Data Quality #Azure #Database Performance #Data Modeling #Data Processing #Cloud #SQL (Structured Query Language)
Role description
Role Name: Data Engineer
Location: Redmond, WA (Remote, PST hours)
Duration: 12 months
We are seeking a skilled Data Engineer with strong hands-on experience in database development and big data processing. The ideal candidate will have expertise in writing complex SQL queries, optimizing database performance, and working with PySpark-based data processing frameworks. The role requires strong reverse engineering skills to understand, enhance, and modernize existing data pipelines and database structures.
Key Responsibilities:
• Design, develop, and maintain complex SQL queries, stored procedures, functions, triggers, and SQL jobs.
• Perform database indexing, query tuning, and performance optimization to ensure high system efficiency.
• Develop and manage scalable data pipelines using PySpark (see the PySpark sketch after this list).
• Work with cloud-based data platforms such as Azure Synapse Analytics, Microsoft Fabric, or Databricks.
• Reverse engineer legacy databases, ETL processes, and existing code to improve performance and scalability.
• Analyze and optimize data models, schemas, and transformations.
• Collaborate with cross-functional teams, including Data Analysts, Architects, and Business stakeholders.
• Ensure data quality, integrity, and governance standards are maintained.
• Support production deployments and troubleshoot data-related issues.
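As context for the PySpark responsibilities above, here is a minimal pipeline sketch with a simple data-quality gate and an aggregation step. It assumes nothing from the posting: the table, column, and value names (orders, region, order_total) are illustrative, and a real pipeline would read from and write to Synapse, Fabric, or Databricks tables rather than an in-memory DataFrame.

```python
# Minimal PySpark pipeline sketch; all names and data are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Stand-in source data so the example is self-contained; a production job
# would read from a lakehouse or warehouse table instead.
orders = spark.createDataFrame(
    [(1, "WEST", 120.0), (2, "EAST", None), (3, "WEST", 75.5)],
    ["order_id", "region", "order_total"],
)

# Basic data-quality gate: drop rows with missing totals, keep non-negative amounts.
clean = orders.dropna(subset=["order_total"]).filter(F.col("order_total") >= 0)

# A simple aggregation of the kind a reporting pipeline might publish downstream.
summary = clean.groupBy("region").agg(
    F.count("*").alias("order_count"),
    F.sum("order_total").alias("total_revenue"),
)

summary.show()
```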
Required Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 5+ years of experience in database development and data engineering.
• Strong hands-on experience with SQL (complex queries, stored procedures, triggers, indexing, performance tuning); see the Spark SQL sketch after this list.
• Solid experience with PySpark for large-scale data processing.
• Strong understanding of data modeling and ETL processes.
• Experience with Azure Synapse, Microsoft Fabric, or Databricks.
• Strong reverse engineering and analytical skills.
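For the SQL expectations above, the sketch below expresses an aggregation in Spark SQL and prints its physical plan with explain(), which is one way to reason about performance before adjusting joins or partitioning (Spark has no SQL Server-style indexes, so plan inspection stands in for index tuning here). The temporary view and columns are hypothetical and match the earlier sketch.

```python
# Hedged Spark SQL sketch; view name, columns, and data are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Register a tiny in-memory view so the query below can run as-is.
spark.createDataFrame(
    [(1, "WEST", 120.0), (2, "EAST", 40.0)],
    ["order_id", "region", "order_total"],
).createOrReplaceTempView("orders")

# Express the transformation as SQL, then inspect the physical plan before tuning.
df = spark.sql("""
    SELECT region, SUM(order_total) AS total_revenue
    FROM orders
    GROUP BY region
""")
df.explain()  # prints the physical plan Spark will execute
df.show()
```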