Paul Murphy Associates

Data/SQL Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data/SQL Developer with a contract length of more than 6 months, offering a hybrid work location. Key skills include SQL Server, Azure Databricks, and ETL/ELT pipeline development. A Bachelor’s degree and 5–6+ years of experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 17, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Azure Data Factory #Storage #SQL Queries #Databricks #Logging #PySpark #SSIS (SQL Server Integration Services) #Security #Monitoring #Data Engineering #Data Modeling #Automation #Azure #Synapse #Database Design #ADLS (Azure Data Lake Storage) #Scala #Compliance #SQL (Structured Query Language) #DevOps #Computer Science #Version Control #Data Quality #Data Science #Visualization #Microsoft Power BI #Azure DevOps #Azure ADLS (Azure Data Lake Storage) #BI (Business Intelligence) #Spark (Apache Spark) #Azure Databricks #Data Pipeline #Delta Lake #Python #Azure SQL #SQL Server #ADF (Azure Data Factory) #GIT #Cloud #Deployment #Azure Synapse Analytics #Data Lake #ETL (Extract, Transform, Load) #Azure SQL Database #Data Governance #Spark SQL #Data Integrity
Role description
Job Title: Data / SQL Engineer
Location: Hybrid (3 days in office, 2 days remote)

Applicants must be authorized to work in the United States on a full-time basis without current or future employer sponsorship.

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust and scalable database solutions to support business operations. This role requires strong expertise in SQL Server, database modeling, database design, and performance optimization. You will collaborate across teams to translate complex business requirements into effective technical solutions. Additionally, you will work on database design from the ground up and leverage tools like SSIS to support various data-related processes. The ideal candidate combines strong SQL development expertise with experience using Azure Databricks, Data Lake, and related Azure data services to deliver robust, efficient, and well-governed data solutions.

Key Responsibilities
• Design, develop, and maintain ETL/ELT data pipelines using Azure Databricks (PySpark, SQL, Delta Lake).
• Build and optimize SQL queries, stored procedures, and views in Azure SQL Database or Azure Synapse Analytics.
• Develop and manage data models, schemas, and tables to support business intelligence and data analytics.
• Integrate data from diverse sources (on-premises, APIs, cloud storage) into Azure Data Lake Storage (ADLS Gen2).
• Collaborate with analysts, data scientists, and business teams to define data requirements and ensure data integrity.
• Implement data quality checks, logging, and monitoring for pipeline reliability and consistency.
• Apply performance tuning techniques for SQL and Spark workloads to ensure efficient execution.
• Automate deployment, orchestration, and monitoring using Azure Data Factory (ADF) and Azure DevOps.
• Ensure adherence to data governance, security, and compliance standards (RBAC, encryption, lineage, etc.).
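To illustrate the data-quality responsibility above: a pipeline step typically splits incoming rows into valid and rejected sets before loading. A minimal plain-Python sketch (all names such as `validate_rows` and the sample columns are illustrative, not from the posting; a Databricks implementation would express the same logic over Spark DataFrames):

```python
def validate_rows(rows, required_cols):
    """Split rows into (valid, rejected) based on non-null required columns.

    Rejected entries carry the list of missing columns so they can be
    logged and monitored, as the responsibilities above describe.
    """
    valid, rejected = [], []
    for row in rows:
        missing = [c for c in required_cols if row.get(c) in (None, "")]
        if missing:
            rejected.append({"row": row, "missing": missing})
        else:
            valid.append(row)
    return valid, rejected

# Illustrative input: one clean order, one with a null amount.
orders = [
    {"order_id": 1, "amount": 99.5, "customer": "acme"},
    {"order_id": 2, "amount": None, "customer": "globex"},
]
good, bad = validate_rows(orders, ["order_id", "amount", "customer"])
```

Here `good` holds the single complete row and `bad` records which columns failed, which is the kind of check-plus-logging pattern the role calls for.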
Required Qualifications
• Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field.
• 5–6+ years of professional experience as a Data Engineer or SQL Developer.
• Proven experience developing and maintaining data pipelines using Azure Databricks.
• Strong SQL development and optimization skills (T-SQL, stored procedures, query tuning).
• Hands-on experience with Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS).
• Familiarity with Azure Synapse Analytics or Azure SQL Database.
• Strong understanding of data warehousing, data modeling, and ETL best practices.
• Experience with Git-based version control and basic DevOps practices for data deployments.
• Proficiency in Python or PySpark for data transformation within Databricks.
• Experience with Delta Lake and Medallion architecture (Bronze/Silver/Gold layers).
• Knowledge of Power BI or similar BI tools for data visualization.
• Understanding of Azure DevOps CI/CD pipelines and automation for data workflows.
• Familiarity with data governance frameworks and Azure Purview / Microsoft Fabric (a plus).
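The Medallion architecture named in the qualifications layers data as raw (Bronze) → cleaned (Silver) → aggregated (Gold). A minimal plain-Python sketch of that flow, with Delta Lake and Databricks specifics omitted and all function names (`to_silver`, `to_gold`) hypothetical:

```python
def to_silver(bronze_rows):
    """Silver layer: drop rows missing an amount, normalize customer names."""
    return [
        {**r, "customer": r["customer"].strip().lower()}
        for r in bronze_rows
        if r.get("amount") is not None
    ]

def to_gold(silver_rows):
    """Gold layer: total amount per customer, ready for BI consumption."""
    totals = {}
    for r in silver_rows:
        totals[r["customer"]] = totals.get(r["customer"], 0) + r["amount"]
    return totals

# Bronze holds data as ingested, including duplicates and nulls.
bronze = [
    {"customer": " Acme ", "amount": 10.0},
    {"customer": "acme", "amount": 5.0},
    {"customer": "globex", "amount": None},  # dropped at the Silver layer
]
gold = to_gold(to_silver(bronze))
```

In Databricks the same layers would typically be Delta tables, with each function replaced by a Spark transformation writing to the next layer.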