Anblicks

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with an unspecified contract length and pay rate. It requires expertise in SQL Server, Azure Databricks, and cloud data engineering, with 12+ years of relevant experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 1, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Indexing #ADF (Azure Data Factory) #Code Reviews #Data Processing #Observability #Snowflake #AWS (Amazon Web Services) #Databases #SQL Server #Deployment #Azure Blob Storage #Azure Data Factory #Computer Science #Data Lake #Data Engineering #Cloud #Storage #Compliance #ML (Machine Learning) #Data Architecture #DynamoDB #Data Governance #Data Quality #MongoDB #Data Modeling #Data Pipeline #ETL (Extract, Transform, Load) #Schema Design #Azure Cosmos DB #Azure Databricks #Security #Apache Airflow #Scrum #Databricks #NoSQL #Data Security #Airflow #Scala #SQL (Structured Query Language) #Agile #Monitoring #Version Control #Azure
Role description
Job Summary: We are seeking a Senior Data Engineer to design, build, and optimize scalable data platforms and pipelines that power analytics, reporting, and data-driven decision-making. This is a highly technical, hands-on role focused on data architecture, performance, reliability, and security across relational, NoSQL, and cloud-native storage systems. The ideal candidate brings deep experience in data engineering and software development and leads primarily through technical ownership and influence.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and architectures using SQL Server, Azure Databricks, Azure Cosmos DB, and Azure Blob Storage.
• Build and optimize ETL/ELT workflows to ingest data from transactional systems, APIs, and streaming sources into analytical data stores.
• Develop and maintain data models for both relational (SQL Server, Snowflake) and NoSQL platforms (Cosmos DB), optimized for performance and analytical use cases.
• Implement data quality, validation, and monitoring frameworks to ensure accuracy, reliability, and observability across pipelines.
• Optimize query performance, indexing strategies, partitioning, and storage costs across SQL Server and cloud data platforms.
• Implement and enforce data security and compliance best practices, including role-based access control, encryption, and secure data handling.
• Collaborate with analytics, application, and platform engineering teams to translate business requirements into robust technical solutions.
• Build and maintain orchestration workflows using tools such as Azure Data Factory and Apache Airflow.
• Support CI/CD pipelines for data solutions, including version control, automated testing, and deployment.
• Evaluate and adopt new data technologies and patterns to continuously improve platform scalability and resilience.
• Provide technical guidance and mentorship through code reviews, design discussions, and best-practice advocacy.
Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• 12+ years of experience in software development, with substantial hands-on experience in data engineering.
• Strong expertise in SQL Server, including schema design, performance tuning, indexing, and stored procedures.
• Hands-on experience with Azure Cosmos DB (data modeling, partitioning, throughput optimization).
• Experience working with Azure Blob Storage and cloud-based data lakes.
• Proficiency with Databricks, NoSQL databases (Cosmos DB, MongoDB, DynamoDB), and distributed data processing.
• Deep understanding of ETL/ELT, data warehousing, and data modeling concepts.
• Experience with cloud platforms, primarily Azure (AWS exposure a plus).
• Hands-on experience with data pipeline orchestration tools such as Azure Data Factory or Apache Airflow.
• Strong problem-solving skills and attention to detail, with a focus on scalability and reliability.
• Ability to clearly communicate technical concepts to both technical and non-technical stakeholders.
Preferred Qualifications:
• Experience preparing and serving data for machine learning and advanced analytics use cases.
• Background in building and operating large-scale, distributed data environments.
• Experience working in Agile/Scrum teams.
• Familiarity with data governance, lineage, and observability tools in regulated environments.