Zeektek

Data Engineer - Azure, Fabric, ETL, SQL, PySpark

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract of more than 6 months, at a day rate of $681. Key skills include Azure, Fabric, ETL, SQL, and PySpark. A strong background in data pipeline development and database design is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
681
-
🗓️ - Date
December 2, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Greater Sacramento
-
🧠 - Skills detailed
#Agile #DevOps #Quality Assurance #NoSQL #Scala #Data Governance #Azure cloud #BI (Business Intelligence) #GIT #Azure #PySpark #Jenkins #Data Engineering #Spark (Apache Spark) #SQL Server #Data Quality #Python #Storage #Documentation #Complex Queries #Data Warehouse #Oracle #Data Modeling #ETL (Extract, Transform, Load) #Data Lake #Databases #Data Ingestion #Security #Compliance #Database Design #Data Pipeline #Data Security #SQL (Structured Query Language) #Azure DevOps #Automation #Programming #Microservices #Deployment #Cloud
Role description
Job Title: Data Engineer

Role Overview
The Data Engineer plays a key role in designing, building, and maintaining data pipelines and infrastructure that support the organization's analytics and reporting needs. This position focuses on converting raw data into actionable insights by ensuring data quality, consistency, security, and accessibility. The Data Engineer will collaborate closely with business data leads, BI engineers, and other stakeholders to deliver reliable, high-performing data solutions.

Essential Responsibilities

Data Pipeline Development
• Design and implement scalable, complex ETL (extract, transform, load) data pipelines from diverse sources such as databases, APIs, and files (illustrated in the first sketch below).
• Develop automated data ingestion, mock data generation, and validation processes.
• Optimize data pipelines for performance and efficiency.
• Maintain source control best practices (e.g., branch protection, Git-flow).
• Implement CI/CD pipelines, including build, test, and automated deployment (e.g., Jenkins, Travis CI, Azure DevOps).
• Demonstrate strong analytical and programming skills.

Database Design and Tuning
• Ensure referential integrity and query optimization across systems.
• Design and maintain scalable database solutions that support high-performance analytics.

Data Infrastructure Management
• Manage and maintain data warehouses, data lakes, and other storage solutions.
• Ensure data security, privacy, and compliance with applicable regulations.
• Monitor system performance and troubleshoot issues proactively.

Data Modeling
• Develop and maintain data models that accurately represent business processes and entities.
• Support data governance initiatives by implementing and maintaining policies and standards.

Data Quality Assurance
• Establish and implement data quality checks, validation rules, and error-handling procedures (illustrated in the second sketch below).
• Identify and resolve data inconsistencies or quality issues.

Tool Selection and Implementation
• Evaluate, recommend, and implement modern data engineering tools and technologies.
• Maintain and manage data engineering platforms to ensure optimal performance.

Collaboration and Agile Development
• Partner with business data leads, project managers, and stakeholders to define data requirements.
• Provide technical guidance and support for data-driven initiatives.
• Participate in agile development processes, including sprint planning, daily standups, and backlog grooming.
• Analyze business and technical requirements to develop documentation, designs, and test plans.
• Demonstrate flexibility by learning new technologies and contributing across the full stack when needed.

Qualifications
• Preferred: Experience with Azure Fabric and Azure Cloud environments.
• Strong experience with relational databases (e.g., Oracle, SQL Server) and NoSQL databases.
• Advanced proficiency in SQL and experience with complex queries and optimization.
• Skilled in Python and PySpark for data engineering and automation.
• Proven experience in performance tuning of data pipelines and database queries.
• Solid understanding of DevOps, microservices, and modern application design principles.
• Strong analytical, quantitative, and problem-solving abilities.
• Excellent written and verbal communication skills; able to convey complex ideas through documentation and design diagrams.
• Education: High School Diploma or GED required; Bachelor's degree preferred.
• Other Requirements: Valid driver's license.
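To make the pipeline-development duties concrete, here is a minimal PySpark ETL sketch of the kind of extract-transform-load flow the role describes. Every path, column, and dataset name is a hypothetical illustration, not something specified in the posting.

```python
# Minimal PySpark ETL sketch. All paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: ingest raw CSV files from a (hypothetical) landing zone.
raw = spark.read.option("header", True).csv("/landing/orders/*.csv")

# Transform: cast types, derive a partition column, drop rows missing keys.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Load: write partitioned Parquet to a (hypothetical) curated lake path.
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "/lake/curated/orders"
)
```

In practice a job like this would be parameterized and promoted through the CI/CD tooling the posting mentions (Jenkins, Travis CI, or Azure DevOps) rather than run ad hoc.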
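A second sketch illustrates the data quality assurance duties: two hypothetical validation rules (a null check and a referential-integrity check via an anti-join) that raise an error before bad data propagates downstream. Table locations and rule names are assumptions for illustration only.

```python
# Hypothetical data-quality gate: fail the run if validation rules trip.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_dq").getOrCreate()

orders = spark.read.parquet("/lake/curated/orders")        # hypothetical path
customers = spark.read.parquet("/lake/curated/customers")  # hypothetical path

# Rule 1: required key fields must be non-null.
null_keys = orders.filter(F.col("customer_id").isNull()).count()

# Rule 2: referential integrity -- every order must reference a known customer.
orphans = orders.join(customers, "customer_id", "left_anti").count()

failed = {name: n for name, n in
          {"null_customer_id": null_keys, "orphan_orders": orphans}.items()
          if n > 0}
if failed:
    # Error handling: stop the pipeline and report which rules failed.
    raise ValueError(f"Data quality checks failed: {failed}")
```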