Qbtech

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a remote Senior Data Engineer contract position offering $95.00 - $120.00 per hour. Key skills include ETL development, cloud platforms (AWS, Azure), big data technologies (Hadoop, Spark), and strong SQL. Experience in data architecture is required; familiarity with machine learning workflows is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
960
🗓️ - Date
December 21, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Remote
🧠 - Skills detailed
#Data Science #Data Engineering #Data Modeling #Data Processing #API (Application Programming Interface) #Cloud #Oracle #Databases #Data Quality #Data Extraction #Azure #Shell Scripting #Microsoft SQL #SQL Queries #Looker #Normalization #ML (Machine Learning) #Documentation #VBA (Visual Basic for Applications) #Code Reviews #SQL Server #Database Schema #Python #Data Integration #PySpark #Talend #Microsoft SQL Server #SQL (Structured Query Language) #Unix #Bash #Programming #MS SQL (Microsoft SQL Server) #Data Pipeline #Indexing #Datasets #Database Performance #Hadoop #S3 (Amazon Simple Storage Service) #Spark (Apache Spark) #AWS (Amazon Web Services) #Agile #Data Warehouse #Informatica #Apache Hive #Big Data #ETL (Extract, Transform, Load) #Scripting #Automation #HDFS (Hadoop Distributed File System) #Scala #Visualization #Data Lake #Java #Data Architecture #BI (Business Intelligence)
Role description
Job Overview
We are seeking an experienced Senior Data Engineer to join our dynamic data team. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines and data warehouses to support advanced analytics and business intelligence initiatives. This role requires a deep understanding of big data technologies, cloud platforms, and data modeling to enable efficient data flow and insightful analysis across the organization. The Senior Data Engineer will collaborate with cross-functional teams to implement best practices in data architecture, ensure data quality, and optimize data processing workflows.

Duties
• Design, develop, and maintain robust ETL processes using tools such as Talend, Informatica, and custom scripting with Python, Bash, or shell scripting.
• Build and manage scalable data pipelines utilizing Hadoop, Spark, Apache Hive, Azure Data Lake, and AWS services to handle large volumes of structured and unstructured data.
• Develop and optimize complex SQL queries for data extraction, transformation, and loading across platforms including Microsoft SQL Server, Oracle, and other relational databases.
• Architect and implement efficient data models for data warehouses supporting analytics and reporting needs.
• Collaborate with data scientists to facilitate model training by preparing clean datasets and integrating machine learning workflows.
• Design database schemas and ensure optimal database performance through indexing, partitioning, and normalization techniques.
• Integrate diverse data sources such as Linked Data and RESTful APIs into centralized repositories for comprehensive analysis.
• Support agile development practices by participating in sprint planning, code reviews, and continuous integration efforts.
• Monitor system performance, troubleshoot issues promptly, and implement improvements to ensure high availability of data services.
• Contribute to documentation of architecture designs, workflows, and best practices for team knowledge sharing.

Skills
• Extensive experience with cloud platforms such as AWS (including S3), Azure Data Lake, and related cloud services.
• Strong programming skills in Java, Python, VBA, Bash (Unix shell), and shell scripting for automation tasks.
• Proficiency with big data technologies including the Hadoop ecosystem (HDFS), Spark (PySpark), Apache Hive, and related tools.
• Expertise in ETL development using Talend, Informatica, or similar tools; strong SQL skills for complex query development across multiple databases such as SQL Server and Oracle.
• Knowledge of modern data architecture concepts including data warehouse design, Linked Data integration, and database modeling techniques.
• Familiarity with analytics tools such as Looker for visualization.
• Experience with RESTful API integration for seamless data exchange between systems.
• Understanding of model training processes within machine learning workflows is a plus.
• Strong analytical skills with the ability to interpret complex datasets to inform decision-making.
• Experience working within Agile methodologies to deliver iterative improvements efficiently.

This role offers an exciting opportunity to work at the forefront of data engineering technology in a collaborative environment focused on innovation and excellence in analytics solutions.

Job Type: Contract
Pay: $95.00 - $120.00 per hour
Work Location: Remote