Raas Infotek

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Cloud Data Engineer on a 12+ month contract, remote (must work EST hours). It requires 5+ years of data engineering experience, expertise in Azure services, and proficiency in Python, SQL, and big data technologies. Bachelor’s degree required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 10, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Hadoop #Azure #Azure cloud #Data Warehouse #Spark (Apache Spark) #Scala #NoSQL #Python #Automation #Compliance #Cloud #ADF (Azure Data Factory) #Azure DevOps #ETL (Extract, Transform, Load) #Synapse #DevOps #Databricks #Documentation #Data Lake #Leadership #Azure Data Factory #Data Security #Data Ingestion #Data Engineering #Data Pipeline #Security #Data Integration #SQL (Structured Query Language) #Computer Science #Big Data #Data Modeling #Databases
Role description
Job Title: Senior Cloud Data Engineer
Location: McLean, VA preferred – Remote (must work EST hours)
Duration: 12+ Months (W2 Contract – No C2C)

Overview:
We are seeking a Senior Cloud Data Engineer to design, build, and optimize modern data pipelines and architectures on the Azure cloud platform. The ideal candidate will have deep expertise in data engineering, advanced knowledge of Azure services, and hands-on experience with big data technologies to drive scalable, high-performing data solutions.

Key Responsibilities:
• Develop and maintain scalable data pipelines, ETL/ELT workflows, and data integration solutions in the cloud.
• Design, implement, and optimize data lake and data warehouse architectures.
• Collaborate with architects and business teams to ensure data solutions meet performance, security, and compliance requirements.
• Build and manage data ingestion frameworks using Spark, Databricks, or similar tools.
• Monitor and enhance data workflows for performance and cost efficiency.
• Document data flow diagrams, architecture, and system processes.

Required Qualifications:
• Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
• 5+ years of experience in data engineering, including large-scale ETL development and data pipeline management.
• Proven expertise with Azure Data Factory, Azure Synapse, Azure Data Lake, and Databricks.
• Strong proficiency in Python and SQL for data transformation and automation.
• Hands-on experience with Spark, Hadoop, or MapReduce.
• Deep understanding of data modeling, relational and NoSQL databases, and data partitioning strategies.

Preferred Qualifications:
• Experience implementing data security, privacy, and governance frameworks.
• Familiarity with CI/CD pipelines, Azure DevOps, and containerization tools.
• Strong troubleshooting, performance tuning, and analytical skills.
• Excellent communication and documentation skills (Visio, Lucidchart, etc.).
Skills: Azure Data Engineering • Databricks • Spark • Python • SQL • Data Modeling • ETL • Data Lake/Warehouse • Cloud Architecture • Performance Optimization • Communication & Leadership