Raas Infotek

Data Lead

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Lead with a 12-month contract, remote work location, offering competitive pay. Requires 10+ years in IT, 4+ years in data engineering, expertise in Azure tools, strong SQL skills, and a degree in a related field.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Jenkins #Security #Metadata #Spark SQL #Data Orchestration #Data Quality #Databricks #Big Data #Redshift #Batch #Data Management #Data Modeling #Data Lifecycle #Azure DevOps #Data Governance #Amazon Redshift #Azure Data Factory #GitHub #Monitoring #OBIEE (Oracle Business Intelligence Enterprise Edition) #ADF (Azure Data Factory) #SQL (Structured Query Language) #Data Processing #Tableau #Visualization #Apache Spark #Data Architecture #Scala #Snowflake #ETL (Extract, Transform, Load) #Python #Azure Event Hubs #Data Science #Spark (Apache Spark) #Synapse #Documentation #Azure #Data Engineering #Data Lake #Kafka (Apache Kafka) #DevOps #Microsoft Power BI #Data Ingestion #Data Warehouse #PySpark #Azure SQL #Computer Science #Talend #BI (Business Intelligence) #Informatica #Deployment #Data Pipeline #Azure Databricks
Role description
Only USC/GC for W2
Position: Data Lead
Location: Remote
Contract

Job Description:
• 10+ years of IT experience in a development/architecture role.
• 4+ years of proven experience in data engineering and architecture roles (with Azure Data Factory, Azure Databricks & PySpark, Azure Synapse, and Azure SQL).

Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Skill Requirements:
• Strong experience with ETL/ELT tools such as ADF, Informatica, and Talend, and data warehousing technologies such as Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, and Google BigQuery.
• Strong hands-on experience with Azure Data Factory (ADF) for data orchestration (building and managing pipelines), Azure Databricks for big data processing and analytics, and Apache Spark for distributed data processing.
• Adept with big data tools (Databricks, Spark, etc.).
• Experience with Power BI, Tableau, OBIEE, etc.
• Proficiency in PySpark, Python, and Spark SQL.
• Experience with CI/CD pipelines for data platforms.
• Solid understanding of data warehouse best practices, development standards, and methodologies.
• Strong expertise in SQL and relational data modeling.
• Experience designing data lakes, data warehouses, or Lakehouse architectures.
• Good understanding of data governance, metadata management, and security best practices.
• Solid understanding of distributed data processing and performance tuning.
• Experience with ETL/ELT patterns and best practices.
• Strong analytical and problem-solving skills and the ability to work in a fast-paced, dynamic environment.
• Excellent communication and documentation skills.

Key Responsibilities:
• Design and implement end-to-end data architecture for batch and real-time data processing, delivering enterprise-grade data solutions using Azure and Microsoft Fabric.
• Design, build, and optimize ETL/ELT pipelines for ingestion, transformation, and publishing using ADF, Spark, and Databricks.
• Architect and optimize data pipelines using Azure Data Factory (ADF).
• Develop and manage scalable data processing frameworks using Databricks and Apache Spark.
• Develop efficient data transformation logic using PySpark and Spark SQL.
• Build reusable, high-performance data models for analytics and reporting.
• Develop and maintain data ingestion, transformation, and orchestration workflows.
• Ensure data quality, consistency, security, and governance across platforms.
• Define and enforce data engineering best practices, including CI/CD, versioning, testing, and monitoring.
• Mentor and guide data engineers, fostering a culture of innovation and excellence.
• Collaborate with data engineers, analysts, data scientists, and business stakeholders.
• Evaluate and integrate new tools and technologies to enhance data capabilities.
• Own the data lifecycle, from ingestion and transformation to consumption and visualization.
• Support production deployments, monitoring, and troubleshooting of data pipelines.
• Document data architecture, processes, and best practices.

Nice to Have:
• DevOps & CI/CD: Azure DevOps, GitHub Actions, Jenkins
• Real-time streaming (Azure Event Hubs, Kafka), Microsoft Fabric

--
Thanks,
Prashant Bansal
Raas Infotek Corporation
262 Chapman Road, Suite 105A, Newark, DE 19702
Phone: 302-565-0188 Ext: 144
Email: Prashant.bansal@raasinfotek.com
Website: raasinfotek.com