Bigdata Technical Lead Level 3 | Cincinnati, OH, or Charlotte, NC

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Bigdata Technical Lead Level 3 in Cincinnati, OH, or Charlotte, NC, offering a contract-to-hire position at $80/hr. Requires 10+ years in data engineering, expertise in Azure Databricks, and strong leadership skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
πŸ—“οΈ - Date discovered
May 21, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Contract-to-hire
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Cincinnati, OH
-
🧠 - Skills detailed
#Agile #Azure Synapse Analytics #Python #Informatica #Azure Data Factory #SQL Queries #Automation #Microservices #Programming #ML (Machine Learning) #API (Application Programming Interface) #Security #Storage #Indexing #GraphQL #Data Architecture #Azure #ADF (Azure Data Factory) #Cloud #Data Science #Logging #Data Engineering #Data Integration #Azure SQL #"ETL (Extract, Transform, Load)" #Apache Spark #Oracle #Data Security #GDPR (General Data Protection Regulation) #REST (Representational State Transfer) #Delta Lake #Monitoring #Databricks #Synapse #Spark (Apache Spark) #Kubernetes #Big Data #SQL (Structured Query Language) #Terraform #DevOps #Data Access #Kanban #Data Processing #Scrum #Azure Databricks #Compliance #Data Privacy #Informatica PowerCenter #Scala #BI (Business Intelligence) #Leadership #Requirements Gathering #Data Governance #Deployment #Strategy #Data Lake #IICS (Informatica Intelligent Cloud Services) #Data Pipeline #Microsoft Azure #Docker #Migration
Role description
Bigdata Lead Level 3 (USC/GC). Proximity to our Cincinnati, OH, or Charlotte, NC, office is required, with the ability to work in a hybrid model (in-office Monday to Thursday).

Job Summary
We are seeking a seasoned Databricks Technical Lead to join our HR Systems Engineering team. This role is pivotal in enhancing the experience of over 420,000 associates by leading the design, build, and optimization of our data platform, services, APIs, and cloud migrations. As a product-centric role within an agile delivery framework, you will ensure our data solutions align with business objectives and deliver tangible value. Join us in an organization recognized as one of the best places to work in IT for seven consecutive years.

Required Qualifications
• 10+ years of experience in data engineering, big data, or API development, with at least 3+ years in a leadership role.
• Proven experience leading product-centric data engineering initiatives in an agile delivery environment.
• Expertise in Azure Databricks, Apache Spark, Azure SQL, and other Microsoft Azure services.
• Strong programming skills in Python, Scala, and SQL for data processing and API development.
• Experience building and managing APIs (REST, GraphQL, gRPC) and microservices.
• Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, and Delta Lake.
• Proficiency with CI/CD pipelines, Terraform/Bicep, and Infrastructure-as-Code.
• Experience with data security and compliance measures (e.g., encryption, access control, auditing) for sensitive HR and employee data.
• Strong problem-solving skills, with a focus on performance tuning, security, and cost optimization.
• Experience with containerization (Docker, Kubernetes) and event-driven architecture is a plus.
• Exposure to Informatica for ETL/ELT and data integration.
• Excellent communication and leadership skills in a fast-paced environment.

This is a contract-to-hire position; candidates must be prepared to transition to a full-time employee role.

Preferred Qualifications
• Microsoft Certified: Azure Solutions Architect Expert or Databricks Certified Data Engineer/Architect certification.
• Experience with agile development methodologies such as Scrum or SAFe.
• Familiarity with machine learning workflows in Azure Databricks.
• Knowledge of Azure API Management and Event Hub for API integration.
• Experience with Informatica PowerCenter or Informatica Intelligent Cloud Services (IICS).
• Hands-on experience with Oracle HCM database models and APIs, including integrating this data into enterprise data solutions.
• Experience with HR analytics.

Key Responsibilities

Core Responsibilities
• Develop and maintain the HR data domain in a secure, compliant, and efficient manner, in accordance with best practices.
• Lead a data engineering team responsible for designing scalable, high-performance data solutions, APIs, and microservices in Azure Databricks, Azure SQL, and Informatica.
• Ensure the highest levels of security and privacy for sensitive data.

This is a job for an exceptional professional who deeply understands big data processing, data architecture, cloud migrations, API development, data security, and agile methodologies in the Azure ecosystem.

Azure Databricks & Big Data Architecture
• Design and implement scalable data pipelines and architectures on Azure Databricks (a minimal sketch follows this list).
• Optimize ETL/ELT workflows, ensuring efficiency in data processing, storage, and retrieval.
• Leverage Apache Spark, Delta Lake, and Azure-native services to build high-performance data solutions.
• Ensure best practices in data governance, security, and compliance within Azure environments.
• Troubleshoot and fine-tune Spark jobs for optimal performance and cost efficiency.
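By way of illustration, here is a minimal sketch of the kind of Databricks pipeline described above: raw files landed in the lake are cleansed with Spark and appended to a governed Delta Lake table. The storage path, table name, and columns are hypothetical, and the sketch assumes a Databricks runtime where `spark` (a SparkSession) and Delta Lake are preconfigured.

```python
# Minimal ETL sketch for Azure Databricks. Paths, table, and columns are
# hypothetical; `spark` and Delta Lake come preconfigured on a Databricks cluster.
from pyspark.sql import functions as F

RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/hr/associates/"  # hypothetical
BRONZE_TABLE = "hr_bronze.associates"                                     # hypothetical

# Extract: read newly landed JSON files from the data lake.
raw_df = spark.read.json(RAW_PATH)

# Transform: light cleansing plus an audit column.
clean_df = (
    raw_df
    .dropDuplicates(["associate_id"])
    .withColumn("load_ts", F.current_timestamp())
)

# Load: append to a Delta table (ACID transactions, schema enforcement, time travel).
clean_df.write.format("delta").mode("append").saveAsTable(BRONZE_TABLE)

# Maintenance: compact small files to keep reads fast and costs down.
spark.sql(f"OPTIMIZE {BRONZE_TABLE}")
```

Delta Lake's transaction log makes the append safe to re-run, and OPTIMIZE addresses the small-file problem that drives up both latency and cost in Spark workloads.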
Azure SQL & Cloud Migration
• Lead the migration of Azure SQL to Azure Databricks, ensuring a seamless transition of data workloads.
• Design and implement scalable data pipelines to extract, transform, and load (ETL/ELT) data from Azure SQL into Databricks Delta Lake (see the extraction sketch after these responsibilities).
• Optimize Azure SQL queries and indexing strategies before migration to enhance performance in Databricks.
• Implement best practices for data governance, security, and compliance throughout the migration process.
• Work with Azure Data Factory (ADF), Informatica, and Databricks to automate and orchestrate migration workflows.
• Ensure seamless integration of migrated data with APIs, machine learning models, and business intelligence tools.
• Establish performance-monitoring and cost-optimization strategies post-migration to ensure efficiency.

API & Services Development
• Design and develop RESTful APIs and microservices for seamless data access and integrations.
• Implement scalable and secure API frameworks to expose data processing capabilities.
• Work with GraphQL, gRPC, or streaming APIs for real-time data consumption.
• Integrate APIs with Azure-based data lakes, warehouses, Oracle HCM, and other enterprise applications.
• Ensure API performance, monitoring, and security best practices (OAuth, JWT, Azure API Management).

HR Data Domain & Security
• Build and manage the HR data domain, ensuring a scalable, well-governed, and secure data architecture.
• Implement role-based access control (RBAC), encryption, and data masking to protect sensitive employee information.
• Ensure compliance with GDPR, CCPA, HIPAA, and other data privacy regulations.
• Design and implement audit logging and monitoring to track data access and modifications.
• Work closely with HR and security teams to define data retention policies, access permissions, and data anonymization strategies.
• Enable secure API and data-sharing mechanisms for HR analytics and reporting while protecting employee privacy.
• Work with Oracle HCM data structures and integrate them within the Azure Databricks ecosystem.

Product-Centric & Agile Delivery
• Drive a product-centric approach to data engineering, ensuring alignment with business objectives and user needs.
• Work within an agile delivery framework, leveraging Scrum/Kanban methodologies to ensure fast, iterative deployments.
• Partner with product managers and business stakeholders to define data-driven use cases and prioritize backlog items.
• Promote a continuous-improvement mindset, leveraging feedback loops and data-driven decision-making.
• Implement DevOps and CI/CD best practices to enable rapid deployment and iteration of data solutions.

Leadership & Collaboration
• Provide technical leadership and mentorship to a team of data engineers and developers.
• Collaborate closely with business stakeholders, product managers, HR teams, and architects to translate requirements into actionable data solutions.
• Advocate for automation, DevOps, and Infrastructure-as-Code (Terraform, Bicep) to improve efficiency.
• Foster a culture of innovation and continuous learning within the data engineering team.
• Stay updated on emerging trends in Azure Databricks, Azure SQL, Informatica, Oracle HCM, and cloud technologies.
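As a rough sketch of the Azure SQL extraction step referenced in the migration responsibilities above, the snippet below pulls one table over JDBC and lands it as a Delta table. The server, database, table, and secret-scope names are hypothetical; it assumes Databricks secret scopes hold the SQL credentials.

```python
# Sketch: snapshot an Azure SQL table into Delta Lake over JDBC.
# Server, database, table, and secret names below are hypothetical.
jdbc_url = (
    "jdbc:sqlserver://example-server.database.windows.net:1433;"
    "database=hr_db;encrypt=true;"
)

source_df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.employee_assignments")                 # hypothetical table
    .option("user", dbutils.secrets.get("hr-scope", "sql-user"))   # Databricks secret scope
    .option("password", dbutils.secrets.get("hr-scope", "sql-pass"))
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

# Land the snapshot as Delta; a validation job would compare row counts
# against the source before any cutover.
source_df.write.format("delta").mode("overwrite").saveAsTable("hr_bronze.employee_assignments")
```

In practice the orchestration around this step (scheduling, retries, watermarking for incremental loads) would live in ADF or Databricks Workflows rather than in the notebook itself.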
Ad Server Product Manager Level 2 | $80/hr max

Job Description
This team will play an integral part in advancing our advertising solutions by implementing machine learning models into our ad-serving process to better meet shopper and advertiser needs. This includes developing an automated, scalable pipeline that leverages machine learning algorithms, built alongside the Data Science and Engineering teams. This is a long-term effort with multiple phases, offering diverse opportunities for team members to learn and grow with the work. The systems involved are key revenue drivers for the organization, and team members will work in a collaborative environment while delivering impactful capabilities.

Key Responsibilities
• Develop and manage a short- and long-term roadmap and strategy to scale machine learning models in production through an automated pipeline.
• Understand and communicate business objectives and the current technical infrastructure to ensure deployments are scalable and stable.
• Outline customer/business problems and the technical challenges associated with the scope of work to aid discovery and development.
• Contribute to and lead discovery and inception sessions to assess the scope of work.
• Use data, business objectives, stakeholder feedback, and technical dependencies to prioritize work across deploying, monitoring, and maintaining the pipeline.
• Maintain stakeholder relationships that keep lines of communication open for sharing feedback, learnings, and objectives.
• Coordinate releases with core teams to ensure seamless, on-time deployments that meet requirements.
• Continuously monitor performance to ensure models continue to meet KPIs over time, adapting as needed (see the sketch below).
• Lead requirements gathering and backlog management alongside the development team.
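To make the KPI-monitoring responsibility concrete, here is a toy sketch of the kind of automated check a deployed ad-serving model might run against; the metric names and thresholds are invented for illustration.

```python
# Toy KPI gate for a deployed ad-serving model (names/thresholds are invented).
from dataclasses import dataclass

@dataclass
class KpiThresholds:
    min_ctr_lift: float = 0.02    # minimum click-through-rate lift vs. control
    max_latency_ms: float = 150.0 # p95 serving-latency budget

def model_meets_kpis(ctr_lift: float, p95_latency_ms: float,
                     t: KpiThresholds = KpiThresholds()) -> bool:
    """Return True if the production model still meets its KPIs."""
    return ctr_lift >= t.min_ctr_lift and p95_latency_ms <= t.max_latency_ms

if __name__ == "__main__":
    # Example reading: flag for retraining/rollback review when KPIs slip.
    if not model_meets_kpis(ctr_lift=0.015, p95_latency_ms=120.0):
        print("KPI breach: escalate to engineering and revisit the roadmap.")
```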