Senior Data Consultant

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a 6-month Senior Data Consultant contract paying $50.00 - $70.00 per hour. Required skills include 5+ years of experience in data modeling, expertise in Cogndum and Treeno, and proficiency in Databricks, SQL, Python, and Java.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
560
🗓️ - Date discovered
September 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Columbus, OH
🧠 - Skills detailed
#Datasets #Complex Queries #Deployment #Model Deployment #Cloud #ETL (Extract, Transform, Load) #Data Lake #Documentation #Java #Data Analysis #SQL (Structured Query Language) #S3 (Amazon Simple Storage Service) #Databricks #Scala #Data Modeling #Python #SQL Queries #Programming #Jira #AWS (Amazon Web Services) #AI (Artificial Intelligence) #Compliance #ML (Machine Learning) #Data Engineering #Agile #Data Processing #Jupyter #Data Wrangling #Redshift #Data Management #Security #Tableau #Data Science #Alteryx #Data Governance #Data Pipeline #Lambda (AWS Lambda)
Role description
Job Responsibilities
We are building a modern data system to support enterprise-wide data management, analytics, and advanced reporting capabilities. The system will leverage data modeling, data lakes, and data science workflows to ensure scalability, reliability, and actionable insights. This role involves working with cutting-edge technologies such as Cogndum, Treeno, Databricks, and modern data science platforms.

Responsibilities
- Design, develop, and maintain data models using Cogndum and Treeno, ensuring they align with business and technical requirements.
- Build scalable data pipelines and ETL workflows using Databricks to ingest, process, and transform large datasets.
- Architect and manage data lakes for structured and unstructured data, ensuring data availability, integrity, and governance.
- Write and optimize SQL queries for data analysis, reporting, and integration across multiple systems.
- Develop and implement Python-based data science workflows, including preprocessing, modeling, and deployment of ML solutions.
- Use Java for back-end integrations, APIs, or high-performance data processing where required.
- Partner with business stakeholders, data scientists, and engineers to translate requirements into technical solutions.
- Ensure compliance with data governance, security, and regulatory standards.
- Prepare and maintain clear documentation of data models, workflows, and processes.

Required Skills & Qualifications
- 5+ years of experience in data modeling, data engineering, or data analytics.
- Hands-on expertise in data modeling using Cogndum and Treeno.
- Strong experience with Databricks for building pipelines, transformations, and analytics.
- Proficiency in data lake architecture and implementation.
- Advanced knowledge of SQL for complex queries, tuning, and reporting.
- Strong programming experience with Python (data wrangling, ML workflows) and Java (back-end integration, processing).
- Experience in data science workflows, including exploratory data analysis (EDA), feature engineering, and model integration.
- Excellent problem-solving, analytical, and communication skills.

Nice to Have
- Experience working with AWS cloud services (S3, Redshift, Glue, EMR, Lambda).
- Familiarity with Jupyter Notebooks for prototyping, data analysis, and collaborative development.
- Exposure to AI/ML projects and model deployment pipelines.
- Experience with agile methodologies and tools such as Jira/Confluence.

Job Type: Contract
Pay: $50.00 - $70.00 per hour
Benefits:
- 401(k)
- Dental insurance
- Health insurance
- Vision insurance
Experience:
- Alteryx: 3 years (Required)
- Tableau: 3 years (Required)
- SQL: 3 years (Required)
Work Location: In person