TechPerm Incorporated

Data Scientist Databricks SME

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Scientist Databricks SME, offering a contract of unspecified length with a pay rate of "[Pay range or salary or compensation]." Key skills include expert-level Python, Apache Spark, SQL, and AI foundations. A master's or PhD in a quantitative field and 5+ years of relevant experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 16, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Gwynn Oak, MD
-
🧠 - Skills detailed
#TensorFlow #Datasets #DeepLearning #Cloud #MLflow #Spark (Apache Spark) #Statistics #NumPy #SQL (Structured Query Language) #DataProcessing #DataManipulation #DistributedComputing #Scala #GCP (Google Cloud Platform) #Pandas #PyTorch #BigData #Databricks #ML (Machine Learning) #DataScience #ComputerScience #PySpark #Azure #AWS (Amazon Web Services) #NLP (Natural Language Processing) #DataAnalysis #Python #ApacheSpark #ETL (Extract, Transform, Load) #AI (Artificial Intelligence)
Role description
About the Company
We are seeking a highly skilled Data Scientist with deep AI expertise to join our team. You will leverage the Databricks Data Intelligence Platform to design, build, and deploy advanced machine learning models and AI-driven solutions. This role is ideal for a Python expert who thrives at the intersection of big data, distributed computing, and cutting-edge artificial intelligence.

About the Role
This role involves leveraging advanced machine learning models and AI-driven solutions to address complex business problems.

Responsibilities
• AI/ML Model Development: Design and train machine learning models using various algorithms, including deep learning, NLP, and computer vision.
• Databricks Orchestration: Build and optimize end-to-end AI/ML pipelines on Databricks, utilizing Unity Catalog for governance and MLflow for experiment tracking.
• Generative AI & LLMs: Implement advanced AI patterns such as Retrieval-Augmented Generation (RAG) and fine-tune pre-trained models for specific enterprise tasks.
• Python Expertise: Write production-quality, idiomatic PySpark and Python code that leverages Spark's distributed nature.
• Collaboration: Partner with Engineering and Product teams to translate business problems into scalable analytical solutions.
• Insight Extraction: Perform exploratory data analysis (EDA) and extract meaningful insights from massive, complex datasets to drive strategic decisions.

Qualifications
• Education: MS or PhD in a quantitative field such as Computer Science, Statistics, or Math.
• Experience: 5+ years of hands-on experience in data science or AI engineering in high-growth environments.

Required Skills
• Technical Proficiency: Expert-level Python (pandas, NumPy, scikit-learn, PySpark).
• Extensive experience with Apache Spark for large-scale data processing.
• Proficiency in SQL for data manipulation and querying in Lakehouse environments.
• AI Foundations: Strong understanding of statistics, probability, and advanced ML lifecycle management (MLOps).

Preferred Skills
• Experience with Deep Learning frameworks (TensorFlow, PyTorch).
• Familiarity with Cloud Platforms (AWS, Azure, or GCP) and their native data services.
• Databricks certifications, such as Databricks Certified Machine Learning Professional.

Pay range and compensation package
[Pay range or salary or compensation]

Equal Opportunity Statement
We are committed to diversity and inclusivity.
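As a rough illustration of the pandas/NumPy/scikit-learn workflow the required skills describe (EDA followed by model training and evaluation), here is a minimal sketch. The dataset, column names, and churn task are entirely hypothetical stand-ins; in the actual role this kind of logic would typically run as PySpark on Databricks with MLflow experiment tracking rather than in-memory pandas.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical synthetic data standing in for a Lakehouse table.
rng = np.random.default_rng(42)
n = 1_000
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, n),
    "monthly_spend": rng.normal(100, 25, n),
})
# The label correlates with spend, so the model has real signal to learn.
df["churned"] = (df["monthly_spend"] + rng.normal(0, 10, n) < 90).astype(int)

# Quick EDA: class balance and per-feature summary statistics.
print(df["churned"].value_counts(normalize=True))
print(df.describe())

# Stratified holdout split, then a simple baseline classifier.
X_train, X_test, y_train, y_test = train_test_split(
    df[["tenure_months", "monthly_spend"]], df["churned"],
    test_size=0.2, random_state=0, stratify=df["churned"])

model = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"holdout accuracy: {acc:.3f}")
```

In practice the split, model choice, and metrics would be logged per run (the MLflow experiment-tracking responsibility above) so that experiments stay reproducible and comparable.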