#261 Jonathan Frankle: How Databricks is Disrupting AI Model Training


About this listen

This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today’s innovative AI tech companies who upgraded to OCI…and saved.

Try OCI for free at http://oracle.com/eyeonai


What if you could fine-tune an AI model without any labeled data—and still outperform traditional training methods?

In this episode of Eye on AI, we sit down with Jonathan Frankle, Chief Scientist at Databricks and co-founder of MosaicML, to explore TAO (Test-time Adaptive Optimization)—Databricks’ breakthrough tuning method that’s transforming how enterprises build and scale large language models (LLMs).

Jonathan explains how TAO uses reinforcement learning and synthetic data to train models without the need for expensive, time-consuming annotation. We dive into how TAO compares to supervised fine-tuning, why Databricks built their own reward model (DBRM), and how this system allows for continual improvement, lower inference costs, and faster enterprise AI deployment.
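
As a rough illustration of the loop Jonathan describes, here is a minimal sketch in Python. The `generate` and `reward` callables are hypothetical stand-ins for the model being tuned and a reward model such as DBRM; this is not Databricks' actual implementation, just the best-of-n idea under those assumptions: sample several candidate responses for each unlabeled prompt, score them with the reward model, and keep the top-scoring one as synthetic training data.

```python
from typing import Callable, List, Tuple

def build_synthetic_dataset(
    prompts: List[str],
    generate: Callable[[str, int], List[str]],  # tuned model: prompt -> n candidate responses
    reward: Callable[[str, str], float],        # reward model (e.g. DBRM): (prompt, response) -> score
    n_candidates: int = 8,
) -> List[Tuple[str, str]]:
    """Best-of-n selection: no human labels, only reward-model scores."""
    dataset = []
    for prompt in prompts:
        # Sample multiple candidates and keep the one the reward model rates highest.
        candidates = generate(prompt, n_candidates)
        best = max(candidates, key=lambda r: reward(prompt, r))
        dataset.append((prompt, best))
    return dataset
```

The resulting (prompt, best response) pairs can then feed a reinforcement-learning or fine-tuning step, which is how a model can keep improving without any manual annotation.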

Whether you're an AI researcher, enterprise leader, or someone curious about the future of model customization, this episode will change how you think about training and deploying AI.


Explore the latest breakthroughs in data and AI from Databricks: https://www.databricks.com/events/dataaisummit-2025-announcements


Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
