The AI Fundamentalists

By: Dr. Andrew Clark & Sid Mangalik
  • Summary

  • A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.

    © 2024 The AI Fundamentalists
Episodes
  • Model documentation: Beyond model cards and system cards in AI governance
    Nov 9 2024

    What if the secret to successful AI governance lies in understanding the evolution of model documentation? In this episode, our hosts challenge the common belief that model cards marked the start of documentation in AI. We explore model documentation practices, from their crucial beginnings in fields like finance to their adaptation in Silicon Valley. Our discussion also highlights the important role of early modelers and statisticians in advocating for a complete approach that includes the entire model development lifecycle.

    Show Notes

    Model documentation origins and best practices (1:03)

    • Documenting a model is a comprehensive process that requires giving users and auditors clear understanding:
      • Why was the model built?
      • What data goes into a model?
      • How is the model implemented?
      • What does the model output?


    Model cards - pros and cons (7:33)

    • “Model Cards for Model Reporting,” Association for Computing Machinery
    • Evolution from this research to Google's definition to today
    • How the market perceives them vs. what they are
    • Why the analogy “nutrition labels for models” needs a closer look


    System cards - pros and cons (12:03)

    • To their credit, OpenAI system cards go some way toward bridging the gap between proper model documentation and a model card.
    • They contain detailed descriptions of evaluation methodologies along with results; extra points for reporting red-teaming results
    • They represent third-party opinions on the social and ethical implications of releasing the model


    Automating model documentation with generative AI (17:17)

    • Finding the right balance of automation within a strong governance strategy
    • Generative AI can assist with editing and personal workflow


    Improving documentation for AI governance (23:11)

    • As the model expert, engage from the beginning by writing the bulk of the model documentation by hand.
    • The exercise of documenting your models solidifies your understanding of the model's goals, values, and methods for the business

    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
    28 mins
  • New paths in AI: Rethinking LLMs and model risk strategies
    Oct 8 2024

    Are businesses ready for large language models as a path to AI? In this episode, the hosts reflect on the past year of what has changed and what hasn’t changed in the world of LLMs. Join us as we debunk the latest myths and emphasize the importance of robust risk management in AI integration. The good news is that many decisions about adoption have forced businesses to discuss their future and impact in the face of emerging technology. You won't want to miss this discussion.

    • Intro and news: The veto of California's AI Safety Bill (00:00:03)
      • Can state-specific AI regulations really protect consumers, or do they risk stifling innovation? (Gov. Newsom's response)
      • Veto highlights the critical need for risk-based regulations that don't rely solely on the size and cost of language models
      • Arguments to be made for a cohesive national framework that ensures consistent AI regulation across the United States
    • Are businesses ready to embrace large language models, or are they underestimating the challenges? (00:08:35)
      • The myth that acquiring a foundational model is a quick fix for productivity woes
      • The essential role of robust risk management strategies, especially in sensitive sectors handling personal data
      • Review of model cards, OpenAI's system cards, and the importance of thorough testing, validation, and stricter regulations to prevent a false sense of security
      • Transparency alone is not enough; objective assessments are crucial for genuine progress in AI integration
    • From hallucinations in language models to ethical energy use, we tackle some of the most pressing problems in AI today (00:16:29)
      • Reinforcement learning with annotators and the controversial use of other models for review
      • Yann LeCun's energy-based models and retrieval-augmented generation (RAG) offer intriguing alternatives that could reshape modeling approaches
    • The ethics of advancing AI technologies, consider the parallels with past monumental achievements and the responsible allocation of resources (00:26:49)
      • There is good news about developments and lessons learned from LLMs; but there is also a long way to go.
      • Our original prediction for LLMs in episode 2 still rings true: “Reasonable expectations of LLMs: Where truth matters and risk tolerance is low, LLMs will not be a good fit”
      • With increased hype and awareness from LLMs came varying levels of interest in how all model types and their impacts are governed in a business.


    40 mins
  • Complex systems: What data science can learn from astrophysics with Rachel Losacco
    Sep 4 2024

    Our special guest, astrophysicist Rachel Losacco, explains the intricacies of galaxies, modeling, and the computational methods that unveil their mysteries. She shares stories about how advanced computational resources enable scientists to decode galaxy interactions over millions of years with true-to-life accuracy. Sid and Andrew discuss transferable practices for building resilient modeling systems.

    • Prologue: Why it's important to bring stats back [00:00:03]
      • Announcement from the American Statistical Association (ASA): Data Science Statement Updated to Include “and AI”
    • Today's guest: Rachel Losacco [00:02:10]
      • Rachel is an astrophysicist who has worked with major galaxy-formation simulations for many years. She hails from Leiden University and the University of Florida. As a Senior Data Scientist, she now works on road-safety modeling.
    • Defining complex systems through astrophysics [00:02:52]
      • Discussion about origins and adoption of complex systems
      • Difficulties with complex systems: Nonlinearity, chaos and randomness, collective dynamics and hierarchy, and emergence.
    • Complexities of nonlinear systems [00:08:20]
      • Linear models (least squares, GLMs, linear SVMs) can be incredibly powerful, but they cannot model all possible functions (e.g., a decision boundary of concentric circles)
      • Non-linearity and how it exists in the natural world
    • Chaos and randomness [00:11:30]
      • Enter references to Jurassic Park and The Butterfly Effect
      • “In universe simulations, a change to a single parameter can govern if entire galaxy clusters will ever form” - Rachel
    • Collective dynamics and hierarchy [00:15:45]
      • Interactions between agents don't occur globally and are often mediated through effects that only happen at specific sub-scales
      • Adaptation: components of systems breaking out of linear relationships between inputs and outputs to better serve the function of the greater system
    • Emergence and complexity [00:23:36]
      • New properties arise from the system that cannot be explained by the base rules governing the system
    • Examples in astrophysics [00:24:34]
      • These difficulties are parts of solving previously impossible problems
      • Consider this lecture from IIT Delhi on Complex Systems to get a sense of what is required to study and formalize a complex system and its collective dynamics (https://www.youtube.com/watch?v=yJ39ppgJlf0)
    • Consciousness and reasoning from a new point of view [00:31:45]
      • Non-linearity, hierarchy, feedback loops, and emergence may be ways to study consciousness. The brain is a complex system that a simple set of rules cannot fully define.
      • See: Brain modeling from scratch of C. elegans
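
    The concentric-circles limitation mentioned in the nonlinear-systems segment can be sketched in a few lines. This is a minimal illustration (not from the episode) using scikit-learn's `make_circles`: a linear classifier cannot separate two classes arranged in nested rings, while a nonlinear (RBF-kernel) model can.

    ```python
    # Sketch: linear vs. nonlinear models on a concentric-circles dataset.
    # A linear decision boundary cannot separate nested rings; an RBF kernel can.
    from sklearn.datasets import make_circles
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    # Two classes: an inner ring and an outer ring.
    X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

    linear = LogisticRegression().fit(X, y)       # straight-line boundary
    nonlinear = SVC(kernel="rbf").fit(X, y)       # curved boundary via kernel trick

    print(f"linear accuracy:    {linear.score(X, y):.2f}")   # near chance
    print(f"nonlinear accuracy: {nonlinear.score(X, y):.2f}")
    ```

    The linear model hovers near 50% accuracy because no straight line separates the rings; the kernel SVM learns the circular boundary directly.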



    41 mins
