irResponsible AI

By: Upol Ehsan & Shea Brown
  • Summary

  • Welcome to irResponsible AI, a series where you find out how NOT to end up in the New York Times headlines for all the wrong reasons!

    💡 Why are we doing this? As experts, we are tired of the boring “mainstream corporate” RAI communication. Here, we give it to you straight.

    ⁉️ Why call it irResponsible AI? Responsible AI exists because of irresponsible AI. Knowing what NOT to do can be, at times, more actionable than knowing what to do.

    🎙️Who are your hosts? Why should you even bother to listen?
    Upol Ehsan makes AI systems explainable & responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are the hosts’ personal opinions.

    Follow for more Responsible AI:

    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    © 2024 irResponsible AI
Episodes
  • 🔥 Generative AI Use Cases: What's Legit and What's Not? | irResponsible AI EP6S01
    Jul 29 2024

    Got questions or comments or topics you want us to cover? Text us!

    In this episode of irResponsible AI, we discuss
    ✅ GenAI is cool, but do you really need it for your use case?
    ✅ How can companies end up doing irresponsible AI by using GenAI for the wrong use cases?
    ✅ How can we get out of this problem?

    What can you do?
    🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI and the occasional sh*tposting:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    CHAPTERS:
    00:00 - Introduction
    01:28 - Misuse of Generative AI
    02:27 - The glue example from Google's GenAI
    03:18 - The Challenge of Public Trust and Misinformation
    03:45 - Why is this a serious problem?
    04:49 - Why should businesses worry about it?
    05:32 - Auditing Generative AI Systems and Liability Risks
    07:18 - Why is this GenAI hype happening?
    09:20 - Competitive Pressure and Funding Influence
    14:29 - How to avoid failure: investing in Problem Understanding
    14:48 - Good use cases of GenAI
    17:05 - LLMs are only useful if you know the answer
    17:30 - Text-based video editing as a good example
    21:40 - Need for GenAI literacy amongst tech execs
    23:30 - Takeaways


    #ResponsibleAI #ExplainableAI #podcasts #aiethics

    Support the Show.

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    27 mins
  • 🎯 Outsider's Guide to AI Risk Management Frameworks: NIST Generative AI | irResponsible AI EP5S01
    Jun 4 2024

    Got questions or comments or topics you want us to cover? Text us!

    In this episode we discuss AI Risk Management Frameworks (RMFs) focusing on NIST's Generative AI profile:
    ✅ Demystify misunderstandings about AI RMFs: what they are for, what they are not for
    ✅ Unpack challenges of evaluating AI frameworks
    ✅ Explain how inert knowledge in frameworks needs to be activated through processes and user-centered design to bridge the gap between theory and practice

    What can you do?
    🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI and the occasional sh*tposting:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    CHAPTERS:
    00:00 - What will we discuss in this episode?
    01:22 - What are AI Risk Management Frameworks?
    03:03 - Understanding NIST's Generative AI Profile
    04:00 - What's the difference between NIST's AI RMF and the GenAI Profile?
    08:38 - What are other equivalent AI RMFs?
    10:00 - How do we engage with AI Risk Management Frameworks?
    14:28 - Evaluating the Effectiveness of Frameworks
    17:20 - Challenges of Framework Evaluation
    21:05 - Evaluation Metrics are NOT always quantitative
    22:32 - Frameworks are inert: they need to be activated
    24:40 - The Gap in Implementing a Framework in Practice
    26:45 - User-centered Design solutions to address the gap
    28:36 - Consensus-based framework creation is a chaotic process
    30:40 - A tip for small businesses to amplify their profile in RAI
    31:30 - Takeaways


    #ResponsibleAI #ExplainableAI #podcasts #aiethics

    Support the Show.

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    35 mins
  • 🧐 Responsible AI is NOT the icing on the cake | irResponsible AI EP4S01
    Jun 4 2024

    Got questions or comments or topics you want us to cover? Text us!

    In this episode filled with hot takes, Upol and Shea discuss three things:
    ✅ How the Gemini Scandal unfolded
    ✅ Is Responsible AI too woke? Or is there a hidden agenda?
    ✅ What companies can do to address such scandals

    What can you do?
    🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.

    🎙️Who are your hosts and why should you even bother to listen?
    Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.

    Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

    All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.

    Follow us for more Responsible AI and the occasional sh*tposting:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    CHAPTERS:
    00:00 - Introduction
    01:25 - How the Gemini Scandal unfolded
    05:30 - Selective outrage: hidden social justice warriors?
    07:44 - Should we expect Generative AI to be historically accurate?
    11:53 - Responsible AI is NOT the icing on the cake
    14:58 - How Google and other companies should respond
    16:46 - Immature Responsible AI leads to irresponsible AI
    19:54 - Is Responsible AI too woke?
    22:00 - Identity politics in Responsible AI
    23:21 - What can tech companies do to solve this problem?
    26:43 - Responsible AI is a process, not a product
    28:54 - The key takeaways from the episode

    #ResponsibleAI #ExplainableAI #podcasts #aiethics

    Support the Show.

    What can you do?
    🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!

    Follow us for more Responsible AI:
    Upol: https://twitter.com/UpolEhsan
    Shea: https://www.linkedin.com/in/shea-brown-26050465/

    31 mins
