• 14. External data - use with care
    Nov 12 2024

    Spoken (by a human) version of this article.

    Banks and insurers are increasingly using external data; using it beyond its intended purpose can be risky (e.g. discriminatory).

    Emerging regulations and regulatory guidance emphasise the need for active oversight by boards and senior management to ensure responsible use of external data.

    Keeping the customer top of mind, asking the right questions, and focusing on the intended purpose of the data can help reduce the risk.

    Law and guidance mentioned in the article:

    • Colorado's External Consumer Data and Information Sources (ECDIS) law
    • New York's proposed circular letter

    About this podcast

    A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.

    Hosted by Yusuf Moolla.
    Produced by Risk Insights (riskinsights.com.au).

    7 mins
  • 13. Bridging the purpose-risk gap: Customer-first algorithmic risk assessments
    Nov 5 2024

    Spoken (by a human) version of this article.

    Banks and insurers sometimes lose sight of their customer-centric purpose when assessing AI/algorithm risks, focusing instead on regular business risks and regulatory concerns.

    Regulators are noticing this disconnect.

    This article aims to outline why the disconnect happens and how we can fix it.

    Report mentioned in the article: ASIC, REP 798 Beware the gap: Governance arrangements in the face of AI innovation.

    7 mins
  • 12. Risk-Focused Principles for Change Control in Algorithmic Systems
    Oct 29 2024

    Spoken (by a human) version of this article.

    With algorithmic systems, a change can trigger a cascade of unintended consequences, potentially compromising fairness, accountability, and public trust.

    So, managing changes is important. But if you use the wrong framework, your change control process may tick the boxes while being both ineffective and inefficient.

    This article outlines a potential solution: a risk-focused, principles-based approach to change control for algorithmic systems.

    Resource mentioned in the article: ISA 315 guideline for general IT controls.

    12 mins
  • 11. Deprovisioning User Access to Maintain Algorithm Integrity
    Oct 22 2024

    Spoken (by a human) version of this article.

    The integrity of algorithmic systems goes beyond accuracy and fairness.

    In Episode 4, we outlined 10 key aspects of algorithm integrity.

    Number 5 in that list (not in order of importance) is Security: the algorithmic system needs to be protected from unauthorised access, manipulation and exploitation.

    In this episode, we explore one important sub-component of this: deprovisioning user access.

    Link from article: U.S. Cybersecurity and Infrastructure Security Agency (CISA) advisory.

    9 mins
  • 10. Fairness reviews: identifying essential attributes
    Oct 15 2024

    Spoken (by a human) version of this article.

    When we're checking for fairness in our algorithmic systems (including processes, models, rules), we often ask:

    What are the personal characteristics or attributes that, if used, could lead to discrimination?

    This article provides a basic framework for identifying and categorising these attributes.

    7 mins
  • 9. Algorithmic Integrity: Don't wait for legislation
    Oct 8 2024

    Spoken (by a human) version of this article.

    Legislation isn't the silver bullet for algorithmic integrity.

    Is it useful? Sure. It helps provide clarity and can reduce ambiguity. And once a law is passed, we must comply.

    However:

    • existing legislation may already apply
    • new algorithm-focused laws can be too narrow or quickly outdated
    • standards can be confusing, and may not cover what we need
    • "best practice" frameworks help, but they're not always the best (and there are several, so they can't all be "best").

    In short, these instruments are helpful.

    But we need to know what we're getting: what they cover, and what they don't.

    11 mins
  • 8. A Balanced Focus on New and Established Algorithms
    Oct 1 2024

    Spoken (by a human) version of this article.

    Even in discussions among AI governance professionals, there seems to be a silent “gen” before AI.

    With the rapid progress - or rather prominence - of generative AI capabilities, they have taken centre stage.

    Amidst this excitement, we mustn't lose sight of the established algorithms and data-enabled workflows driving core business decisions. These range from simple rules-based systems to complex machine learning models, each playing a crucial role in our operations.

    In this episode, we'll examine why we need to keep an eye on established algorithmic systems, and how.

    9 mins
  • 7. Postcodes: Hidden Proxies for Protected Attributes
    Sep 24 2024

    Spoken (by a human) version of this article.

    In a previous article, we discussed algorithmic fairness, and how seemingly neutral data points can become proxies for protected attributes.

    In this article, we'll explore a concrete example of a proxy used in insurance and banking algorithms: postcodes.

    We've used Australian terminology and data, but the concept applies in most countries.

    Using Australian Bureau of Statistics (ABS) Census data, the article demonstrates how postcodes can serve as hidden proxies for gender, disability status and citizenship.

    12 mins