Targeting AI

By: TechTarget Editorial
  • Summary

  • Hosts Shaun Sutner, TechTarget News senior news director, and AI news writer Esther Ajao interview AI experts from the tech vendor, analyst and consultant community, academia and the arts, as well as enterprise AI users and advocates for data privacy and responsible use of AI. Topics are tied to news events in the AI world, but the episodes are intended to have a longer, more "evergreen" run; they are in-depth and somewhat long form, aiming for 45 minutes to an hour in duration. The podcast also occasionally hosts guests from inside TechTarget and its Enterprise Strategy Group and Xtelligent divisions, and includes news-oriented episodes in which Sutner and Ajao review the news.
    Copyright 2023 All rights reserved.
Episodes
  • AT&T's David C. Williams on how generative AI will force diversity in AI systems
    Sep 3 2024

    The growth of generative AI has put diversity front and center.

    In the last year, there have been concerns that GenAI systems such as ChatGPT and Google Gemini are not trained with enough diverse data sets.

    For instance, the Lensa app, introduced two years ago, let users, including people of color, generate avatars of themselves. Concerns arose, however, after some users said Lensa's generated images changed their skin color.

    Incidents with AI tools such as Lensa show that AI creators might not have enough diversity in their data sets.

    Conversely, there have also been incidents where AI systems clearly misrepresented diversity. Google, for example, shut down Gemini's image generator earlier this year after users generated inaccurate depictions of historical figures, such as images portraying well-known white people, including the Pope, as Black people.

    Google has since reopened the model. Last week, the cloud provider revealed that its new image generation model, Imagen 3, will be rolled out to Gemini. The model will produce images of people again but won't support generation of photorealistic, identifiable individuals.

    Despite the hiccup in the beginning stages of the technology, hope exists, said David C. Williams, assistant vice president of automation at AT&T.

    While Williams leads a team that previously used RPA, or robotic process automation, to drive business needs at AT&T, the team is now pivoting to generative AI. The shift has given Williams a view of how GenAI could affect diversity.

    "Generative AI is going to force diversity," Williams said on the latest Targeting AI episode.

    Cloud providers such as Google must include diversity in their data sets, he continued, because failing to do so risks alienating people of color. If creators of these systems don't show representation, many people of color may simply stop using the systems, which won't help those providers' business.

    On the other hand, people of color and women will gain new opportunities because of generative AI.

    "Those that embrace generative AI and figure out how to use it in the workplace will have an incredibly different value proposition than the rest," Williams said.

    Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

    28 mins
  • Generative AI fuels growth of online deepfakes threatening organizations and election integrity
    Aug 19 2024

    The growth of deepfakes in the past few years is a threat not only to organizations but also to the U.S. general election in November.

    Information security vendor Pindrop saw a sharp rise in deepfakes in the first few months of the year compared to the previous year.

    Deepfakes of Vice President Kamala Harris, former President Donald Trump, President Joe Biden and state-level candidates have circulated in the runup to the November U.S. general election.

    "Last year, we were seeing about one deepfake every single month," Vijay Balasubramaniyan, co-founder and CEO at Pindrop, said on the Targeting AI podcast. "Starting this year ... we started seeing a deepfake every single day across every single customer."

    A big reason for the stark increase is the growth of generative AI systems and voice cloning apps. Meanwhile, many people can't distinguish between a deepfake voice and an authentic one.

    While about 120 voice cloning apps were on the market last year, users this year (both legitimate and illegitimate) can choose among more than 350.

    Moreover, Balasubramaniyan said, fraudsters are using generative AI technology to scale their attacks.

    For example, generative AI systems can create deepfakes in many different languages -- a series of large language models from Meta can translate some 4,000 languages. Fraudsters can use these systems to create deepfakes that respond to questions based on the words spoken.

    "They have managed to scale their attacks in massive ways, and in ways that we have not seen before generative AI. We're seeing that now," Balasubramaniyan said.

    The rapid advancement of deepfake technology means organizations must remain aware and vigilant, said Harman Kaur, vice president of AI at Tanium, on the podcast. Tanium is a cybersecurity and management vendor based in Kirkland, Wash.

    "You have to have a plan to respond," Kaur said. "Do you have the tools to understand what type of threat has been invited into your network, and do you have the tools to fix it?"

    Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. Together, they host the Targeting AI podcast series.

    42 mins
  • Examining the tech stances of Kamala Harris and Donald Trump, plus J.D. Vance
    Aug 5 2024

    Democratic presidential candidate Kamala Harris is a product of two decades of California politics who has longstanding ties to the tech and AI communities in her home state.

    But in her role as President Joe Biden's vice president during the past four years, Harris was tasked with overseeing Biden's executive order on AI, with its emphasis on government regulation. And it was she who hosted leaders of tech giants at the White House last year and secured pledges from them to focus on AI safety.

    In sharp contrast is the GOP presidential nominee, Donald Trump.

    While Trump's running mate, Senator J.D. Vance (R-Ohio), has a background in tech venture capital, Trump himself has no tech experience but backs a largely hands-off approach to tech and AI companies.

    In simple terms, Trump is anti-regulation, while Harris favors a moderate regulatory stance on big tech and the suddenly emergent generative AI sector, a view that roughly parallels that of Biden.

    In this episode of the Targeting AI podcast from TechTarget Editorial, three commentators on the confluence of tech, AI and politics offered their analyses of the complex dynamics of the likely Harris-Trump faceoff.

    Makenzie Holland, big tech and federal regulation senior news writer at TechTarget, emphasized that "there is a huge focus from the Biden-Harris administration on AI safety and trustworthiness."

    Meanwhile, "we've obviously seen Trump attack the executive order," she noted.

    For R "Ray" Wang, founder and CEO of Constellation Research, the choice for the tech industry is fairly clear.

    "I stress the libertarian view because I think that's important to understand that tech doesn't necessarily want to be governed," Wang said.

    The other guest on the podcast, Darrell West, a senior fellow in the Governance Studies program at the Brookings Institution, has authored a book about policymaking in the AI era. He also pointed out the marked divergence of Harris and Trump on tech and AI issues.

    "Even though she historically has been close to the tech sector, I actually think she will maintain Biden's tough line on a lot of issues because that's where the party is these days," West said. "And also that's where public opinion is on many tech issues."

    Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.

    1 hr and 6 mins
