Episodes

  • Google head of product on generative AI strategy
    Sep 16 2024

    As one of the top cloud providers, Google Cloud also stands at the forefront of the generative AI market.

    Over the past two years, Google has been enmeshed in a push and pull with its chief competitors -- AWS, Microsoft and OpenAI -- in the race to dominate generative AI.

    Google has introduced a slate of new generative AI products in the past year, including its flagship proprietary large language model (LLM), Gemini, and the Vertex AI Model Garden. Last week, it also debuted Audio Overview, which turns documents into audio discussions.

    The tech giant has also faced criticism that it might be falling behind in generative AI, after stumbles such as the malfunctioning of its initial image generator.

    Part of Google's strategy with generative AI is not only providing the technology through its own LLMs and those of many other vendors in the Model Garden, but also constantly advancing generative AI, said Warren Barkley, head of product at Google for Vertex AI, GenAI and machine learning, on the Targeting AI podcast from TechTarget Editorial.

    "A lot of what we did in the early days, and we continue to do now is … make it easy for people to go to the next generation and continue to move forward," Barkley said. "The models that we built 18 months ago are a shadow of the things that we have today. And so, making sure that you have ways for people to upgrade and continue to get that innovation is a big part of some of the things that we had to change."

    Google is also focused on helping customers choose the right models for their particular applications.

    The Model Garden offers more than 100 closed and open models.

    "One thing that our most sophisticated customers are struggling with is how to evaluate models," Barkley said.

    To help customers choose, Google recently introduced some evaluation tools that allow users to put in a prompt and compare the way models respond.
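
    As a rough illustration of that kind of side-by-side evaluation, the sketch below sends one prompt to two models and prints the responses for comparison. It assumes the Vertex AI Python SDK and illustrative project and model names; Google's actual evaluation tooling goes well beyond this.

        # Minimal side-by-side prompt comparison (illustrative sketch, not
        # Google's evaluation tooling). Assumes the Vertex AI Python SDK and
        # access to the named models; project ID and model names are examples.
        import vertexai
        from vertexai.generative_models import GenerativeModel

        vertexai.init(project="my-project", location="us-central1")  # hypothetical project

        PROMPT = "Summarize the main benefits of multimodel AI platforms in two sentences."
        CANDIDATE_MODELS = ["gemini-1.5-pro", "gemini-1.5-flash"]  # example model names

        for model_name in CANDIDATE_MODELS:
            model = GenerativeModel(model_name)
            response = model.generate_content(PROMPT)
            print(f"--- {model_name} ---")
            print(response.text)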

    The vendor is also working on AI reasoning techniques, which it sees as a way to move the generative AI market forward.

    Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

    46 mins
  • AT&T's David C. Williams on how generative AI will force diversity in AI systems
    Sep 3 2024

    The growth of generative AI has put diversity front and center.

    In the last year, there have been concerns that GenAI systems such as ChatGPT and Google Gemini are not trained with enough diverse data sets.

    For instance, the Lensa app, introduced two years ago, let people of color generate avatars of themselves. Concerns were raised, however, after some users said Lensa's generated images changed their skin color.

    Incidents with AI tools like Lensa show that AI creators might not have enough diversity in their data sets.

    Conversely, there have also been incidents where AI systems clearly misrepresented diversity. Google shut down Gemini's image generator earlier this year after users started generating inaccurate depictions of historical figures: it rendered well-known white people, such as the Pope, as Black people.

    Google has since opened the model back up. Last week, the cloud provider revealed that its new image generation model, Imagen 3, will be rolled out to Gemini. The model will produce images of people again but won't support generation of photorealistic, identifiable individuals.

    Despite the hiccup in the beginning stages of the technology, hope exists, said David C. Williams, assistant vice president of automation at AT&T.

    Williams leads a team that previously used RPA, or robotic process automation, to drive business needs at AT&T and is now pivoting to generative AI. The shift has given Williams a view of how GenAI could affect diversity.

    "Generative AI is going to force diversity," Williams said on the latest Targeting AI episode.

    Cloud providers such as Google must include diversity in their data sets because failing to do so risks alienating people of color, he continued. If the creators of these systems don't show representation, many people of color could simply stop using the systems, which won't help the providers' business.

    On the other hand, people of color and women will gain new opportunities because of generative AI.

    "Those that embrace generative AI and figure out how to use it in the workplace will have an incredibly different value proposition than the rest," Williams said.

    Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

    28 mins
  • Generative AI fuels growth of online deepfakes threatening organizations and election integrity
    Aug 19 2024

    The growth of deepfakes in the past few years is a threat not only to organizations but also to the U.S. general election in November.

    Information security vendor Pindrop saw a sharp rise in deepfakes in the first few months of the year compared to the previous year.

    Deepfakes of Vice President Kamala Harris, former President Donald Trump, President Joe Biden and state-level candidates have circulated in the runup to the November U.S. general election.

    "Last year, we were seeing about one deepfake every single month," Vijay Balasubramaniyan, co-founder and CEO at Pindrop, said on the Targeting AI podcast. "Starting this year ... we started seeing a deepfake every single day across every single customer."

    A big reason for the stark increase is the growth of generative AI systems and voice cloning apps. Meanwhile, many people can't distinguish between a deepfake voice and an authentic one.

    About 120 voice cloning apps were on the market last year. This year, users (both legitimate and illegitimate) can choose from more than 350 voice cloning apps.

    Moreover, Balasubramaniyan said, fraudsters are using generative AI technology to scale their attacks.

    For example, generative AI systems can create deepfakes in many different languages -- a series of large language models from Meta can translate some 4,000 languages. Fraudsters can use these systems to create conversational deepfakes that respond to questions based on the words a target speaks.

    "They have managed to scale their attacks in massive ways, and in ways that we have not seen before generative AI. We're seeing that now," Balasubramaniyan said.

    The rapid advance of deepfake technology means organizations must remain aware and vigilant, said Harman Kaur, vice president of AI at Tanium, on the podcast. Tanium is a cybersecurity and management vendor based in Kirkland, Wash.

    "You have to have a plan to respond," Kaur said. "Do you have the tools to understand what type of threat has been invited into your network, and do you have the tools to fix it?"

    Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. Together, they host the Targeting AI podcast series.

    42 mins
  • Examining the tech stances of Kamala Harris and Donald Trump, plus J.D. Vance
    Aug 5 2024

    Democratic presidential candidate Kamala Harris is a product of two decades of California politics who has longstanding ties to the tech and AI communities in her home state.

    But in her role as President Joe Biden's vice president during the past four years, Harris was tasked with overseeing Biden's executive order on AI, with its emphasis on government regulation. And it was she who hosted leaders of tech giants at the White House last year and secured pledges from them to focus on AI safety.

    In sharp contrast is the GOP presidential nominee, Donald Trump.

    While Trump's running mate, Senator J.D. Vance (R-Ohio), has a background in tech venture capital, Trump himself has no tech experience but backs a largely hands-off approach to tech and AI companies.

    In simple terms, Trump is anti-regulation, while Harris favors a moderate regulatory stance on big tech and the suddenly emergent generative AI sector, a view that roughly parallels that of Biden.

    In this episode of the Targeting AI podcast from TechTarget Editorial, three commentators on the confluence of tech, AI and politics offered their analyses of the complex dynamics of the likely Harris-Trump faceoff.

    Makenzie Holland, senior news writer covering big tech and federal regulation at TechTarget, emphasized that "there is a huge focus from the Biden-Harris administration on AI safety and trustworthiness."

    Meanwhile, "we've obviously seen Trump attack the executive order," she noted.

    For R "Ray" Wang, founder and CEO of Constellation Research, the choice for the tech industry is fairly clear.

    "I stress the libertarian view because I think that's important to understand that tech doesn't necessarily want to be governed," Wang said.

    The other guest on the podcast, Darrell West, a senior fellow in the Governance Studies program at the Brookings Institution, has authored a book about policymaking in the AI era. He also pointed out the marked divergence of Harris and Trump on tech and AI issues.

    "Even though she historically has been close to the tech sector, I actually think she will maintain Biden's tough line on a lot of issues because that's where the party is these days," West said. "And also that's where public opinion is on many tech issues."

    Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.

    1 hr and 6 mins
  • A year in review with the Targeting AI podcast
    Jul 29 2024

    For the past year, the Targeting AI podcast has explored a broad range of AI topics, none more than the fast-evolving and sometimes startling world of generative AI technology.

    From the first guest, Michael Bennett, AI policy adviser at Northeastern University, the podcast has focused intently on the popularization of generative AI, while also touching on traditional AI.

    While that first episode centered on the prospects of AI regulation, Bennett also spoke about some of the controversies then emerging in the nascent stages of generative AI.

    "Organizations who have licenses to use and to sell photographers' works are pushing back,” Bennett said during the inaugural episode of the Targeting AI podcast.

    While Bennett's point of view illuminated the regulatory and ethical dimensions of the explosively growing technology, Michael Stewart, a partner at Microsoft's venture firm M12, discussed the startup landscape.

    With the rise of foundation model providers such as Anthropic, Cohere and OpenAI, generative AI startups over the last 12 months have chosen to partner with and be subsidized by cloud giants -- namely Microsoft, Google and AWS -- instead of seeking to be acquired.

    "This is a very ripe environment for startups that have a partnership mindset to work with the main tech companies,” Stewart said during the popular episode, which was downloaded more 1,000 times.

    The early stages of generative AI were marked by accusations of data misuse, particularly from artists, writers and authors.

    Our Targeting AI podcast hosts have also spoken to guests about data ownership and how large language models are affecting industries such as the music business.

    The podcast also explored new regulatory frameworks like President Joe Biden's executive order on AI.

    With some 27 guests from a diverse group of vendors and other organizations, the podcast took shape and laid the groundwork for a second year with plenty of new developments to explore.

    Coming up soon are episodes on Democratic presidential candidate Kamala Harris’ stances on AI and big tech antitrust actions, election deepfakes and tech giant Oracle's foray into generative AI.

    Listen to Targeting AI on Apple Podcasts, Spotify and all major podcast platforms, plus on TechTarget Editorial’s enterprise AI site.

    Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. Together, they host the Targeting AI podcast series.

    44 mins
  • AWS GenAI strategy based on multimodel ecosystem, plus Titan, Q and Bedrock
    Jul 15 2024

    AWS is quietly building a generative AI ecosystem in which its customers can use many large language models from different vendors, or choose to employ the tech giant's own models, Q personal assistants, GenAI platforms and Trainium and Inferentia AI chips.

    AWS says it has more than 130,000 partners, and hundreds of thousands of AWS customers use its AI and machine learning services.

    The tech giant provides not only the GenAI tools, but also the cloud infrastructure that undergirds GenAI deployment in enterprises.

    "We believe that there's no one model that's going to meet all the customer use cases," said Rohan Karmarkar, managing director of partner solutions architecture at AWS, on the Targeting AI podcast from TechTarget Editorial. "And if the customers want to really unlock the value, they might use different models or a combination of different models for the same use case."

    Customers find and deploy the LLMs on Amazon Bedrock, the tech giant's GenAI platform. The models are from leading GenAI vendors such as Anthropic, AI21 Labs, Cohere, Meta, Mistral and Stability AI, and also include models from AWS' Titan line.
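
    As a rough sketch of how a customer might try the same prompt against two different Bedrock-hosted models, the snippet below uses boto3's bedrock-runtime Converse API; the model IDs are examples and depend on regional availability and account access.

        # Illustrative sketch only: call two Bedrock-hosted models with the same
        # prompt via the Converse API. Model IDs are examples and may vary by
        # region and account entitlements.
        import boto3

        client = boto3.client("bedrock-runtime", region_name="us-east-1")

        PROMPT = "Write a two-sentence product description for a travel backpack."
        MODEL_IDS = [
            "anthropic.claude-3-sonnet-20240229-v1:0",  # Anthropic Claude 3 Sonnet
            "meta.llama3-70b-instruct-v1:0",            # Meta Llama 3 70B Instruct
        ]

        for model_id in MODEL_IDS:
            response = client.converse(
                modelId=model_id,
                messages=[{"role": "user", "content": [{"text": PROMPT}]}],
            )
            print(f"--- {model_id} ---")
            print(response["output"]["message"]["content"][0]["text"])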

    Karmarkar said AWS differentiates itself from its hyperscaler competitors, which all have their own GenAI systems, with an array of tooling needed to implement GenAI applications, as well as GPUs from AI hardware giant Nvidia and AWS' own custom silicon infrastructure.

    AWS also prides itself on its security technology and its GenAI competency system, which pre-vets and validates partners' competencies in putting GenAI to work for enterprise applications.

    The tech giant is also agnostic on the question of proprietary versus open source and open models, a big debate in the GenAI world at the moment.

    "There's no one decision criteria. I don't think we are pushing one [model] over another," Karmarkar said. "We're seeing a lot of customers using Anthropic, the Claude 3 model, which has got some of the best performance out there in the industry."

    "It's not an open source model, but we've also seen customers use Mistral and [Meta] Llama, which have much more openness," he added.

    Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 35 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. They co-host the Targeting AI podcast.

    22 mins
  • Walmart uses generative AI for payroll, employee experience
    Jul 1 2024

    The biggest global retailer sees itself as a tech giant.

    And with 25,000 engineers and its own software ecosystem, Walmart isn't waiting to see how GenAI technology will play out.

    The company is already providing its employees -- referred to by the retailer as associates -- with in-house GenAI tools such as the My Assistant conversational chatbot.

    Associates can use the consumer-grade, ChatGPT-like tool to frame a press release, write out guiding principles for a project or accomplish whatever else they want.

    "What we're finding is as we teach our business partners what is possible, they come up with an endless set of use cases," said David Glick, senior vice president of enterprise business services at Walmart, on the Targeting AI podcast from TechTarget Editorial.

    Another point of emphasis for Walmart's GenAI efforts is associates' healthcare insurance claims.

    Walmart built a summarization agent that has reduced the time it takes to process complicated claims from a day or two to an hour or two, Glick said.

    An important area in which Glick is implementing GenAI technology is payroll.

    "What I consider our most sacrosanct duty is to pay our associates accurately and timely," he said.

    Over the years, humans have monitored payroll. Now GenAI is helping them.

    "We want to scale up AI for anomaly detection so that we're looking at where we see things that might be wrong," Glick said. "And how do we have someone investigate and follow up on that."

    Meanwhile, as for the "build or buy" dilemma, Walmart tends to come down on the build side.

    The company uses a variety of large language models and has built its own machine learning platform, Element, to run them on.

    "The nice thing about that is that we can have a team that's completely focused on what is the best set of LLMs to use," Glick said. "We're looking at every piece of the organization and figuring out how can we support it with generative AI."

    Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. They co-host the Targeting AI podcast.

    24 mins
  • Lenovo stakes claim to generative AI at the edge
    Jun 17 2024

    While Apple garnered wide attention for its recent embrace of generative AI for iPhones and Macs, rival endpoint device maker Lenovo already had a similar strategy in place.

    The multinational consumer products vendor, based in China, is known for its ThinkPad line of laptops and for mobile phones made by its Motorola subsidiary.

    But Lenovo has also been advancing a “pocket to cloud” approach to computing for a few years. That strategy now includes GenAI capabilities residing on smartphones, AI PCs and laptops, as well as more powerful processing capacity in Lenovo data centers and customers’ private clouds.

    Since OpenAI’s ChatGPT large language model (LLM) disrupted the tech world in November 2022, GenAI systems have largely been cloud-based. Queries from edge devices run a GenAI prompt in the cloud, which returns the output to the user’s device.

    Lenovo’s strategy -- somewhat like Apple’s new one -- is to flip that paradigm and locate GenAI processing at the edge, routing outbound prompts to the data center or private cloud when necessary.
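
    A bare-bones sketch of that edge-first pattern is shown below; the helper functions are hypothetical stand-ins for an on-device model and a data center or private cloud endpoint, not Lenovo APIs.

        # Illustrative edge-first routing (not Lenovo's implementation): answer
        # on the device when possible and fall back to a private-cloud endpoint
        # only when the request exceeds local capability.

        def run_on_device(prompt: str) -> str | None:
            """Hypothetical on-device (NPU-backed) model; returns None if the
            request is too large or complex for local inference."""
            if len(prompt) < 500:  # crude stand-in for a capability check
                return f"[on-device answer to: {prompt!r}]"
            return None

        def run_in_private_cloud(prompt: str) -> str:
            """Hypothetical fallback to a data center or private-cloud LLM endpoint."""
            return f"[private-cloud answer to: {prompt!r}]"

        def answer(prompt: str) -> str:
            # Prefer the edge: lower latency, and proprietary data stays on the device.
            local = run_on_device(prompt)
            return local if local is not None else run_in_private_cloud(prompt)

        print(answer("Summarize today's shift-handover notes."))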

    The benefits include security, privacy, personalization and lower latency -- resulting in faster LLM responses and reducing the need for expensive compute, according to Lenovo.

    “Running these workloads at edge, on device, I'm not taking potentially proprietary IP and pushing that up into the cloud and certainly not the public cloud,” said Tom Butler, executive director, worldwide communication commercial portfolio at Lenovo, on the Targeting AI podcast from TechTarget Editorial.

    The edge devices that Lenovo talks about aren’t limited to the ones in your pocket and on your desk. They also include remote cameras and sensors in IoT AI applications such as monitoring manufacturing processes and facility security.

    “You have to process this data where it's created,” said Charles Ferland, vice president, general manager of edge computing at Lenovo, on the podcast. “And that is running on edge devices that are deployed in a gas station, convenience store, hospital, clinics -- wherever you want.”

    Meanwhile, Lenovo in recent months has rolled out partnerships with some big players in GenAI, including Nvidia and Qualcomm.

    The vendor is also heavily invested in working with neural processing units, or NPUs, in edge devices and innovative cooling systems for AI servers in its data centers.

    Shaun Sutner is a journalist with 35 years of experience, including 25 years as a reporter for daily newspapers. He is a senior news director for TechTarget Editorial's information management team, covering AI, analytics and data management technology. Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Together, they host the Targeting AI podcast.

    43 mins