Can Google parry generative AI’s existential threat?

Google’s search algorithm is no longer the only one that matters

When Sundar Pichai took the stage at the Google I/O developer conference in May, anticipation was high.

After several missteps by the company that shaped Internet search – including its first revenue decline in years and a disastrous February AI product demonstration that wiped 9 per cent from the company’s valuation – Pichai had to prove that Google hasn’t been fatally broadsided by the explosive growth of generative AI, spearheaded by OpenAI’s massively successful ChatGPT.

Pichai and his teams of thousands of software engineers had spent recent months mustering their efforts around a companywide ‘code red’ as more than 100 million people signed up for ChatGPT and the world reacted to what has arguably become the biggest competitive threat to Google in years.

That’s because ChatGPT – a chatbot built on OpenAI’s GPT-3.5 large language model (LLM) – can sidestep Google’s core search entirely, following simple or complex instructions to produce written summaries of facts and issues, product comparisons, personalised document templates, business documents, and more.

The prospect of putting an AI-driven personal assistant at the fingertips of every Internet user has electrified software engineers, who have rapidly tapped OpenAI’s bank of application programming interfaces (APIs) to extend their own products with generative AI capabilities that can, for example, summarise documents or create images with a few clicks.

Early experiments with generative AI produced mixed results, such as the Tay chatbot that Microsoft had to decommission in 2016 after its amplification of right-wing online content led it to develop a racist, sexist personality.

Earlier this year, Google’s own research project – the Bard LLM, built on Google’s LaMDA generative AI engine – went off the rails after a promotional tweet teasing its capabilities proved to contain an easily refutable factual error.

That single error wiped some $150 billion ($US100 billion) from Alphabet’s market capitalisation almost overnight – showing both how important the market now perceives generative AI to be, and how important it was that Pichai got it right the second time around.

The company’s decision to release Bard as a reactive move to ChatGPT’s success “was a mistake,” said Dr Toby Walsh, chief scientist with the University of NSW’s AI Institute, who told create that it is “worrying today to see companies jumping into this and throwing caution to the wind.”

“Google are now licking their wounds from the mistake of doing that.”

Pichai’s I/O demonstration, therefore, needed to show not only that Google has a long-term vision, but that it can preserve its wildly profitable commercial search business while pivoting it into a new era where generative AI is changing the way we expect information to be presented to us.

[Embedded video: Google’s deep dive into the Bard AI chatbot]

PaLM 2: Search Generative Experience

To be rolled out starting this year, the company’s new Search Generative Experience (SGE) – which uses a different LLM, called PaLM 2, capable of what the company calls “inference-time control over toxicity” of its content – will build generative AI outputs directly into Google Search.

PaLM 2 – which has also been adapted into Med-PaLM 2 and Sec-PaLM, its medical and security-focused variants – will power more than 25 products and features across Google’s portfolio, such as drafting contextually relevant emails in Gmail, translating between more than 100 languages in Workspace, writing and debugging application code, and generating presentations.

The model will also support a broad range of third-party applications, with firms like Canva, Deutsche Bank, Oxbotica, and Uber already showing off ways generative AI will improve their offerings.

[Embedded video: Leading companies build with generative AI on Google Cloud]

Google sees PaLM 2 – and Gemini, its still-in-development successor – as particularly critical in helping it retain the billions of users who have kept Google Search as the company’s cash cow for years.

This might, for example, include summaries of background information relevant to user searches, or automatically generated product comparisons to help users sift through information more quickly – all integrated directly into Google’s user interface.

“The reason we began deeply investing in AI many years ago is because we saw the opportunity to make Search better,” Pichai said during his I/O keynote. “And with each breakthrough, we’ve made it more helpful and intuitive.”

“As we look ahead, Google’s deep understanding of information combined with the unique capabilities of generative AI can transform how Search works yet again, unlocking entirely new questions that Search can answer, and creating increasingly helpful experiences that connect you to the richness of the web.”

Can search find itself again?

Yet behind the impressive technology demonstrations is the sense that Google’s generative AI push is now a do-or-die effort – and a core part of it will be improving Google Search and its underlying algorithms without overcluttering the experience.

It’s an experience that many users feel has already become too burdened by conflicting agendas and baked-in complexity that has compromised the utility of Google Search.

That utility, observers such as tech analyst Isaiah McCall have noted, has declined over time as Google’s top-secret algorithm intersperses useful search results with paid content, SEO-focused dross and irrelevant links that often appear in a different order for different users.

Google Search “has thrown specificity completely out the window,” says McCall, complaining that “search engines have stopped behaving like databases and are giving suggestions instead of results because an algorithm believes it knows your intentions and objectives better than you.”

Just how that algorithm works remains a closely guarded secret – with reports suggesting that Google Search factors in more than 200 variables as it combs its ever-changing archive of content for links to recommend.

The Australian Search Experience, a year-long research project run by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) with the support of search transparency body AlgorithmWatch, worked to unravel Google’s personalisation algorithm. The project enlisted more than 1000 citizen scientists to install a browser extension that collected over 350 million search results, then measured how much the results Google returns vary between different users entering the same searches.

“Search engine personalisation may be influencing individuals’ search results, and thereby shape what they know of the world,” the project’s website notes. “This may affect their personal decisions, and our collective decisions as a society – from how we spend our money or who we vote for to our attitudes on critical issues such as the safety of COVID-19 vaccines.”

Analysis of the collected data is ongoing, but the project team reports that early analyses showed “limited evidence of search personalisation”, with variation “largely driven by user location”, and suggested that results for “critical search topics” seem to be “manually curated”.

Current projects include analysing news search results to see which outlets are favoured more than others, and exploring the way Google services “operationalise ‘authoritativeness’ across socio-cultural issues and over time.”

[Embedded video: ChatGPT – Is it hype or the next big thing?]

A clean sweep of search

Yet personalisation and advertising are only two of the criticisms levelled at Google Search: harmful content remains another, with Google’s newly released Annual Transparency Report showing that it removed over 19 million YouTube videos globally during 2022 for violating community guidelines – including over 80,000 Australian videos.

Reflecting its concern that Google Search can be a conduit that sends users to misinformation, the company also removed more than 2000 Australian-uploaded videos said to violate its misinformation policies and over 3000 containing dangerous or misleading COVID-19 information.

Critics may argue that this activity amounts to censorship – an issue that Twitter owner Elon Musk, in particular, has railed against – but information retrieval expert and search engine researcher Professor Mark Sanderson, Dean for Research & Innovation, Engineering & Technology within the RMIT University STEM College, believes that such a change was inevitable as Google matured from its earliest days.

“Maybe 20 years ago, the people in charge of the algorithm were more of the mindset that they were going to reflect the underlying volume of content on the Internet,” he explained.

“But they’ve changed their mind and over a period of time have become more about what are the things that you do as a search engine operating in a society that wants people to feel OK doing certain things.”

“There probably has been a move to a more interventionist approach,” he said, noting that this change of philosophy has also driven “greater concern about the prominence of advertising” in Google’s approach.

“Google has done a better job of stopping people getting to unreliable news sites, which probably means it is curating search results a lot more than it used to.”

Yet Sanderson believes the current success of generative AI comes in part because meaningful curation of content has helped it avoid the toxicity of early projects like Tay.

“Generative methods have been quietly developing over time,” he explains, “but we haven’t noticed because whenever someone made one of these systems publicly available they would be made to be very toxic.”

“Because OpenAI by and large figured out how to block this toxicity, this very effective piece of technology was launched. It’s incredible the way that it can customise those answers for you – so the question is, will those customised answers show up in other aspects of search?”

“Having those 10 blue links on the search result page was never the end goal – and these technologies are creeping in.”

Charting the future of search

Toxicity isn’t the only problem facing the AI engineers who continue to refine Google’s search algorithms and the way they integrate with an increasingly important generative AI capability.

Years of AI research primed Google to respond to ChatGPT’s threat, with its better-formed I/O announcements laying down a technological framework by which Search and its many other properties will be complemented by new tools for summarising information in contextually appropriate ways.

Keeping users inside the Google productivity ecosystem remains fundamental to Google’s economics, which are based on presenting relevant advertising at appropriate points.

Also important will be efforts to reduce generative AI ‘hallucinations’, in which the tools embed invented facts in their outputs after drawing spurious inferences from patterns buried deep in their underlying models.

Restricting the breadth of content used to train LLMs can reduce the chance of hallucinations – Google says Med-PaLM 2 is nine times less likely to invent facts than the general PaLM model – and universal usage of the tools will depend on engineers’ success in helping generative AI models to question themselves and iteratively shake inaccuracies out of their outputs.

This is where AI has become the fly in Google’s ointment: because interaction with ChatGPT and its contemporaries feels so human, we are prone to trust them more than we should, no matter how many warning labels are put on the proverbial box. So when we discover that a generative AI has been feeding us tall tales or simply inventing facts, the implications for that trust are significant.

One recent University of Queensland-KPMG study explored the notion of trust in AI, finding that around 44 per cent of respondents said they are willing to rely on the output of AI – and that 54 per cent still have reservations.

Framed in terms of three key attributes – ability, humanity, and integrity – the study found that out of 17 countries surveyed, Australians were the least likely overall to express trust in AI’s ability to produce output that has a positive impact on humanity.

Far more significant was the difference between attitudes in Western countries – Australia, Finland, Germany, the UK and the US, among others – and the population centres of India and China, where faith in AI was sky-high in all three categories.

Given their massive populations and growing roles as both users and developers of AI, India and China’s faith in the technology could rapidly become a major challenge for Google, which years ago ceded China’s search engine market to domestic giant Baidu after it couldn’t stomach Chinese government interference.

[Related: Artificial Intelligence: A Matter of Trust (KPMG)]

With a range of ChatGPT-like features emerging this year, Baidu’s ‘Ernie’ AI – which reportedly has more than 120,000 companies eager to test its information processing and generative AI capabilities – typifies the kind of threat that innovative AI-based search firms will pose to Google’s search dominance.

Google’s Pichai, writing as Bard was introduced to the world, highlighted both the company’s grasp of the criticality of generative AI – which he called “the most profound technology we are working on today” – and his awareness that getting it wrong could be fatal for a company built on innovation.

AI, he wrote, “is the most important way we can deliver on our mission: to organise the world’s information and make it universally accessible and useful.”
