Illustration by Gracelynn Wan for Forbes

In 2016, a few months after becoming CEO of Google, Sundar Pichai made a sweeping proclamation: Google, whose name had become synonymous with search, would now be an “AI-first” company. Announced at Google’s massive I/O developer conference, it was his first major order of business after taking the reins of the company.

What AI-first meant, exactly, was murky, but the stakes were not. Two years earlier, Amazon had blindsided Google by releasing its voice assistant Alexa. Now a household name, Alexa was a coup that particularly aggrieved Google. “Organizing the world’s information” had long been the company’s mission, and a service like that should have been its birthright. At the conference, Google was releasing a competitor, named simply the Assistant, and as part of the launch, Pichai was reorienting the company around helpful AI.

Seven years later, Google finds itself in a similar position, again beaten to market in a field it should have dominated. But this time it’s worse: The usurper is OpenAI, a comparatively small San Francisco startup, and not a deep-pocketed giant like Amazon. The product is ChatGPT, a bot that can generate sitcom plots, resignation letters, lines of code, and other text on almost any subject conceivable as if written by a human—and it was built using a technological breakthrough Google itself had pioneered years ago. The bot, released in November, has captured the public’s imagination, despite Google announcing a similar technology called LaMDA two years ago.

What’s worse, Google’s chief search engine rival, Microsoft, is nourishing OpenAI with $10 billion and on Tuesday announced a new version of Bing with AI chat features even more advanced than ChatGPT—a potentially existential move for the future of internet search. During his keynote, Microsoft CEO Satya Nadella proclaimed a “new day” for search. “The race starts today,” he said. “We’re going to move fast and for us, every day we want to bring out new things.” The announcement came a day after Google seemingly rushed to release Bard, its own chatbot using a “much smaller” version of LaMDA technology, with limited availability. The company promised a wide release in “coming weeks.”

As many of its rivals expected, the $1.3 trillion “elephant in the room” has woken up. After Pichai declared the situation a “code red,” he enlisted cofounders Larry Page and Sergey Brin to review the company’s AI strategy. Brin, at least, has recently become so involved that he filed his first code review request in years, as first reported last week by Forbes.

Asked for comment, a Google spokesperson directed Forbes to the blog post published by Pichai on Monday announcing Bard and other AI updates.

But while the upstarts have a healthy respect for Google, they no longer fear it, despite its $280 billion in annual revenue and army of researchers and engineers. Google alums lead challengers like Adept, Cohere and Character.ai; those startups feel safe using Google as a cloud provider for their models and—in the case of Anthropic—have even welcomed it onto their cap tables with large investment checks. Said one former Google employee who left the company to found their own AI startup: “The pirates have their boats in the ocean, and we are coming.”

Google didn’t set out to be the vulnerable freight tanker in these uncharted waters. But a fraught history with AI and big innovations, including scandals around its AI ethics research, major backlash after the launch of a freakishly human-sounding AI called Duplex, and a persistent brain drain of AI talent, has left it lurching to play catch-up.

Hanging in the balance is Google’s famous search engine, with its sparse white homepage, one of the most iconic pieces of real estate on the internet. Altering it drastically could affect the advertising revenues (at least in the short term) that have made the company one of the most valuable of all time. But to take back its AI mantle, Google may have to change the very nature of what it means to ‘google’ something.


‘I’m sure there’s PTSD’

Five years ago, Google threw a coming-out party of sorts for its artificial intelligence ambitions. That year at I/O, Pichai unveiled Duplex, a stunningly human-sounding AI service that could book restaurant reservations for users. The AI was programmed to sound like a person by mimicking verbal tics like “um” and “uh,” taking long pauses and modulating its voice. The goal was for the machine to book appointments automatically, even if the business didn’t have a digital booking system like OpenTable. The AI would step in to robocall restaurants when reservations couldn’t be made online.

It was an impressive showing, and many were legitimately awestruck. But they were also unsettled, confused about whether the AI would identify itself as a robot to the people it called. News outlets around the world debated the ethics of a machine intentionally deceiving humans.

This was hardly the first time a high-profile Google announcement had inspired immediate public backlash. In 2012, its Google Glass smart glasses debuted to widespread scorn and pushed “Glasshole” into the public vernacular, thanks to a widely reported bar fight and photos that inspired a site called “White Men Wearing Google Glass.”

But the Duplex debacle stung. It was a marquee launch at a marquee event, meant to showcase the audacious direction Pichai intended to chart for the company. Instead it became a monument to Silicon Valley’s gee-whiz cluelessness: cool technology tethered to a lack of human foresight. The New York Times called it “somewhat creepy.” Zeynep Tufekci, the sociologist and writer, was more pointed: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying,” she tweeted. “Silicon Valley is ethically lost, rudderless and has not learned a thing.”

The backlash left scars. “I’m sure there’s PTSD,” prominent Silicon Valley PR leader Brooke Hammerling told Forbes. It also reinforced a low-grade timidity toward AI releases. Two former Google managers with knowledge of the company’s AI efforts cited the Duplex episode as one of many factors that contributed to an environment in which Google was slow to ship AI products.

Other controversies in the company’s AI division likely caused it to move more cautiously, too. In 2018, Google drew heat from its own employees after signing a deal with the Pentagon to provide technology for Project Maven, an effort to use AI to improve the accuracy of drone strikes. In response, the company declined to renew the contract and very publicly released a set of “AI Principles” intended to guide ethical development of the technology. In 2019, it was lambasted when it emerged that contractors were training its facial recognition software on unhoused people with “darker” skin.

The company came under fire again in late 2020 and early 2021, when it pushed out Timnit Gebru and Margaret Mitchell, the co-leads of its Ethical AI team, after they co-authored a paper criticizing biases in AI technology the company used in its search engine. The departures infuriated the research community. Jeff Dean, head of Google Research, later admitted that the company’s AI unit took a “reputational hit” because of the episode.

“It’s very clear that Google was [once] on a path where it could have potentially dominated the kinds of conversations we’re having now with ChatGPT,” Mitchell told Forbes. “The fact that the decisions made earlier were very shortsighted put it in a place now where there’s so much concern about any kind of pushback.”

With the caveat that no one truly knows what AI firepower Google may or may not be sitting on, it’s clear the company is facing a crisis: a landmark partnership between Microsoft, a powerful old foe, and OpenAI, a nimble emerging rival. The deal gives OpenAI integration in Microsoft’s lesser-used search engine and web browser, and more importantly, access to the valuable training data those products generate—a dangerous prospect for an incumbent like Google.

In order to release AI products more quickly, Google has reportedly said it will “recalibrate” the amount of risk it’s willing to take in releasing the technology—a stunning admission for a big tech company so closely scrutinized for the toxic content that crops up on its platforms. OpenAI CEO Sam Altman raised an eyebrow at the strategy in a subtweet last month. “OpenAI will continually decrease the level of risk we are comfortable taking with new models as they get more powerful,” he wrote. “Not the other way around.”


‘Our guys got too lazy’

If it weren’t for Google, ChatGPT might not exist.

In 2017, a cadre of Google researchers published a seminal AI paper, “Attention Is All You Need,” proposing a new neural network architecture for analyzing text: the transformer. The invention became foundational to generative AI tech—apps like ChatGPT and its ilk that create new content.

That includes Google’s own large language model, LaMDA. First announced in 2021, the bot generates text to engage in complex conversations. When Google demoed it at I/O that year, the company had LaMDA speak from the perspective of the dwarf planet Pluto and a paper airplane. The technology worked so well that Blake Lemoine, an engineer working on the project, claimed it was sentient and had a soul (Google dismissed the claim, and later Lemoine himself).

Now all but one of the paper’s eight coauthors have left. Six have started their own companies, and one has joined OpenAI. Aidan Gomez, one of the paper’s authors and CEO of AI rival Cohere, said Google’s environment was too rigid for him. “It is a matter of the freedom to explore inside a huge corporation like Google,” he told Forbes. “You can’t really freely do that product innovation. Fundamentally, the structure does not support it. And so you have to go build it yourself.”

Wesley Chan, who founded Google Analytics and is now a cofounder of FPV Ventures, put it more bluntly. Google’s “code red,” Chan said, was received internally as an admission that “our guys got too lazy.”

Still, Google has scale on its side. As of December, the company had more than 190,000 full-time employees. Even after undergoing the largest round of layoffs in its 25-year history last month—cutting some 12,000 jobs, or 6% of the workforce—the company is still massive. Worth noting: When Pichai announced the cuts, he said he was doing so with an eye toward refocusing on AI. “Eventually if this ever goes big, which is what we’re seeing now, Google will just come in,” Emad Mostaque, CEO of Stability AI, known for its AI art generator Stable Diffusion, told Forbes. “I don’t want to compete against Google on their core competency.”


‘Being shaken up’

In 2004, Google surpassed Yahoo in market cap—just two months after its $23 billion IPO. Its ascension and Yahoo’s decline were widely viewed as a case study in the Innovator’s Dilemma, a theory that posits big companies often lose their market advantage by favoring widely used existing products over new and developing technologies that might someday replace them. Almost two decades later, Google seems to be facing a similar scenario.

“It was [Google’s] institutional inertia and the fear of cannibalizing their core business that stopped them,” said Mostaque. “Now this is being shaken up a bit.”

Google has other business reasons to keep its AI work close to the vest. While it remains a major contributor to the open source movement, it’s also a big public company that needs to protect its IP and competitive advantage. “At some point though, it was difficult for Google, understandably, to release lots of their cutting edge models,” said David Ha, a former Google research scientist whom Mostaque hired away to lead strategy at Stability AI. “They had their reasons to do that, sure. Some of which may be sound, but fundamentally, my interest is to democratize these models.”

In addition to Bard, Google said this week that it will be infusing more AI into its search engine, using the technology to answer complex queries and distill the results into a single blurb of information. In one example Google cited, the AI conjures up a detailed answer to whether it’s easier to learn the guitar or the piano. (ChatGPT can answer the same question, though its response is less specific.)

Pichai has rejected the idea of Google being hamstrung by the Innovator’s Dilemma. “I see this as a chance to rethink and reimagine and drive search,” Pichai said on an earnings call last week. “You will see us be bold, put things out, get feedback, and iterate and make things better.”

Some venture capitalists think Google is poised to make a big splash. The company has too much institutional history in AI to just roll over, said Lonne Jaffe, managing director at Insight Partners. “This is what they’ve been working on for the last 15 years,” he said.

“Just being first isn’t enough. Microsoft knows this better than anybody else,” said Nicolai Wadstrom, founder of BootstrapLabs. “It’s about how you find utility value that can be scalable, and Google is very focused on that.”

Indeed, no one is willing to write off Google, not even its robot competitor. Enter the prompt, “Why did Google fall behind in artificial intelligence development?” into ChatGPT, and here’s what you get:

“Google has not fallen behind in artificial intelligence development. It is still a leading company in the field and continues to invest in and make significant contributions to AI research and development. In recent years, other companies such as OpenAI and Chinese tech companies such as Baidu, Tencent, and Alibaba have emerged as strong players in the AI space, but Google remains a prominent player and leader in the field.”

And ChatGPT is only trained on data through 2021. It doesn’t even know it has a rival in Bard yet.