Google I/O 2023: PaLM 2 Debut Shows Language Model Progress Although Toxicity, Economic and Environmental Concerns Abound

The News: Google introduced PaLM 2, the company’s next generation language model, at Google I/O 2023. PaLM 2 is a language model that Google promotes as having improved multilingual, reasoning, and coding capabilities over PaLM 1. Read the blog from Google here.

Analyst Take: Google is emphasizing that PaLM 2 incorporates new multilingual, reasoning, and coding advances aimed at making it more capable, faster, and more efficient than previous models, and the model played a major role in Google’s overall I/O 2023 marketing push. PaLM 2 also comes in a diverse array of sizes, targeted at making it easier to deploy for a wide range of use cases.

PaLM 2 is trained more heavily on multilingual text, spanning more than 100 languages, to boost understanding, generation, and translation of nuanced text such as idioms, poems, and riddles. In addition, PaLM 2’s dataset includes scientific papers and web pages that use mathematical expressions, improving its logic, common sense reasoning, and mathematics. For coding, PaLM 2 was pre-trained on a large quantity of publicly available source code datasets, covering popular programming languages such as JavaScript and Python, and it can also generate specialized code in languages like Prolog and Fortran.

From my view, Google needed to unveil PaLM 2 enhancements to counter Microsoft-backed OpenAI’s GPT-4 language model offering as Microsoft continues to ride the sales and marketing momentum gained from its AI-powered Bing and Edge debut in February. This includes Google enhancing its Language Model for Dialogue Applications (LaMDA) so that Google Bard, which uses AI to generate more conversational, contextual, and informative web search results, can improve web search by drawing on information from across the Internet to provide deeper, more contextual query results for users.

Specifically, Google heralded over 25 new products and features powered by PaLM 2, including expanding Bard to support new languages. Users can now use Workspace features to write in Gmail and Google Docs as well as organize Google Sheets. I find it encouraging that Sec-PaLM 2, a specialized version of PaLM 2 trained on security use cases, can provide potentially invaluable breakthroughs in cybersecurity analysis.

Refreshingly, Google coupled the PaLM 2 launch with a research paper that revealed and underlined some of the notable limitations of the model. For instance, the paper did not specify the data sources used to train PaLM 2 beyond broad categories such as web documents, mathematics, conversational data, and books.

However, Google does stress that the PaLM 2 dataset draws on a larger percentage of non-English data and a broader overall dataset than PaLM 1’s. I anticipate that Google and Microsoft will continue to tightly guard their respective data sources as competition intensifies throughout the generative AI segment as well as the broader AI realm.

Moreover, when fed overtly toxic prompts, such as violent and pornographic content, PaLM 2 generated toxic responses over 30% of the time, and it proved even more toxic when fed implicitly harmful prompts, generating toxic responses 60% of the time.

While PaLM 2 showed improvement over PaLM 1 in areas such as joke explanation, support for a wider range of languages and dialect conversion, and mathematical aptitude, I find that large language models still need adult supervision and a good deal of augmentation before becoming consistently trusted sources of knowledge.

I also believe Google needs to directly address the economic and environmental dimensions of LLM and generative AI technology. For example, the daily cost of running ChatGPT is apparently a staggering $700K (according to The Information’s findings). While access to a fully itemized breakdown of the $700K daily bill is not readily available, it’s not difficult to deduce that a substantial portion is spent on energy due to the high-powered servers, GPUs, and massive storage capacities used in AI applications.

Key Takeaways: Google PaLM 2 Shows Language Model Training is a Long and Winding Road

Overall, I expect that Google DeepMind research, buttressed by Google’s vast computational resources, can frequently deliver new capabilities that improve the experience of using Google products. I also believe that Google PaLM 2, while showing improvements in key areas, remains very much a work in progress due to considerations such as alarmingly high toxicity rates as well as the economic and environmental implications of scaling language models and generative AI.

Google needs to rapidly address the full range of concerns to assure sustained PaLM 2 progress or risk economic and environmental considerations acting as major constraints on the longer-term ecosystem impact of PaLM 2 and language models in general.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Alphabet Announces Q1 FY23 Results: Search and Cloud Lift Performance as Google Preps for AI Battles

The Battle for AI Domination Continues after Latest Google Announcement

Google Invests $300mn in Artificial Intelligence Start-Up Anthropic, Taking on ChatGPT

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.
