Google announces PaLM 2, its answer to GPT-4 – Ars Technica

Google

On Wednesday, Google introduced PaLM 2, a family of foundational language models comparable to OpenAI's GPT-4. At the Google I/O event in Mountain View, California, Google revealed that it already uses PaLM 2 to power 25 products, including its conversational AI assistant, Bard.

As a family of large language models (LLMs), PaLM 2 has been trained on an enormous volume of data and performs next-word prediction, producing the most likely text to follow a prompt supplied by a human. PaLM stands for "Pathways Language Model," and "Pathways" is a machine-learning technique created at Google. PaLM 2 follows the original PaLM, which Google announced in April 2022.
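As a rough sketch of the next-word prediction idea described above (a toy illustration with made-up scores, not anything resembling PaLM's actual implementation): a language model assigns a score, or "logit," to every token in its vocabulary, converts those scores into probabilities, and emits a likely continuation.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def predict_next_word(logits):
    """Greedy decoding: pick the highest-probability token."""
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Invented scores for the prompt "The sky is" -- purely illustrative.
logits = {"blue": 4.1, "green": 1.2, "falling": 2.3, "the": 0.5}
print(predict_next_word(logits))  # -> "blue"
```

Real models score tens of thousands of tokens at each step and often sample from the distribution rather than always taking the top choice, but the core loop is the same.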

According to Google, PaLM 2 supports over 100 languages and can perform "reasoning," code generation, and multilingual translation. During his Google I/O 2023 keynote, Google CEO Sundar Pichai said that PaLM 2 comes in four sizes: Gecko, Otter, Bison, and Unicorn. Gecko is the smallest and can reportedly run on a mobile device. Aside from Bard, PaLM 2 is behind AI features in Docs, Sheets, and Slides.

Example provided by Google of "reasoning" in PaLM 2.

Google

This is all well and good, but how does PaLM 2 stack up to GPT-4? In the PaLM 2 Technical Report, PaLM 2 appears to beat GPT-4 in some mathematical, translation, and reasoning tasks. But the reality may not match Google's benchmarks. In a quick evaluation of the PaLM 2 version of Bard, Ethan Mollick, a Wharton professor who often writes about AI, found that PaLM 2's performance appeared worse than GPT-4 and Bing on several informal language tests, which he detailed in a Twitter thread.


Until recently, the PaLM family of language models was an internal Google Research product with no consumer exposure, but Google began offering limited API access in March. Still, the first PaLM was notable for its enormous size: 540 billion parameters. Parameters are numerical variables that serve as the model's learned "knowledge," enabling it to make predictions and generate text based on the input it receives.
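To make the notion of a parameter count concrete (using invented layer sizes, not PaLM's real architecture), a model's parameter count is simply the total number of trainable weights and biases across its layers:

```python
# Illustrative only: layer sizes here are made up and far smaller than PaLM's.
# A dense layer mapping n_in inputs to n_out outputs has
# n_in * n_out weights plus n_out biases.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# A hypothetical three-layer network.
layers = [(512, 2048), (2048, 2048), (2048, 512)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(f"{total:,} parameters")
```

A 540-billion-parameter model is the same idea scaled up by several orders of magnitude, with most parameters living in attention and feed-forward blocks repeated across many layers.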

Example provided by Google of PaLM 2 translating between languages.

Google

Roughly speaking, more parameters mean more complexity, but there is no guarantee they are used efficiently. By comparison, OpenAI's GPT-3 (from 2020) has 175 billion parameters. OpenAI has never disclosed the number of parameters in GPT-4.

And this leads to the big question: How "big" is PaLM 2 in terms of parameter count? Google isn't saying, which has frustrated some industry experts who often campaign for more transparency into what makes AI models tick.

That isn't the only detail about PaLM 2 that Google has kept quiet. The company says that PaLM 2 has been trained on "a diverse set of sources: web documents, books, code, mathematics, and conversational data," but it does not go into detail about what exactly that data is.

As with other large language model datasets, the PaLM 2 dataset likely includes a wide variety of copyrighted material used without permission, as well as potentially harmful material scraped from the Internet. Training data critically influences the output of any AI model, so some experts have advocated the use of open datasets that can provide opportunities for scientific reproducibility and ethical scrutiny.

Example provided by Google of PaLM 2 writing program code.

Google

"Now that LLMs are products (and not just research), we are at a turning point: for-profit companies will become *specifically* less transparent about which components matter most," tweeted Jesse Dodge, a research scientist at the Allen Institute for AI. "Only if the open source community can organize together can we keep up!"

So far, criticism of hiding its secret sauce hasn't stopped Google from pursuing widespread deployment of AI models, despite a tendency in all LLMs to make things up out of thin air. During the Google I/O keynote, company representatives demoed AI features in several of its flagship products, which means a broad cross-section of the public could soon be wrestling with AI-generated errors.

And in terms of LLMs, PaLM 2 is far from the end of the story: in the I/O keynote, Pichai mentioned that a new multimodal AI model called "Gemini" was currently in training. As the race for AI dominance continues, Google users in the United States and 180 other countries (oddly excluding Canada and mainland Europe) can try PaLM 2 themselves as part of Google Bard, the experimental AI assistant.
