
    DeepMind tests the limits of large AI language systems with 280-billion-parameter model

    Illustration by Alex Castro / The Verge

    Language generation is the hottest thing in AI right now, with a class of systems known as “large language models” (or LLMs) being used for everything from improving Google’s search engine to creating text-based fantasy games. But these programs also have serious problems, including regurgitating sexist and racist language and failing tests of logical reasoning. One big question is: can these weaknesses be improved by simply adding more data and computing power, or are we reaching the limits of this technological paradigm?

    This is one of the topics that Alphabet’s AI lab DeepMind is tackling in a trio of research papers published today. The company’s conclusion is that scaling up these systems further should deliver plenty of improvements. “One key finding of the paper is that the progress and capabilities of large language models is still increasing. This is not an area that has plateaued,” DeepMind research scientist Jack Rae told reporters in a briefing call.

    DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model’s size and complexity, meaning that Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia’s Megatron model (530 billion parameters).
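    A rough sense of where headline numbers like 175 billion or 280 billion come from can be had with a back-of-envelope calculation. The sketch below estimates the parameter count of a GPT-style decoder-only transformer; the hyperparameters are illustrative assumptions rather than the published configurations of GPT-3, Gopher, or Megatron, and the formula ignores biases, layer norms, and positional parameters, so it only lands in the right ballpark.

        # Back-of-envelope parameter count for a decoder-only transformer.
        # Hyperparameters below are illustrative assumptions, not the published
        # configurations of GPT-3, Gopher, or Megatron.

        def transformer_param_count(n_layers: int, d_model: int, vocab_size: int) -> int:
            """Approximate parameter count of a GPT-style decoder-only transformer."""
            # Per layer: attention projections for Q, K, V and output (4 * d_model^2)
            # plus a feed-forward block with hidden size 4 * d_model (8 * d_model^2).
            per_layer = 12 * d_model ** 2
            # Token embedding matrix (often shared with the output projection).
            embeddings = vocab_size * d_model
            return n_layers * per_layer + embeddings

        for name, layers, width in [("GPT-3-scale (illustrative)", 96, 12288),
                                    ("Gopher-scale (illustrative)", 80, 16384)]:
            total = transformer_param_count(layers, width, vocab_size=32_000)
            print(f"{name}: ~{total / 1e9:.0f}B parameters")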

    It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix.

    “I think right now it really looks like the model can fail in a variety of ways,” said Rae. “Some subset of those ways are because the model just doesn’t have sufficiently good comprehension of what it’s reading, and I feel like, for those class of problems, we are just going to see improved performance with more data and scale.”

    Not all problems with AI language systems can be solved with scale. But, he added, there are “other categories of problems, like the model perpetuating stereotypical biases or the model being coaxed into giving mistruths, that […] no one at DeepMind thinks scale will be the solution [to].” In these cases, language models will need “additional training routines” like feedback from human users, he noted.
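    As a loose illustration of what “feedback from human users” might feed into, the sketch below simply filters model outputs by human ratings before reusing them as fine-tuning data. The prompts, outputs, and field names are hypothetical, and this is a toy stand-in, not DeepMind’s training routine, and far simpler than full reinforcement learning from human feedback.

        # Toy sketch of folding human feedback into training data selection.
        # All prompts, outputs, and ratings here are hypothetical placeholders.

        samples = [
            {"prompt": "Summarise the article.", "output": "A fair summary...", "human_rating": 1},
            {"prompt": "Tell me about group X.", "output": "A stereotyped claim...", "human_rating": -1},
        ]

        # Keep only outputs that human reviewers approved; these pairs would then
        # serve as additional fine-tuning examples for the model.
        approved = [s for s in samples if s["human_rating"] > 0]
        print(f"{len(approved)}/{len(samples)} outputs retained for fine-tuning")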

    To come to these conclusions, DeepMind’s researchers evaluated a range of different-sized language models on 152 language tasks or benchmarks. They found that larger models generally delivered improved results, with Gopher itself offering state-of-the-art performance on roughly 80 percent of the tests selected by the scientists.
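    To make that setup concrete, here is a minimal sketch of the kind of comparison described above: score several model sizes on a set of tasks and count how often the largest one comes out on top. The task names, model sizes, and accuracy figures are hypothetical placeholders rather than DeepMind’s results, and the real study covered 152 benchmarks, not three.

        # Minimal sketch of a scaling comparison across model sizes.
        # Scores below are hypothetical placeholders, not DeepMind's results.
        from collections import defaultdict

        # task -> {model size: accuracy}
        results = {
            "sentiment_analysis": {"1B": 0.71, "7B": 0.78, "280B": 0.85},
            "summarization": {"1B": 0.42, "7B": 0.51, "280B": 0.60},
            "logical_reasoning": {"1B": 0.31, "7B": 0.33, "280B": 0.34},
        }

        wins = defaultdict(int)
        for task, scores in results.items():
            best_model = max(scores, key=scores.get)
            wins[best_model] += 1

        for model, count in sorted(wins.items(), key=lambda kv: -kv[1]):
            print(f"{model} is best on {count}/{len(results)} tasks ({count / len(results):.0%})")

    The flat “logical_reasoning” row is a deliberate choice in this toy example: it shows a task where the gain from scale is small, echoing the article’s caution that not every weakness shrinks as models grow.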

    In another paper, the company also surveyed the wide range of potential harms involved with deploying LLMs. These include the systems’ use of toxic language, their capacity to share misinformation, and their potential to be used for malicious purposes, like sharing spam or propaganda. All these issues will become increasingly important as AI language models become more widely deployed — as chatbots and sales agents, for example.

    However, it’s worth remembering that performance on benchmarks is not the be-all and end-all in evaluating machine learning systems. In a recent paper, a number of AI researchers (including two from Google) explored the limitations of benchmarks, noting that these datasets will always be limited in scope and unable to match the complexity of the real world. As is often the case with new technology, the only reliable way to test these systems is to see how they perform in reality. With large language models, we will be seeing more of these applications very soon.

    Title: DeepMind tests the limits of large AI language systems with 280-billion-parameter model
    Sourced From: www.theverge.com/2021/12/8/22822199/large-language-models-ai-deepmind-scaling-gopher
    Published Date: Wed, 08 Dec 2021 16:00:00 +0000
