Machine unlearning.

  • Additional Information
    • Abstract:
      But Cao's particular method only worked for models that were far simpler than the LLMs behind today's AI chatbots. It gets worse, though, because AI-powered chatbots are also vulnerable to attacks in which information is concealed in the training data to trick the model into behaving in unintended ways. Such generalisation tends to undercut some of the statistical learning prowess that makes AI chatbots so powerful. [Extracted from the article]
    • Abstract:
      Copyright of New Scientist is the property of New Scientist Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)