The Risks of Trusting AI

    • Abstract:
      This article discusses the risks associated with trusting artificial intelligence (AI) in scientific research. While AI models are increasingly being used in various scientific fields, such as bioengineering, veterinary medicine, and climatology, there are concerns about their limitations and potential biases. The authors argue that humans tend to attribute too much authority and trustworthiness to AI systems, which can lead to problems in research. They highlight the risks of relying on AI tools, such as the potential for cognitive illusions and the loss of diversity in human perspectives. The authors suggest strategies for mitigating these risks, including diversifying research approaches and being transparent about AI funding. They emphasize the importance of human involvement in scientific knowledge production and caution against assuming that AI automatically leads to better science. [Extracted from the article]
    • Copyright of Scientific American. Users should refer to the original published version of the material for the full abstract.