A Secure GNN Training Framework for Partially Observable Graph.

  • Additional Information
    • Abstract:
      Graph Neural Networks (GNNs) are susceptible to adversarial injection attacks, which can compromise model integrity, reduce accuracy, and pose security risks. However, most current countermeasures focus on enhancing the general robustness of GNNs rather than directly addressing these specific attacks. The challenge stems from the difficulty of protecting every node in the graph and from the defender's lack of knowledge about the attackers. We therefore propose a secure training strategy for GNNs that counters adversarial injection attacks and overcomes the partial observability that limits existing defense mechanisms, where defenders see only the graph's post-attack structure and node attributes and cannot identify which nodes are compromised. Our strategy not only protects specific nodes but extends security to all nodes in the graph. We model the graph security problem as a Partially Observable Markov Decision Process (POMDP), use Graph Convolutional Memory (GCM) to transform POMDP observations into states with temporal memory, and then apply reinforcement learning to solve for the optimal defense policy. Finally, we prevent the model from learning from malicious nodes by limiting the convolutional scope, thereby defending against adversarial injection attacks. Our defense is evaluated on five datasets, achieving accuracies of 74% to 86.7%, an improvement of approximately 5.09% to 100.26% over post-attack accuracies. Compared with a range of traditional baseline models, our method improves accuracy by 0.82% to 100.26%. (A hedged code sketch of the scope-limiting idea appears after this record.)
    • Source: Electronics (ISSN 2079-9292), published by MDPI.
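
As a rough illustration of the "limiting the convolutional scope" step described in the abstract, the sketch below masks edges incident to suspected injected nodes before a standard GCN propagation, so those nodes contribute nothing to their neighbors' representations. This is a minimal sketch under stated assumptions, not the authors' implementation: the names `mask_adjacency` and `gcn_layer`, the dense-adjacency setup, and the hand-set binary `suspected` vector (which in the paper's framework would come from the learned defense policy) are all illustrative.

```python
import torch

def mask_adjacency(adj: torch.Tensor, suspected: torch.Tensor) -> torch.Tensor:
    """Zero out all edges incident to suspected nodes (1 = suspected).

    Hypothetical helper: stands in for whatever action the defense
    policy takes to exclude injected nodes from message passing.
    """
    keep = 1.0 - suspected.float()            # 1 for trusted nodes, 0 otherwise
    return adj * keep.unsqueeze(0) * keep.unsqueeze(1)

def gcn_layer(adj: torch.Tensor, x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One symmetric-normalized GCN step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + torch.eye(adj.size(0))      # add self-loops
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0  # guard isolated nodes
    norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(norm @ x @ weight)

# Toy usage: 5 nodes; node 4 is flagged as injected (here hard-coded,
# whereas the paper would obtain such flags from the RL defense policy).
adj = torch.tensor([[0, 1, 0, 0, 1],
                    [1, 0, 1, 0, 1],
                    [0, 1, 0, 1, 1],
                    [0, 0, 1, 0, 1],
                    [1, 1, 1, 1, 0]], dtype=torch.float32)
x = torch.randn(5, 8)                         # node features
w = torch.randn(8, 4)                         # layer weights
suspected = torch.tensor([0, 0, 0, 0, 1])
h = gcn_layer(mask_adjacency(adj, suspected), x, w)
# Node 4's features no longer reach any neighbor's representation.
```

One appeal of masking at the aggregation step is that it composes with any GCN-style layer: the convolutional scope shrinks around suspected nodes without changing the layer's weights or training procedure.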