A classification paradigm for distributed vertically partitioned data.
- Author(s): Basak J; Kothari R
- Source:
Neural computation [Neural Comput] 2004 Jul; Vol. 16 (7), pp. 1525-44.
- Publication Type:
Journal Article
- Language:
English
- Additional Information
- Source:
Publisher: MIT Press; Country of Publication: United States; NLM ID: 9426182; Publication Model: Print; Cited Medium: Print; ISSN: 0899-7667 (Print); Linking ISSN: 0899-7667; NLM ISO Abbreviation: Neural Comput; Subsets: MEDLINE
- Publication Information:
Original Publication: Cambridge, Mass. : MIT Press, c1989-
- Abstract:
In general, pattern classification algorithms assume that all the features are available during the construction of a classifier and its subsequent use. In many practical situations, data are recorded on different servers that are geographically separated, and each server observes features of local interest. The underlying infrastructure and other logistics (such as access control) in many cases do not permit continual synchronization. Each server thus has a partial view of the data, in the sense that feature subsets (not necessarily disjoint) are available at each server. In this article, we present a classification algorithm for this distributed, vertically partitioned data. We assume that local classifiers can be constructed based on the local partial views of the data available at each server. These local classifiers can be any of the many standard classifiers (e.g., neural networks, decision trees, k-nearest neighbors). Often these local classifiers are constructed to support decision making at each location, and our focus is not on these individual local classifiers. Rather, our focus is on constructing a classifier that can use these local classifiers to achieve an error rate as close as possible to that of a classifier having access to the entire feature set. We empirically demonstrate the efficacy of the proposed algorithm and also provide theoretical results quantifying the loss incurred compared to the situation where the entire feature set is available to a single classifier. (An illustrative sketch of this setting follows the record below.)
- Publication Date:
Date Created: 20040529; Date Completed: 20040628; Latest Revision: 20191210
- Publication Date:
20231215
- DOI:
10.1162/089976604323057470
- Accession Number:
15165399
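The abstract describes the vertically partitioned setting but this record does not spell out the authors' combination algorithm. Below is a minimal sketch, assuming a simple average of the local classifiers' class posteriors as a stand-in aggregator; the feature split, the scikit-learn models, and the averaging rule are all illustrative assumptions, not the paper's method (see the DOI above for that).

```python
# Illustrative sketch only: averaging local class posteriors is an assumed
# stand-in combination rule, not the algorithm of Basak & Kothari (2004).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Full data set; in practice each server would only ever see its own columns.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Vertical partition: overlapping feature subsets, one per server
# (hypothetical split chosen for illustration).
partitions = [list(range(0, 5)), list(range(3, 9)), list(range(7, 12))]

# Each server trains a standard classifier on its partial view, as the
# abstract suggests (neural network, decision tree, k-nearest neighbors).
local_models = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    DecisionTreeClassifier(random_state=0),
    KNeighborsClassifier(n_neighbors=5),
]
for model, cols in zip(local_models, partitions):
    model.fit(X_tr[:, cols], y_tr)

def combined_predict(X_new):
    """Average the local class posteriors and take the argmax (assumed rule)."""
    probs = np.mean([m.predict_proba(X_new[:, cols])
                     for m, cols in zip(local_models, partitions)], axis=0)
    return probs.argmax(axis=1)

# Reference point for the paper's loss quantification: a single classifier
# with access to the entire feature set.
full_model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("combined accuracy: ", np.mean(combined_predict(X_te) == y_te))
print("full-view accuracy:", np.mean(full_model.predict(X_te) == y_te))
```

The gap between the two printed accuracies is the quantity the paper's theoretical results bound; any aggregator that narrows it while touching only the local views fits the setting described in the abstract.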