Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications: part 1:
A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item.
This paper presents Chord, a distributed lookup protocol that addresses this problem.
Chord provides support for just one operation: given a key, it maps the key onto a node.
Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps.
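The layering described above can be made concrete with a short sketch. Here `lookup` is a hypothetical stand-in for Chord's key-to-node mapping (simple modulo placement rather than the real protocol), and each node is modeled as a plain dict acting as its local store; the point is only that a put/get service needs nothing from the lookup layer beyond "key maps to node."

```python
def lookup(nodes, key_hash):
    """Return the node responsible for key_hash. Placeholder placement
    rule (modulo); real Chord uses consistent hashing over an ID circle."""
    return nodes[key_hash % len(nodes)]

def put(nodes, key, value):
    node = lookup(nodes, hash(key))   # map the key onto a node...
    node[key] = value                 # ...and store the key/value pair there

def get(nodes, key):
    return lookup(nodes, hash(key)).get(key)

ring = [dict() for _ in range(4)]     # four toy nodes
put(ring, "song.mp3", b"bytes")
assert get(ring, "song.mp3") == b"bytes"
```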
Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing.
Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Peer-to-peer systems and applications are distributed systems without any centralized control or hierarchical organization, in which the software running at each node is equivalent in functionality.
A review of the features of recent peer-to-peer applications yields a long list: redundant storage, permanence, selection of nearby servers, anonymity, search, authentication, and hierarchical naming.
Despite this rich set of features, the core operation in most peer-to-peer systems is efficient location of data items.
The contribution of this paper is a scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures.
The Chord protocol supports just one operation: given a key, it maps the key onto a node.
Depending on the application using Chord, that node might be responsible for storing a value associated with the key.
Chord uses a variant of consistent hashing to assign keys to Chord nodes.
Consistent hashing tends to balance load, since each node receives roughly the same number of keys, and it requires relatively little movement of keys when nodes join and leave the system.
Previous work on consistent hashing assumed that nodes were aware of most other nodes in the system, making it impractical to scale to large numbers of nodes.
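The consistent-hashing rule can be illustrated with a minimal sketch: node names and keys are hashed onto the same identifier circle, and a key is assigned to the first node whose identifier equals or follows it. The identifier width `M`, the node/key names, and the SHA-1 truncation are assumptions of this sketch, not parameters from the paper (which uses a 160-bit SHA-1 identifier space).

```python
import hashlib
from bisect import bisect_left
from collections import Counter

M = 16                       # identifier bits for this sketch (the paper uses m = 160)

def chord_id(name: str) -> int:
    """Hash a node name or key onto the 2**M identifier circle."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

def successor(node_ids, key_id):
    """Consistent-hashing rule: a key belongs to the first node whose
    identifier equals or follows the key's identifier on the circle."""
    i = bisect_left(node_ids, key_id)       # node_ids must be sorted
    return node_ids[i % len(node_ids)]      # wrap around the ring

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
load = Counter(successor(nodes, chord_id(f"key-{i}")) for i in range(1000))
# `load` maps each node to its key count; shares come out roughly even,
# which is the load-balance property described above.
```

Note also the low-movement property: when a node joins, only the keys in the arc between its predecessor and itself move, since `successor` changes for no other identifier.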
In contrast, each Chord node needs “routing” information about only a few other nodes.
Because the routing table is distributed, a node resolves the hash function by communicating with a few other nodes.
In the steady state, in an N-node system, each node maintains information about only O(log N) other nodes, and resolves all lookups via O(log N) messages to other nodes.
Chord maintains its routing information as nodes join and leave the system; with high probability each such event results in no more than O(log² N) messages.
Three features that distinguish Chord from many other peer-to- peer lookup protocols are its simplicity, provable correctness, and provable performance.
Chord is simple, routing a key through a sequence of other nodes toward the destination.
A Chord node requires information about O(log N) other nodes for efficient routing, but performance degrades gracefully when that information is out of date.
This is important in practice because nodes will join and leave arbitrarily, and consistency of even O(log N) state may be hard to maintain.
Only one piece of information per node need be correct in order for Chord to guarantee correct (though slow) routing of queries; Chord has a simple algorithm for maintaining this information in a dynamic environment.
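That one piece of per-node information is the successor pointer, and the "correct though slow" fallback can be sketched directly: with only successors, a query still resolves by walking the ring one node at a time, in linearly many hops. The four-node ring below is an illustrative assumption.

```python
def between(a, b, x):
    """True if x lies in the circular half-open interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

def slow_lookup(successor_of, start, key_id):
    """Walk the ring one successor at a time: correct, but O(N) hops,
    versus O(log N) when finger tables are available and accurate."""
    n, hops = start, 0
    while not between(n, successor_of[n], key_id):
        n, hops = successor_of[n], hops + 1
    return successor_of[n], hops

# A toy four-node ring (IDs chosen for illustration) where each node
# knows only its successor.
successor_of = {0: 100, 100: 200, 200: 300, 300: 0}
node, hops = slow_lookup(successor_of, 0, 250)
# key 250 falls in (200, 300], so node 300 owns it: returns (300, 2)
```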
The rest of this paper is structured as follows.
Section 2 compares Chord to related work.
Section 3 presents the system model that motivates the Chord protocol.
Section 4 presents the base Chord protocol and proves several of its properties, while Section 5 presents extensions to handle concurrent joins and failures.
Section 6 demonstrates our claims about Chord’s performance through simulation and experiments on a deployed prototype.
Finally, we outline items for future work in Section 7 and summarize our contributions in Section 8.