The Implementation of the Flow Control Algorithm for the Distributed 'Broadcast-Route' Networks in the Finite Message Size Case.
The goal of the proposed algorithm is to achieve infinite scalability of the distributed networks that use the 'broadcast-route' method to propagate requests through the network in the case of a finite message size. 'Broadcast-route' here means the method of request propagation in which a host broadcasts every request it receives on every connection it has except the one the request came from, and later routes the responses back to that connection. 'Finite message size' means that the messages (requests and responses) can have a size comparable to the network packet size and are 'atomic' in the sense that another message transfer cannot interrupt the transfer of a message. That is, the first byte of the subsequent message can be sent over the communication channel only after the last byte of the previous one.
Even though the algorithm described below can be used for various networks with the 'broadcast-route' architecture, its primary target is the Gnutella network, which is widely used as a distributed file search and exchange system. Its protocol specifications can be found at:
http://www.gnutelladev.com/docs/capnbry-protocol.html
http://www.gnutelladev.com/docs/our-protocol.html
http://www.gnutelladev.com/docs/gene-protocol.html
To achieve the infinite scalability of the network, it is essential to have some sort of flow control algorithm built into it. Such an algorithm for Gnutella and other similar 'broadcast-route' networks was described in "The Flow Control Algorithm for the Distributed 'Broadcast-Route' Networks with Reliable Transport Links" by S. Osokine [1], but it was designed under the assumption that the messages can be broken into arbitrarily small pieces (the continuous traffic case). This is not always the case - for example, the Gnutella messages are atomic in the sense mentioned above (several messages cannot be sent simultaneously over the same link) and can be quite large - several kilobytes. Thus it is necessary to adapt the continuous-traffic flow control algorithm to the situation when the messages are atomic and have finite size (the discrete traffic case). This adaptation and the algorithms needed to achieve it are the subject of this document.
At the same time this document describes some flow control implementation details that were not covered in [1] because of its high-level approach to the description of the algorithms; here the description is more detailed.
The flow control algorithm proposed in [1] uses continuous-space equations to monitor and control the traffic flows and loads on the network. That is, all the variables are assumed to be infinite-precision floating-point numbers. For example, the typical equation ([1], Eq. 13 - it describes the rate of the traffic to be passed to other connections) might look like this:
where x is the rate of the incoming forward-traffic (requests) passed by the Q-algorithm to be broadcast on other connections.
A direct implementation of such equations would mean that when, say, 40 bytes of requests arrive on the connection, the Q-algorithm might require that 25.3456 bytes of this data be forwarded for the broadcast and 14.6544 bytes be dropped. Obviously this would not be possible for two reasons - first, it is not possible to send a non-integer number of bytes, and second, these 40 bytes might represent a single request.
The first obstacle is not very serious - after all, we might send 25 bytes and drop 15 bytes. The resulting error would not be a big one, and a good algorithm should be tolerant of computational and rounding errors of such magnitude.
The second obstacle is worse - since the message (in this case, a request) is atomic, it is not possible to break it into two parts, one of which would be sent and another dropped. We have to drop or send the whole request as an atomic unit. Thus regardless of whether we decide to send or to drop the messages that cannot be fully sent, the Q-algorithm would treat all the messages in the same way, effectively passing all the incoming messages for broadcast or dropping all of them. Such behavior would introduce an error too large to be tolerated by any conceivable flow control algorithm, so it is clearly unacceptable, and we have to invent some way to deal with this situation.
A similar problem arises when the fair bandwidth-sharing algorithm tries to allocate space for the requests and responses in the packet to be sent out. Let's say we would like to evenly share a 512-byte packet between requests and responses, and it turns out that we have twenty 30-byte requests and a single 300-byte response - what should we do? Should we send a 510-byte packet with the response and 7 requests, and then send a 90-byte packet with 3 more requests, or should we send a 600-byte packet with the response and 10 requests? The first decision would not evenly share the packet space and bandwidth, possibly resulting in an unfair bandwidth distribution, and the second would increase the connection latency because of the increased packet size. And what if the response is bigger than 512 bytes to begin with?
Such decisions can have a significant effect on the flow control algorithm behavior and should not be taken lightly. So first of all, let's draw a diagram of the Gnutella message routing node and see which blocks will have to make these decisions.
Fig. 1 presents the high-level block diagram of the Gnutella router (the part of the servent responsible for message sending and receiving):
Fig. 1. The Gnutella router diagram.
Essentially the router consists of several TCP connection blocks, each of which handles the incoming and outgoing data streams from and to another servent, and of the virtual Connection 0 block. The latter handles the stream of requests and responses of the router's servent User Interface and of the Request Processing block. This block is called 'Connection 0', since the data from it is handled by the flow control algorithms of all other connections in a uniform fashion - as if it had come from a normal TCP Connection block. (See, for example, the description of the fairness block in [1].)
As far as the TCP connections are concerned, the only difference between Connection 0 and any TCP connection is that the requests arriving from this "virtual" connection might have a hop value equal to -1. This means that such requests have not arrived from the network, but rather from the servent User Interface Block through the "virtual" connection - these requests have never been transferred through the Gnutella network (GNet). The diagram shows that Connection 0 interacts with the servent UI Block through some API; there are no requirements for this API other than the natural one - that the router and the UI Block developers should be in agreement about it. In fact, this API might closely mimic the normal Gnutella TCP protocol on a localhost socket, if the developers find that convenient.
The Request Processing Block is responsible for the servent reaction to the request - it processes the requests to the servent and sends back the results (if any). The API between the Connection 0 and the Request Processing Block of the servent obeys the same rules as the API between Connection 0 and the servent's User Interface Block - it is up to the servent developers to agree on its precise specifications.
The simplest example of a request is the Gnutella file search request - the Request Processing block performs the search of the local file system or database and returns the matching filenames (if found) as the search result. But of course, this is not the only imaginable example of a request - it is easy to extend the Gnutella protocol (or to create another one) to deliver 'general requests', which might be used for many purposes other than file searching.
The User Interface and the Request Processing Blocks together with their APIs (or even the Connection 0 block) can be absent if the Gnutella router (GRouter from now on) works without the User Interface or the Request Processing Blocks. That might be the case, for example, when the servent just routes the Gnutella messages, but is not supposed to initiate the searches and display the search results, or is not supposed to perform the local file system or database searches.
The word 'local' here does not necessarily mean that the file system or the database being searched is physically located on the same computer that runs the GRouter. It just means that as far as the other servents are concerned, the GRouter provides an access point to perform searches on that file system or database - the actual physical location of the storage is irrelevant. The algorithms presented here were specifically designed in such a way that regardless of the API implementation and its throughput the GRouter might disregard these technical details and act as if the local interface was just another connection, treating it in a uniform fashion. This might be especially important when the local search API is implemented as a network API and its throughput cannot be considered infinite when compared to the TCP connections' throughput. Thus such a case is just mentioned here and won't be presented separately - it is enough to remember that the Connection 0 can provide some way to access the 'local' file system or database.
In fact, one of the ways to implement the GRouter is to make it a 'pure router' - an application that has no user interface or request-processing capabilities of its own. Then it could use the regular Gnutella client running on the same machine (with a single connection to the GRouter) as an interface to the user or to the local file system. Other configurations are also possible - the goal here was to present the widest possible array of implementation choices to the developer.
However, it might be the case that the Connection 0 would be present in the GRouter even if it performs no searches and has no User Interface. For example, it might be necessary to use the Connection 0 as an interface to a special request handler. That is, there might be some special requests that are supposed to be answered by the GRouter itself and are used by the GNet for its own infrastructure-related purposes. One example of such a request is the Gnutella network PING, used (among its other functions) internally by the network to allow the servents to find new hosts to connect to. Even if all the GRouter connections are to the remote servents, it might be useful for it to answer the PING requests arriving from the GNet. In such a case the Connection 0 would handle the PING requests and send back the corresponding responses - the PONGs, thus advertising the GRouter as being available for connection.
Still, in order to preserve the generality of the algorithms' description in this document we assume that all the blocks shown in the diagram are present.
Finally, the word 'TCP' in the text and the diagram above does not necessarily mean a regular Gnutella TCP connection, though this is certainly the case when the presented algorithms are used in the Gnutella network context. However, it is possible to use the same algorithms in the context of other similar 'broadcast-route' distributed networks, which might use different transport protocols - HTTP, UDP, radio broadcasts - whatever the transport layers of the corresponding network would happen to use.
Having said that, we'll continue to use the words 'TCP', 'GNet', 'Gnutella', etc. throughout this document to avoid naming confusion - it is easy to apply the approaches presented here to other similar networks.
Now let's go one level deeper and present the internal structure of the Connection blocks shown in Fig. 1.
The Connection block diagram is shown in Fig. 2:
Fig. 2. The Connection block diagram.
The messages arriving from the network are split into three streams:
- the requests, which are passed to the Q-algorithm to be dropped or passed on for broadcast on the other connections;
- the responses, which are routed to the connections on which their requests originally arrived;
- the OFC (Outgoing Flow Control) PINGs and PONGs, which are handled by the flow control algorithm itself and are never routed further.
The messages to be sent to the network arrive through several streams:
- the requests from the other connections (and from Connection 0), which are placed into the hop-layered request buffers;
- the responses routed to this connection, which pass through the Response prioritization block and are placed into the response buffer with timeout;
- the OFC PINGs and PONGs generated by the flow control algorithm itself.
All these messages are processed by the 'RR-algorithm & OFC block' [1], which decides when and which messages to send; it is this block that implements the Outgoing Flow Control and Fair Bandwidth Sharing functionality described in [1]. It decides how much data can be sent over the outgoing TCP connection, and how the resulting outgoing bandwidth should be shared between the logical streams of requests and responses and between the requests from different connections. In the meantime the messages are stored in the hop-layered request buffers (in case of the requests) and in the response buffer with timeout (in case of the responses).
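To make the buffer terminology concrete, here is a minimal Python sketch of the two buffer types (the class and method names are illustrative assumptions, not taken from [1] or from any Gnutella implementation; the timeout value is arbitrary):

```python
import time
from collections import deque

class HopLayeredRequestBuffer:
    """Requests keyed by hop count; low-hop requests go out first, so the
    high-hop ones are the first to be left behind when bandwidth is short."""
    def __init__(self):
        self.layers = {}                     # hop count -> FIFO of messages

    def put(self, hops, message):
        self.layers.setdefault(hops, deque()).append(message)

    def get(self):
        """Oldest message from the lowest-hop non-empty layer, or None."""
        for hops in sorted(self.layers):
            if self.layers[hops]:
                return self.layers[hops].popleft()
        return None

class ResponseBuffer:
    """Responses in priority order (see the Response prioritization block),
    purged when they sit in the buffer longer than timeout_s."""
    def __init__(self, timeout_s=60.0):      # timeout value is illustrative
        self.timeout_s = timeout_s
        self.items = []                      # (priority, arrival_time, message)

    def put(self, priority, message):
        self.items.append((priority, time.monotonic(), message))
        self.items.sort(key=lambda item: item[0])   # head = best priority

    def purge_expired(self):
        now = time.monotonic()
        self.items = [i for i in self.items if now - i[1] < self.timeout_s]
```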
The OFC messages are never stored - the PONGs are just used to control the sending operations, and the PINGs should cause an immediate PONG-sending. Since it has been recommended in [1] to switch off the TCP Nagle algorithm, this PONG-sending operation should result in an immediate TCP packet sending, thus minimizing the OFC PONG latency for the OFC algorithm on the peer servent. Note that if the peer servent does not implement a similar flow control algorithm, we cannot count on it doing the same - it is likely to delay the OFC PONG for up to 200 ms because of its TCP Nagle algorithm actions. This might result in a lower effective outgoing bandwidth of the GRouter connection to such a host; however, if the 512-byte packets are used, the resulting connection bandwidth can still be as high as 25-50 kbits/sec. Still, it is expected that the connection management algorithms (which are outside the scope of this document) would try, on a best-effort basis, to connect to the hosts that use similar flow control algorithms.
It should be noted that this approach to OFC PING handling effectively excludes the OFC PONGs from the Outgoing Flow Control algorithm. Since these PONGs are sent at once and thus have the highest priority in the outgoing stream, a DoS attack is possible when the attacker floods its peers with 0-hop, 1-TTL PINGs and causes them to send only PONGs on the connections to the attacker. This can be especially easy to achieve when the attacked hosts have an asymmetric (ADSL or similar) connection.
However, this attack is likely to cause the extremely high latency and/or TCP buffer overflow on the attacked host's connection to the attacker and result in the connection being closed, which would terminate the attack, as far as the attacked host is concerned. Furthermore, this attack would not propagate over the GNet since by definition it can be performed only with 1-TTL PINGs, which can travel only over 1-hop distance.
The diagrams presented in the previous sections show the GRouter and the flow control algorithm building blocks and the interaction between them. These diagrams essentially illustrate the flow control algorithm as presented in [1] - no assumptions were made so far about the algorithm changes necessary to allow for the atomic messages of the finite size.
However, Fig. 2 makes it easy to see what parts of the GRouter are affected by the fact that the data flow cannot be treated as a sequence of the arbitrarily small pieces. The affected blocks are the ones that make the decisions concerning the individual messages - requests and responses. Whenever the decision is made to send or not to send a message, to transfer it further along the data stream or to drop - this decision necessarily represents a discrete 'step' in the data flow, introducing some error into the continuous-space data flow equations described in [1]. The size of the message can be quite large (at least on the same order of magnitude as the TCP packet size of 512 bytes suggested in [1]). So the blocks which make such decisions have to implement the special algorithms which would bring the data flow averages to the levels required by the continuous flow control equations.
The blocks that have to make the decisions of that nature and which are affected by the finite message size are shown as circles in Fig. 2. These are the 'Q-algorithm' block and 'RR-algorithm & OFC block'.
The 'Q-algorithm' block tries to determine whether the responses to the requests coming to it are likely to overflow the outgoing TCP connection bandwidth, and if this is the case, limits the number of requests to be broadcast, dropping the high-hop requests. The output of the Q-algorithm is defined by the Eq. 13 in [1] and is essentially a percentage of the incoming requests' data that the Q-algorithm allows to pass through and to be broadcast on other connections. This percentage is a floating-point number, so it is difficult to broadcast an exact percentage of the incoming request data within a finite time interval - there's always going to be an error proportional to the average request size. However, it is possible to approximate the precise percentage value by averaging the finite data size values over a sufficiently large amount of data. The description of such an averaging algorithm will be presented further in this document.
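One simple way to perform such averaging (shown here only to fix the idea - the class name and interface are invented for illustration, and the document's own averaging algorithm is described further below) is to keep a running byte credit, so that the sequence of pass/drop decisions over the atomic requests converges to the prescribed percentage q:

```python
class PassRateFilter:
    """Forward roughly a fraction q of the incoming atomic request bytes.

    A byte credit grows by q * size for every arriving request; a request
    is forwarded only when a full message worth of credit is available,
    so the forwarded volume converges to q of the incoming volume."""
    def __init__(self):
        self.credit = 0.0

    def offer(self, size, q):
        """size: request size in bytes; q: Q-algorithm output, 0 <= q <= 1.
        Returns True to broadcast the request, False to drop it."""
        self.credit += q * size
        if self.credit >= size:
            self.credit -= size
            return True
        return False
```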
The 'RR-algorithm & OFC block' has to assemble the outgoing packets from the messages in the hop-layered request buffers and in the response buffer. Since these messages have finite size, typically it is impossible (and not really necessary) to assemble the exactly 512-byte packet or to achieve the precise fair bandwidth sharing between the logical streams coming from different buffers as defined in [1] within a single TCP packet. Thus it is necessary to introduce the algorithms that would define the packet-filling and packet-sending procedures in case of the finite message size. These algorithms should follow the general guidelines described in [1], but at the same time they should be able to work with the (possibly quite large) finite-size messages. That means that these algorithms should achieve the general flow control and the bandwidth sharing goals and at the same time should not introduce the major problems themselves. For example, the algorithms should not make the connection latency much higher than the latency that is inevitably introduced by the presence of the large 'atomic' messages.
To summarize, the algorithms required in the finite-size message case can be roughly divided into three groups:
- the packet-filling and packet-sending algorithms of the Outgoing Flow Control block, including the connection bandwidth estimate;
- the fair bandwidth-sharing algorithms, which define the packet layout and the bandwidth distribution between the logical sub-streams;
- the averaging algorithms of the Q-algorithm, which bring the data flow averages to the levels required by the continuous flow control equations.
These algorithm groups are described below:
The Outgoing Flow Control block algorithm [1] suggests that the packet with messages should have the size of 512 bytes and that it should be sent at once after the OFC PONG is received, which confirms that all the previous packet data has been received by the peer. In order to minimize the transport layer header overhead, the G-Nagle algorithm has been introduced. This algorithm prevents the sending of a partially filled packet if the OFC PONG has already been received, but the G-Nagle timeout TN (~200 ms) has not yet passed since the last packet-sending operation. This is done to prevent a large number of very small packets from being sent over the low-latency (<200 ms roundtrip time) links.
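A minimal sketch of the resulting send-time check might look as follows (the function and parameter names are assumptions; V0 and TN follow the values given in the text):

```python
V0 = 512    # target packet size, bytes (from the text)
TN = 0.2    # G-Nagle timeout, seconds (~200 ms, from the text)

def may_send(now_s, pong_received, buffered_bytes, last_send_s):
    """Earliest-send check of the OFC block with the G-Nagle rule."""
    if not pong_received:
        return False                    # previous packet not yet confirmed
    if buffered_bytes >= V0:
        return True                     # full packet: send immediately
    return now_s - last_send_s >= TN    # partial packet: wait out TN
```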
This description of the Outgoing Flow Control block operation leaves out many important issues related to the packet size and to the time when the packet should be sent. The rest of this section explains these issues in detail.
The packet size (512 bytes) has been chosen as a compromise between two contradictory requirements. First, it should be able to provide a reasonably high connection bandwidth for the typical Internet roundtrip time (~30-35 kbits/sec @ 150 ms), and second, to limit the connection latency even on the low-bandwidth physical links (~900 ms for the 33 kbits/sec modem link shared between 5 connections).
So this packet size value requirement does not have to be adhered to precisely. In fact, different applications may choose a different packet size value or even make the packet size dynamic, determining it in run-time from the channel data transfer statistics and other considerations. What is important is to remember that the packet size growth can increase the connection latency - for example, the modem link mentioned above can have the latency as high as 1,800 ms if the packet size is 1KByte.
This brings up an interesting dilemma: what if the message size is bigger than 512 bytes? Even if nothing else is transmitted in the same packet, placing just this one message into the packet can lead to a noticeable latency increase. The Gnutella v.0.4 protocol, for example, allows messages of at least 64 KBytes (actually the message size field is 4 bytes long, so formally the messages can be even bigger). Should the OFC block transmit such a message as a single packet, break it down into multiple packets, or just drop it altogether, possibly closing the connection?
In practice the Gnutella servents often choose the third path for practical reasons, limiting the message size to various values (3 KBytes recommended in [1], a 256-byte limit for requests used by some other implementations, etc). But here we will consider the most general situation, when the maximum message size can be several times bigger than the recommended packet size, assuming that the large messages are necessary for the application under consideration. It is easier to drop the large messages if the GNet application does not require them than to reinvent the algorithms intended for the large messages if it does.
So the first choice to be made is whether to send a large message in one packet or to split it between several packets. Note that the 'packets' we are discussing here are the packets in terms of TCP/IP, not in terms of the OFC block, which tries to place the OFC PING as the last message in every packet it sends. Since TCP is a stream-oriented protocol that tries to hide its internal mechanisms from the application-level observer, as far as the application code is concerned, this OFC PING is the only (semi)reliable sign of the end of the sent data block. (Actually the peer might lose it, and a PING retransmission might be required.) For this reason, throughout this document the sequence of data bytes between two OFC PINGs, including the second one of them, is referred to as a 'packet' - formally speaking, the application-level code cannot be sure about the real TCP/IP packets used to transmit that data. The packets in terms of the TCP/IP protocol are referred to as 'TCP[/IP] packets'.
Of course, when the TCP Nagle algorithm is switched off (as recommended in [1]), typically the send() operation performed by the OFC block really does result in a TCP/IP packet being immediately sent on the wire. However, this is not always the case. It might so happen that for reasons of its own (the absence of an ACK for the previously sent data, an IP packet loss, a small data window, etc) the TCP layer will accept the buffer from the send() command, but won't actually send it at once. When this buffer is actually sent, it might go into the same TCP packet as a previous or a subsequent buffer. If the OFC block does not break messages into smaller pieces, this is impossible, since the OFC block performs no sending operation until the previous one is confirmed by the PONG from the peer. But if a large message is sent in several 512-byte chunks, it can be the case - several of these chunks can be 'glued together' by the TCP layer into a single TCP packet.
On the other hand, when a very large (several kilobytes) message is sent in a single send() operation, the TCP layer can split it into several actual TCP/IP packets, if the message is too big to be sent as a single TCP/IP packet.
So the decision we are looking for here is not final anyway - the TCP layer can change the TCP/IP packets' layout, and the issue here is what would be the best way to do the send() operations, assuming that typically the TCP layer would not change the decisions we wish to make if the Nagle algorithm is switched off.
Assuming for a second that the actual TCP/IP packet layout corresponds precisely to the send() calls we make in the GRouter, let's ask ourselves a question: what are the advantages and disadvantages of both approaches?
On one hand, sending a big message in a single packet would undoubtedly result in higher connection bandwidth utilization when the OFC algorithm is used. However, this might cause the connection latency to increase and open the way for the big-packet DoS attack. Besides, if the higher connection bandwidth utilization is desirable, it is better to do it in a controlled way - by increasing the packet size from 512 bytes to a higher value instead of relying on the randomly arriving big messages to achieve the same effect. It is also important to remember that in many cases the higher bandwidth utilization can have a detrimental effect on the concurrent TCP streams (HTTP up/downloads, etc) on the same link, so it might be undesirable in the first place.
So the recommended way is to split the big message into several packets. But this might have some negative consequences in the context of the existing network, too - for example, some old Gnutella clients seem to expect the message to arrive in a single packet, and a message that has been split into several packets might cause them to treat it incorrectly. Even though these clients are obviously wrong, if there are enough of them in the network, it might be a cause for concern. Fortunately this is just a backward compatibility problem in the existing Gnutella network, and in this case there is another way to deal with it. Since the Gnutella network message format is clearly documented, it might be a good idea to split the big incoming message into several smaller messages of <=512 bytes each.
In fact, such a solution (when it is possible) is an ideal way of dealing with big messages. When the big message is split into several messages, it becomes possible to send other messages between these on the same TCP connection - not just on the same physical link, as is the case when the big message is merely split into several TCP packets. This would minimize the latency not only for the different connections on the same physical link, but also for the connection used to transmit such a message. For example, the requests being sent on the same connection would not have to wait until the end of the big message transfer, but could be sent 'in the middle' of such a message. As a side benefit, an attempt to perform the 'big message' DoS attack would be thwarted by the Response prioritization block in Fig. 2. The resulting sub-messages with a high response volume would be shifted to the response buffer tail, where they might even be purged by the buffer timeout procedure if the bandwidth were not enough to send them.
To summarize, the GRouter should try to break all the messages into small (<=512-byte) messages. If this is not possible, it should send the big unbreakable messages in <=512-byte sending operations (TCP packets), unless that is de facto impossible due to the backward compatibility issues on the network. Since it is impossible to append the OFC PING to such a packet (it would be in the middle of the message), these TCP packets should be sent without waiting for the OFC PONGs, and the OFC PING should be appended to the last packet in the sequence. The GRouter should never send messages bigger than some limit (3 KBytes or so, depending on the GNet application), dropping such messages as soon as they are received.
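A sketch of this chunked sending of an unbreakable message (assuming a blocking TCP socket; the 512-byte chunk size follows the text, everything else is illustrative):

```python
def send_unbreakable(sock, message, ofc_ping, chunk=512):
    """Send one atomic message in <=512-byte send operations.

    Intermediate chunks carry no OFC PING (it cannot sit in the middle of
    a message) and are sent without waiting for OFC PONGs; the trailer
    PING is appended to the last chunk, closing the OFC packet."""
    for start in range(0, len(message), chunk):
        piece = message[start:start + chunk]
        if start + chunk >= len(message):
            piece += ofc_ping           # last chunk: append the trailer PING
        sock.sendall(piece)
```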
The related issue is the GRouter behavior towards the messages that cause the packet overflow - when the message to be placed next into the non-empty packet by the RR-algorithm makes the resulting packet bigger than 512 bytes. Several actions are possible:
First, the message sending can be postponed and the packet of less than 512 bytes can be sent.
Second, the message can be placed into the packet anyway, and the packet, which is bigger than 512 bytes can be sent.
And third, n exactly 512-byte packets (where n>=1) can be sent with the last message head and no OFC PINGs; then a packet with the last message tail and OFC PING should immediately follow this packet (or packets).
The general guideline here is that (backward compatibility permitting) the average size of the packets sent as a result should be as close to 512 bytes as possible. If we designate the volume of the packet data before the overflowing message as V1, the size of this message as V2, and the desired packet size (512 bytes in our case) as V0, we arrive at the following average packet size values Vavi:
In the first case, Vav1 = V1. (2)
In the second case, Vav2 = V1 + V2. (3)
And in the third case, Vav3 = (V1 + V2) / (n + 1). (4)
So whenever this choice presents itself, all three (or more, if V2 is big enough to justify n>1) Vavi values should be calculated, and the method that gives the lowest value of abs(Vavi - V0) (or some other metric, if found appropriate) should be used.
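Assuming the reconstruction of (2-4) above, the choice can be made mechanically; in this sketch the rule for picking n is a simplification:

```python
def overflow_choice(v1, v2, v0=512):
    """Return the sending method whose average packet size is nearest v0.

    v1: bytes already in the packet; v2: size of the overflowing message.
    The averages follow the reconstructed Eq. (2-4) above."""
    n = max(1, (v1 + v2) // v0)         # number of full v0-byte packets
    options = [
        ('postpone', v1),               # Eq. (2): send the short packet now
        ('oversize', v1 + v2),          # Eq. (3): one packet bigger than v0
        ('split', (v1 + v2) / (n + 1)), # Eq. (4): n full packets plus a tail
    ]
    return min(options, key=lambda o: abs(o[1] - v0))
```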
It has been already mentioned that the packet (in OFC terms) should not be sent before the OFC PONG for the previous packet 'tail PING' arrives. That PONG shows that the previous packet has been fully received by the peer. Furthermore, if the PONG arrives in less than 200 ms after the previous sending operation and there's not enough buffered data to fill the 512-byte packet, this smaller packet should not be sent before this 200-ms timeout expires (G-Nagle algorithm).
However, these requirements are introduced by the OFC (Outgoing Flow Control) block [1] for latency minimization purposes and define just the earliest possible sending time. In reality it might be necessary to delay the packet sending even more. The reason for this is that the sent packet size and its PONG echo time are the only criteria that can be used by the upstream algorithm blocks (the RR-algorithm and the Q-algorithm) to evaluate the channel bandwidth, which these blocks need in order to operate. No other data is available for that purpose, and even though it might be possible to gather various channel statistics, such data would be extremely noisy and unreliable. Typically multiple TCP streams share the same connection, and it is very difficult to arrive at any meaningful results under such conditions. In fact, in the absence of a bandwidth reservation mechanism (like the one defined by the RSVP protocol) in the TCP layer of the network stack this task seems to be just plain impossible. Any amount of statistics can be made void at any moment by the start of an FTP or HTTP download by some other application not related to the GRouter.
When the packets have the full 512-byte size, it is possible to approximate the bandwidth by the equation:
B = V0 / Trtt, (5)
where B is the bandwidth estimate, V0 is the full packet size (512 bytes) and Trtt is the GNet one-hop roundtrip time, which is the interval between the OFC packet sending time and the OFC PONG (reply to the 'trailer' PING of that OFC packet) receiving time.
Even though this bandwidth estimate is not very accurate and varies wildly, it is still possible to use it. It can be averaged over large time intervals (in case of the Q-algorithm) or used indirectly (when the bandwidth sharing is calculated in terms of the parts of the packet dedicated to the different logical sub-streams, in case of the fair bandwidth-sharing block).
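For example, a simple exponentially smoothed form of the estimate (5) might look like this (the smoothing constant alpha is an illustrative assumption, not a value from [1]):

```python
class BandwidthEstimator:
    """Exponentially smoothed form of the estimate B = V0/Trtt (Eq. (5))."""
    def __init__(self, v0=512, alpha=0.1):   # alpha is illustrative
        self.v0, self.alpha, self.b = v0, alpha, None

    def on_pong(self, trtt_s):
        """Call when the OFC PONG for a full packet arrives."""
        sample = self.v0 / trtt_s            # noisy single-packet estimate
        if self.b is None:
            self.b = sample
        else:
            self.b += self.alpha * (sample - self.b)
        return self.b
```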
The situation becomes more complicated when there's not enough data to fill the full 512-byte packet at the moment when this packet can be already sent from the OFC block standpoint. Let us consider the model situation when the total volume of requests passing through the GRouter is negligible (each request causes multiple responses in return). Then the connection bandwidth would be used mostly by the responses, and the Q-algorithm would try to bring the bandwidth used by responses to the B/2 level, as shown in Fig. 3:
Fig. 3. The bandwidth layout with a negligible request volume.
In order to do that, the Q-algorithm is supposed to know the bandwidth B - otherwise it cannot judge how many requests it should broadcast in order to receive the responses that would fill the B/2 part of the total bandwidth. Let's say that somehow this goal has been reached and the data transfer rate on the channel is currently exactly B/2. Now we want to verify that this is really the case by using the observable traffic flow parameters, and maybe make some small adjustments to the request flow if B changes over time. If the volume of request data were enough to fill the 'empty' part of the bandwidth in Fig. 3, then (5) could be used to estimate the total bandwidth B. The packet volume would then be more or less equally shared between the requests and responses, and we would try to keep the amounts of request and response data in the packet equal by varying the request stream. (Not the request stream in this packet, but the one in the opposite direction, which is not shown in Fig. 3.)
But since there are virtually no requests, in the state of equilibrium (constant traffic stream and roundtrip time) we have to estimate the full bandwidth B using just the size of the packets with back-traffic (response) data V and the GNet roundtrip time Trtt.
The problem is, it is very difficult to estimate the total bandwidth from that data. If we assume that we are sending packets as soon as the OFC PONG arrives and that the sending rate is b, we arrive at the following relationship between V, Trtt and b:
V = b * Trtt.
Now, how should we decide whether b is less than, more than, or equal to B/2 from that information, if we have no idea what the value of B is? And we need this answer in order to figure out whether to throttle down the broadcast rate, to increase it, or to leave it at the same level (Eq. 10 in [1]).
One might expect that if we can effectively change the bandwidth allocation by varying the volume of data in the full (512-byte) packet, we might try to do the same in case of the partially filled packet and estimate the bandwidth B as Bappr = b * V0 / V. However, such an approach is likely to be unsuccessful. The reason for this is that in case of the full packet, its expected average roundtrip time <Trtt> does not change when the packet internal layout is changed; so the response sending rate b is actually related to the full connection bandwidth (5) by the equation:
b = B * V / V0. (7)
This equation can be used only if the packet is full and V is not the packet size, but the size of the response data in this 512-byte packet.
On the contrary, if the packet is just partially filled and V is its total size, its expected roundtrip time Trtt is not constant and might depend on the packet size V. For example, if the connection is sufficiently slow, Trtt might be proportional to V. Then the value of B estimated from (7) as b*V0/V (when V is the total packet size) would give the results that are dramatically different from any reasonably defined total bandwidth B - this estimate would go to infinity as the packet size V goes to zero! In fact, even the state of the equilibrium itself as defined above (constant V, b and Trtt) would be impossible in this case - if Trtt=V/B and V=b*Trtt, then for a constant-rate response stream b the consecutive send/PONG exchanges k give
V(k+1) = b * Trtt(k) = (b/B) * V(k), (8)
which means that for every response rate b lower than the actual connection bandwidth B, the values of V and Trtt decline exponentially over time until the G-Nagle timeout or the zero-data roundtrip time is reached. That might result in very small values of V (packet size) and huge bandwidth estimate values, possibly causing self-sustained uncontrollable oscillations of the request and response traffic defined by the Q-algorithm.
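This collapse is easy to reproduce numerically; the sketch below just iterates the two relationships Trtt = V/B and V = b*Trtt from the text (all numbers are illustrative):

```python
def packet_collapse(B=4096.0, b=1024.0, v=512.0, steps=6):
    """Iterate Trtt = V/B and V = b*Trtt: with b < B the packet size V
    shrinks by the factor b/B on every send/PONG exchange."""
    for k in range(steps):
        trtt = v / B                    # roundtrip of a V-byte packet
        v = b * trtt                    # data accumulated during Trtt
        print(f"exchange {k}: V = {v:7.2f} bytes, Trtt = {trtt * 1000:6.1f} ms")

packet_collapse()   # V: 128, 32, 8, 2, 0.5, 0.125 - exponential decline
```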
For these reasons, it is highly desirable to introduce a controlled delay into the packet-sending procedure in order to evaluate the target channel bandwidth B when the actual traffic sending rate b is less than B. This delay provides the only way to stabilize the packet size V at some reasonable level (V~V0, so that V does not go to zero) when the actual traffic rate b is less than B (as defined by (5), if it were possible to send the full 512-byte packets; this 'theoretical' value of B is not directly observable when the total traffic is low and V<V0. The very fact that B is not directly observable under these conditions is what caused our problems to begin with.)
This delay value (wait time) Tw is defined as the extra time that should pass after the OFC PONG arrival before the packet is actually sent, and is calculated with the following equations:
The equations (9-11) assume that the G-Nagle algorithm is not used (Trtt + Tw >= TN; TN=200 ms); if this is not the case, the G-Nagle algorithm takes priority:
It is easy to see that in case of the full packet (V=V0 and b=B), Tw=0. The delay is effectively used only when it is necessary to do the bandwidth estimate in case of the low traffic (b<B). The equation (10) caps the Tw growth in case of the small packet size.
Then the total theoretical connection bandwidth B is estimated by its approximate value Bappr, which is calculated as:
The full description of the reasons that led to the introduction of Tw and Bappr in the form defined by (9-14) is pretty lengthy and is outside the scope of this document. However, it should be said that unfortunately it does not seem possible to have a precise estimate of B even when a delay is used. The error of Bappr when compared to B as defined by (5) depends on many factors. In short, different forms of the functional relationship between Trtt and V (the shape of the Trtt(V) function) can influence this error significantly. At the same time, it is very difficult to find the actual shape of the Trtt(V) function with any degree of accuracy under real network conditions, and this function's shape can change faster than the statistical methods would find its reasonably precise form anyway.
So the equations (9-14) represent the result of the attempts to find a bandwidth estimate that would produce a reasonably precise value of Bappr in the wide range of the possible Trtt(V) function shapes. The analysis of different cases (different Trtt(V) function shapes, G-Nagle influence, etc) shows that if the Q-algorithm tries to bring the value of b to the rho*B level, the worst possible estimate of B using the equations (9-14) results in a convergence of b to:
b = sqrt(rho) * B, (15)
which for the rho=0.5 suggested in [1] results in b actually converging to the level 0.707*B instead of 0.5*B when the request traffic is nonexistent (as in Fig. 3). Naturally, in the real network at least some request traffic would be present, bringing the actual total traffic closer to its theoretical limit B (as defined in (5)) and making the error even smaller. However, if this 40% increase in the response traffic happens to be a problem under some real network conditions because of the fractal character of the traffic and would cause the frequent response overflows, it is always possible to use smaller values of rho. For example, rho = 0.25 would result in
b <= sqrt(0.25) * B = 0.5 * B
even in the biggest possible error case.
Just to illustrate the equations (9-14) operation, let's have a look at the same shape of the Trtt(V) function as the one considered earlier: Trtt = V / B.
Then the equation (13) would give us the following bandwidth approximation:
and the Q-algorithm would bring the response traffic rate to
b = rho * Bappr. (17)
The response stream with this rate would, in turn, result in the packets of size
V = b * (Trtt + Tw). (18)
Now, since Trtt = V/B, we arrive at
Combining this with (18), we receive
and
First, this result verifies the correctness of substituting equation (9) for Tw into (19) and of using the equation (13) as the basis for (17). And second, it shows that in this case the state of equilibrium (constant V, b and Trtt) is achievable for the traffic, and the response bandwidth error is exactly the one suggested by the equation (15). (This example uses a pretty 'bad' shape of the Trtt(V) function from the Bappr error standpoint - we could have analyzed many cases with a lower or even nonexistent Bappr error, but it is useful to have a look at the worst case.)
Finally it should be noted that the equations (9-14) contain only the packet total size and roundtrip times and say nothing of whether the packet carries the responses, the requests or both. Even though we used the model situation of nonexistent request traffic (Fig. 3) to illustrate the necessity of this approach to the bandwidth estimate, the same equations should also be used in the general case, when the packet carries the traffic of both types. In fact, it can be shown that the error of the Bappr estimate approaches zero regardless of the Trtt(V) function shape when the total packet size V (responses and requests combined) approaches V0 (512 bytes).
The packet layout and the bandwidth sharing between the sub-streams are defined by the Fairness Block algorithms [1]. The Fairness Block goal is twofold:
- to fairly share the outgoing connection bandwidth between the logical streams of requests (forward-traffic) and responses (back-traffic);
- to fairly share the forward-traffic part of the bandwidth between the request streams arriving from different connections.
The first goal is achieved by 'softly reserving' some part of the outgoing connection bandwidth Gi for the back-traffic and the remainder of the bandwidth - for the forward-traffic. The bandwidth 'softly reserved' for the back-traffic is Bi and the bandwidth 'softly reserved' for the forward-traffic is Fi:
Fig. 4. The bandwidth reservation layout.
'Softly reserved' here means that when, for whatever reason, the corresponding stream does not use its part of the bandwidth, the other stream can use it, if its own sub-band is not enough for it to be fully sent out. But if the sum of the desired back- and forward-streams to be sent out exceeds Gi, each stream is guaranteed to receive at least the part of the total outgoing bandwidth Gi which is 'softly reserved' for it (Bi or Fi), regardless of the opposing stream bandwidth requirements. For brevity's sake, from now on we will actually mean 'softly reserved' whenever we apply the word 'reserved' to the bandwidth.
In Fig. 4, the current back-traffic bi is shown to be two times less than Bi, since the Q-algorithm tries to keep the back-stream at that level; however, it can fluctuate and be much less than Bi if the requests do not generate a lot of back-traffic, or temporarily exceed Bi in case of a back-traffic burst. If bi<=Bi, the entire bandwidth above bi is available for the forward-traffic. If the desired back-traffic exceeds Bi, the actual back-traffic bi can be higher than Bi only if the desired forward-traffic from the other connections yi is less than Fi; otherwise, the back-traffic fully fills the Bi sub-band and the forward-traffic fully fills Fi. So the actual forward-traffic stream foi is equal to the desired forward-traffic yi only if either yi<Fi, or yi+bi<Gi; otherwise, foi<yi and some forward-traffic (request) messages have to be dropped.
The method proposed in [1] (Eq. 24-26) to calculate the bandwidth Bi reserved for the back-traffic essentially tries to achieve the convergence of Bi to the optimal value
Bi = Gi - <foi> / 2, (23)
where <foi> is the average forward-traffic rate on the connection.
This optimal value was chosen in such a way as to protect the forward-traffic (requests from other connections) in case of the back-traffic (response) bursts - the bandwidth reserved for the forward-traffic (Fi=Gi-Bi) should be no less than half of the average forward traffic <foi> on the connection. Thus the back-traffic bursts cannot significantly decrease the bandwidth share used by the forward traffic or completely shut off the forward traffic data flow. Similarly, the back-traffic is protected from the forward-traffic bursts - from the equation (23) it is clear that Bi>=0.5*Gi, so at least half of the connection bandwidth is reserved for the back-traffic in any case.
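In the continuous-traffic case the convergence of (23) can be sketched as an exponential moving average of the forward-traffic rate (this assumes the reconstructed form of (23) above; tau stands in for the convergence time defined by Eq. 15 in [1]):

```python
class BackBandwidthReservation:
    """Tracks Bi = Gi - <foi>/2 (the reconstructed Eq. (23)) with <foi>
    kept as an exponential moving average of the forward-traffic rate."""
    def __init__(self, tau_s=10.0):
        self.tau_s = tau_s        # averaging (convergence) time, illustrative
        self.avg_foi = 0.0        # <foi>, bytes/sec

    def update(self, foi, dt_s, gi):
        """foi: current forward rate; dt_s: time since last update; gi: Gi."""
        a = min(1.0, dt_s / self.tau_s)       # discrete EMA step
        self.avg_foi += a * (foi - self.avg_foi)
        return gi - self.avg_foi / 2.0        # Bi; >= Gi/2 whenever foi <= Gi
```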
However, in case of the finite message size, the equation (23) has one problem. Let us consider a 'GNet leaf' structure, consisting of a GRouter and a few neighbors, none of which are connected to anything besides the GRouter. Such a configuration is shown in Fig. 5:
Fig. 5. The 'GNet leaf' configuration.
Here 'Connection i' connects this 'leaf' structure to the rest of the GNet. We will be interested in the traffic passing through this connection from right to left - from the 'leaf' to the GNet. The GRouter Fairness Block controls this traffic. Such a configuration is typical for the various 'GNet reflectors', which act as an interface to the GNet for several servents, or for the GRouter working in a 'pure router' mode. Then the GRouter has no user interface and no search block of its own and just routes the traffic for another servent (or several servents). Typically that configuration would result in a very low volume of request data passing through this 'Connection i' from right to left, since the 'leaf' has just a few hosts.
Because of this, the equation (23) in the GRouter fairness block might bring the value of Bi very close to Gi for that connection. To be precise, the stable value of Fi would be:
Fi = <foi> / 2, (24)
where <foi> is a very low average forward-traffic sending rate. In the continuous-traffic model Fi=const, since this low sending rate <foi> is represented by the fairly constant low-volume data stream. The equation (23) convergence time (defined by the Eq. 15 in [1]) is irrelevant in that case.
The atomic messages (requests) of finite size change this situation dramatically. Then every request represents a traffic burst of a very high instant magnitude (mathematically, it can be described as a delta-function - an infinite-magnitude burst with a finite integral equal to the request size). The equation (23) will try to average the sending rate, but since it has a finite convergence (averaging) time, when the average interval between the finite-size requests is bigger than the convergence time, the plot of Fi versus time will look like this:
Fig. 6. The finite-size request rate averaging.
The plot in Fig. 6 makes it clear that if the average interval between requests is bigger than the equation (23) convergence time, the bandwidth Fi reserved for the requests can be arbitrarily small at the moment of the next request arrival. Since the equation (23) convergence time is not related to the request frequency (which might be determined by the users searching for files, for example), the small frequency of the requests leads to the small value of Fi when the request does arrive on the connection to be transmitted.
So when the request arrives, the bandwidth reserved for it might be very close to zero. If the back-traffic from the 'leaf' does not have a burst at that moment, it would occupy just about one half of the available bandwidth Gi, and the request transmission would not present any problem. But if the back-traffic experiences a burst, the bandwidth available for the request transmission would be just a very small reserved bandwidth Fi. Thus the time needed to transmit the finite-size request might be very large, even if the request would not be atomic. (In that case the start of the request transmission would gradually lower the Bi and this request transmission would take an amount of time comparable to the convergence time of the equation (23)).
However, since the request is atomic (unbreakable) and cannot be sent in small pieces between the responses on the same connection, the delay might be even bigger. In order to make sure that the sending operation does not exceed the reserved bandwidth, the sending algorithm has to 'spread' the request-sending operation over time, so that the resulting average bandwidth does not exceed the reserved value. Since from the sending code standpoint the request is sent instantly in any case, a 'silence period' of length Ts = Vr/Fi (where Vr is the request size) would have to be observed after the request-sending operation in order to achieve that goal. This 'silence period' can be arbitrarily long, because equation (23) decreases Fi in an exponential fashion as the time since the last request arrival keeps growing. If the next request to be sent arrives during this 'silence period' (which is quite likely when Ts grows to infinity), this new request either has to be kept in the fairness block buffers until the back-traffic burst ends, or to be just dropped.
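The length of the 'silence period' is a one-line computation; the sketch below only makes the orders of magnitude visible (the numbers are illustrative):

```python
def silence_period_s(request_bytes, fi_bytes_per_s):
    """Ts = Vr / Fi: the idle time after one atomic request-sending
    operation that keeps the average forward rate within Fi."""
    return request_bytes / fi_bytes_per_s

# A 300-byte request against 10 bytes/sec of reserved forward bandwidth:
print(silence_period_s(300, 10.0))   # -> 30.0 seconds; a request arriving
                                     # within them must be buffered or dropped
```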
Neither outcome is particularly attractive - on one hand, it is important to send all the requests, since the 'Connection i' is the only link between the 'leaf' and the rest of the GNet. And on the other hand, it is intuitively clear that the latency increase due to the new request being buffered for the rest of the 'silence period' is not necessary. After all, the request traffic from the 'leaf' is very low, and it would seem that sending all the requests without delays should not present any problem.
So the fairness block behavior seems to be counterintuitive: if it is intuitively clear that the requests can be sent at once, why does the equation (23) not allow us to do that? To explain this, it should be remembered that the exponential averaging performed by the differential equation (23) (equation (26) in [1]) was designed to handle the continuous-traffic case. This averaging method assumes that the traffic being averaged consists of a very large number of very small and very frequent data chunks, which is clearly not the case in the example above. When the time interval between the requests exceeds the averaging (equation (23) convergence) time, these equations cease to perform the averaging function, which results in the negative effects that we could observe here.
Besides, the Fairness Block equations were designed to protect the average forward-traffic from the back-traffic bursts and the other way around. These equations do nothing to protect the forward-traffic bursts, since it was assumed that it is enough to reserve a forward-traffic bandwidth close to the average forward-traffic sending rate. This approach really works when the forward-traffic messages (requests) are infinitely small. However, as the averaging functionality breaks down with the growth of the interval between requests, and each request is a traffic burst, nothing protects such a request from a simultaneous burst in the back-traffic stream, resulting in a latency increase and possibly in the request loss.
Thus it is clear that the finite-message case presents a very serious problem for the Fairness Block, and something should be done to deal with the situations like the one presented above. In principle, it might be possible to extend the Fairness Block equations to handle the case of the 'delta-function-type' (non-continuous) traffic. However, such an approach is likely to be complicated, so here we suggest a radically different solution.
Let us make both reserved sub-bands (Bi and Fi) fixed:
Fi = Gi / 3, (25)
Bi = 2 * Gi / 3, (26)
and compare the resulting bandwidth layout with the 'ideal' layout under the assumption that such a layout really does exist and can be found.
Obviously the solution proposed in (25,26) is not an ideal one - it does not take into consideration the different network situations, different relationships between the forward- and backward-traffic rates, and so on. Thus it is expected that in some cases such a bandwidth layout would result in smaller connection traffic than the 'ideal' layout, effectively limiting the 'request reach': the servents would be able to reach fewer other servents with their requests and would receive fewer responses in return.
Let's check the maximal theoretical throughput loss for the back- and forward-traffic streams in case of the fixed bandwidth layout (25,26).
The biggest possible average back-traffic (with the Q-algorithm keeping the responses at half of the bandwidth available to them) is
<bi> = Gi / 2, (27)
and the average fixed-bandwidth traffic is
<bi> = Bi / 2 = Gi / 3. (28)
Thus the worst theoretical response throughput loss is about 33%. However, the fixed bandwidth layout is going to be used together with the bandwidth estimate algorithm described in section 6.2 of this document. That algorithm is capable of bringing the back-traffic to the 0.707*Bi level instead of the intended 0.5*Bi (Eq. (15) with rho=0.5) in some cases, so these errors might even cancel each other, possibly resulting in an average back-traffic <bi>~0.47*Gi, which is pretty close to the ideal value.
The biggest possible average forward-traffic (when the back-traffic is negligible and the whole bandwidth is available to the requests) is
<foi> = Gi, (29)
In case of the fixed bandwidth layout, the average forward traffic is limited by the average back-traffic (<foi> <= <Gi - bi>). However, since the average back-traffic should not take more than 1/3 of the whole bandwidth (Eq. (28)), then
<foi> <= 2 * Gi / 3, (30)
which represents a 33% theoretical request throughput loss.
At first glance, one might expect that in the very worst case (the back-traffic errors cancel and <bi>=0.47*Gi), the average forward-traffic would be limited by <foi>=0.53*Gi, meaning that a 47% request throughput loss is possible. However, for the equation (15) to be applicable, the total traffic bi+foi has to be less than Gi. But if this is the case, there are not enough requests to fill the full available bandwidth (Gi-bi) anyway. So the fixed bandwidth layout approach does not limit the request stream-sending rate, and as far as the forward stream is concerned, there are no disadvantages introduced by the fixed bandwidth layout at all.
Thus the worst possible throughput loss for both back- and forward-traffic is about 33% versus the 'ideal' bandwidth-sharing algorithm, assuming that such an algorithm exists and can be implemented. This throughput loss is not very big and is fully justified by the simplicity of the fixed bandwidth sharing. It is also important to remember that this number represents the worst throughput loss - in real life the forward-traffic throughput loss might be less if the response volume is low. Then bi<Bi/2 and the bandwidth available to the forward-traffic is going to be bigger. All these considerations make the fixed bandwidth sharing as defined by (25,26) the recommended method of bandwidth sharing between the request and response sub-streams.
In practice the value of Gi can fluctuate with each packet and is not known before the packet is actually sent, making the values of Bi and Fi also hard to predict. This makes it very difficult to fulfill the bandwidth reservation requirements (25,26) directly, in terms of the data-sending rate. The relationship between the bandwidths of the forward- and back-streams has to be maintained indirectly, by varying the amount of the corresponding sub-stream data placed into the packet to be sent. Naturally, the presence of the finite-size atomic messages complicates this process further, making the precise back- and forward-data ratio in the packet hard to achieve.
Let us start with a simpler task and imagine that the traffic can be treated as a sequence of the arbitrarily small pieces of data and see how the bandwidth sharing requirements (25,26) would look in terms of the packet layout.
The packet to send is assembled from the continuous-space data buffers (Hop-layered request buffers and a Response buffer in Fig. 2) when the packet-sending requirements established in section 6.2 have been fulfilled. To simplify the task even more, let's imagine that we have a single request buffer, so the packet is filled by the data from just two buffers - the request and the response one.
If the total amount of data in both buffers does not exceed the full packet size V0 (512 bytes), the packet-filling procedure is trivial - both buffers' contents are fully transferred into the packet, and the resulting packet is sent, leaving us with empty request and response buffers. In terms of the bandwidth usage, this corresponds to the bandwidth non-overflow case, and if the total amount of data sent is less than 512 bytes, the equations (9-11) show that an additional wait time is required before sending such a packet. This means that the bandwidth is not fully utilized - we could increase the sending rate by bringing the waiting time Tw to zero and filling the packet to its capacity, if we had more data in the request and response buffers.
Looking at the bandwidth reservation diagram in Fig. 4, we see that in such a case (bi+foi<=Gi) the bandwidth reservation limits Bi and Fi are irrelevant. These are the 'soft' limits and have to be used only if the sum of the desired back- and forward-traffic sending rates bi and yi exceeds the full bandwidth Gi.
Of course, even though Bi is not used to limit the traffic, it still has to be communicated to the Q-algorithm of that connection so that it could control the amount of request data it passes further to be broadcast. In order to find Bi, the total channel bandwidth Gi has to be approximated by the Bappr found from (13). Then the Bi estimate is found from (26) as
Bi = 2 * Bappr / 3. (31)
Naturally, this can be done only post factum, after the packet is sent and its PONG echo is received from the peer, but that does not matter - the Q-algorithm equations [1] are specifically designed to be tolerant of the delayed and/or noisy input.
Now let's consider the case when the total amount of data in the request and response buffers exceeds the desired packet size V0 (512 bytes). Since we are still working in the continuous-traffic model, it is clear that the packet size should be exactly V0 and the wait time Tw should be zeroed. And now we face a question - how much data from each buffer should be placed into the packet in order to make the packet exactly V0 in size and satisfy the bandwidth reservation requirements (25,26)?
Let us designate the amount of forward (request) data in the packet as Vf and the amount of back-data (responses) as Vb. Obviously, the two must add up to the full packet size:

Vf + Vb = V0.
After the packet PONG echo returns and the total bandwidth Gi estimate Bappr is calculated from (14), it will be possible to find the value of Bi from (31) as
and the value of Fi as
At the same time (after the PONG echo is received) it will be possible to find the sending rates of the forward- and back-traffic as
after which we would be able to see whether the values of foi and bi exceed the reserved bandwidth values Fi and Bi or not. However, that would be too late - we need this answer before we send the packet in order to determine the desired values of Vf and Vb for it. Fortunately, even before we send the packet, from (34) and (35) it is clear that
which means that if bi=Bi and foi=Fi, then
So using (39,40) we can determine whether the bandwidth reservation requirements (25,26) will be satisfied even before we send the packet. It should be remembered, though, that the bandwidth reservation requirements (25,26) are 'soft'. That is, we can have Vf or Vb exceeding the value defined by (39) or (40), provided that the opposite stream can be fully sent (the amount of data in its Fig. 2 buffer is less than the value defined by the equation (40) or (39), respectively). First, we try to put Vf and Vb bytes of requests and responses into the packet. If some buffer does not have enough data to fill its part of the packet, the data from the opposite buffer is used to pad the packet to the V0 size, provided that there's enough data available in that opposite buffer.
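To make this concrete, here is a minimal Python sketch of the continuous-traffic packet layout logic just described. The exact expressions (39,40) are not reproduced in this text, so the proportional split below (and all the names, such as plan_packet_layout, f_share and b_share) is an illustrative assumption, not the document's normative formula:

    def plan_packet_layout(req_bytes, resp_bytes, v0, f_share, b_share):
        # req_bytes, resp_bytes - data available in the request/response buffers;
        # f_share, b_share      - assumed stand-ins for the reserved shares Fi, Bi.
        if req_bytes + resp_bytes <= v0:
            return req_bytes, resp_bytes         # non-overflow: send everything

        vf = v0 * f_share / (f_share + b_share)  # request target (assumed form of (39))
        vb = v0 - vf                             # response target (assumed form of (40))

        # 'Soft' limits: if one buffer cannot fill its part of the packet,
        # pad the packet to the V0 size from the opposite buffer.
        if req_bytes < vf:
            vf = req_bytes
            vb = min(resp_bytes, v0 - vf)
        elif resp_bytes < vb:
            vb = resp_bytes
            vf = min(req_bytes, v0 - vb)
        return vf, vb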
Then, after the packet is sent and its PONG OFC echo returns, we should calculate the actual value of Bi for the Q-algorithm, using the same equation (31) that we use for the packet with size V<V0.
Now that we have the bandwidth reservation requirements (25,26) translated into the packet volume terms (39,40), we can abandon the continuous-traffic assumption and consider the case of the finite-size atomic messages.
In this case the request and the response buffers contain the finite-size messages, which can be either fully placed into the packet, or left in the buffer (for now, we'll continue assuming that there's just one request buffer - the multiple-buffer case will be considered later). The buffers are already prioritized according to the request hop (in case of the hop-layered request buffer) or according to the summary response volume (in case of the response buffer). Thus the packet to be sent might contain several messages from the request buffer head and several messages from the response buffer head (either number can be zero).
Here the 'packet' means a sequence of bytes between two OFC PINGs - the actual TCP/IP packet size might be different if the algorithm presented in section 6.1 (equations (2-4)) splits a single OFC packet into several TCP/IP ones. Again, we can have two situations - when the total amount of data in both buffers does not exceed the packet size V0 (512 bytes) and when it does.
If both buffers can be fully placed into the packet, there are no differences between this situation and the continuous-traffic space case at all. Since we are fully sending all the available data in one packet, it does not matter whether it is a set of finite-size messages or a continuous-space volume of data - we are not breaking the data into any pieces anyway. So we can just apply the continuous-traffic case reasoning and, as a final step, calculate the Bi for the Q-algorithm using (31).
If, however, the total amount of data in the request and response buffers exceeds V0 and the messages are atomic and have the finite size, typically it would be impossible to achieve the precise forward- and backward-data size values in the packet as defined by (39,40). Thus we have to use the approximate values for Vf and Vb, so that in the long run (when many packets are sent) the resulting data volume converges to the desired request/response ratio:

(41) |  Vf / Vb -> Fi / Bi
In order to achieve that goal, the 'herringbone stair' algorithm is introduced:
This algorithm defines a way to assemble the sequence of packets from the atomic finite-size messages so that in the long run the volume ratio of request and response data sent on the connection would converge to the ratio defined by (41). Naturally, the algorithm is designed to deal with the situation when the sum of the desired request and response sub-streams exceeds the connection outgoing bandwidth Gi, but it should provide a mechanism to fill the packet even when this is not the case.
In order to do that, an accumulator variable acc with an initial value of zero is associated with a connection. At any moment when we need to place another message into the packet, we choose between two candidates (the first messages in the request and response buffers) in the following way:
For both messages the 'probe' accumulator values (accF for forward-traffic and accB for back-traffic) are calculated:
where Sb and Sf are the sizes of the first messages in the corresponding (response and request) buffers. Then the values of abs(accB) and abs(accF) are compared, and the accumulator with the smaller absolute value wins, replaces the old acc value with its accX value, and puts the message of type 'X' into the packet. This process is repeated until the packet is filled. If at any moment when the choice has to be made at least one of the buffers is empty and the accB or accF value cannot be calculated, the message from the buffer that still has data (if any) is placed into the packet. At the same time the acc variable is set to zero, effectively 'erasing' the previously accumulated data misbalance.
The packet is considered ready to be sent according to the algorithm presented in section 6.1 (equations (2-4)). At that point we exit the packet-filling loop but remember the latest accumulator value acc - we'll start to fill the next packet from this accumulator value, thus achieving the convergence requirement (41).
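A minimal Python sketch of this packet-filling loop is shown below. The exact accF/accB probe formulas are not reproduced in this text, so the weighted form used here is an assumption: the weights wf and wb encode the target ratio (41) (wf=Bi, wb=Fi in general; wf=2, wb=1 reproduces the 1/2-tangent ideal line of Fig. 7), and the simple 'size < V0' stop condition stands in for the full packet-sending rules of section 6.1:

    from collections import deque

    def fill_packet(req_buf, resp_buf, acc, v0=512, wf=2.0, wb=1.0):
        # req_buf, resp_buf: deques of message sizes (head = highest priority).
        # acc: accumulator carried over from the previous packet.
        packet, size = [], 0
        while size < v0:
            if req_buf and resp_buf:
                acc_f = acc + wf * req_buf[0]    # 'probe' value for a request step
                acc_b = acc - wb * resp_buf[0]   # 'probe' value for a response step
                if abs(acc_f) <= abs(acc_b):     # the smaller absolute value wins
                    acc, msg = acc_f, ('F', req_buf.popleft())
                else:
                    acc, msg = acc_b, ('B', resp_buf.popleft())
            elif req_buf or resp_buf:
                acc = 0.0                        # one buffer is empty: erase the
                buf = req_buf if req_buf else resp_buf  # accumulated misbalance
                msg = ('F' if buf is req_buf else 'B', buf.popleft())
            else:
                break                            # nothing left to send
            packet.append(msg)
            size += msg[1]
        return packet, acc                       # acc seeds the next packet

The acc value returned here must be passed to the next call - this carry-over is what makes the long-run request/response volume ratio converge to (41).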
Graphically this process can be represented as follows:
Fig. 7. Graphical representation of the 'herringbone stair' algorithm.
The chart in Fig. 7 illustrates the case when both the request and the response buffers have enough messages, so the accumulator does not have to be zeroed, 'dropping' the plot onto the 'ideal', 1/2-tangent line. (This dashed line represents the packet-filling procedure in case of the continuous-space data, when the traffic can be treated as a sequence of the infinitely small chunks). The horizontal thick lines represent the responses, and the line length between markers is proportional to the response message size. Similarly the vertical thick lines represent the requests. The thin lines leading nowhere correspond to the hypothetical, 'probe' accX values, which have lost against the opposite-direction step, since the opposite-direction accumulator absolute value happened to be smaller. Thus every step along the chart in Fig. 7 (moving in the upper right direction) represents the step that was closest to an 'ideal' line with a tangent value of 1/2.
This algorithm has been called the 'herringbone stair' algorithm for an obvious reason - the bigger (losing) accX value probes (thin lines leading nowhere) resemble the pattern left on the snow when one climbs a hill on cross-country skis.
So the basic algorithm operation is quite simple. One fine point, which has not been discussed so far, is the fate of the rest of the data in the request or the response buffer after the packet is sent and it could not accept all the data from the corresponding buffer.
In case of the response buffer the situation is clear: the flow control algorithms try not to drop any responses unless absolutely necessary. That is, unless the response storage delay reaches an unacceptable value (see section 4 for a more detailed explanation of what the 'unacceptable delay value' is). If the time spent by the response in the buffer does reach the timeout limit, the response buffer timeout handler drops such a response, but this is done in a fashion transparent to the packet-filling algorithms described here. No other special actions are required.
The situation with the request buffer is a bit different. This hop-layered buffer was specifically designed to handle a situation when just a small percentage of the requests in this buffer can be sent on the outgoing connection. The idea is that when the GNet has relatively low response traffic, the Q-algorithm passes all the incoming requests to the hop-layered request buffer, since there's no danger of the response overflow. The GNet scalability is then achieved by the RR-algorithm and the OFC block, which sends only the low-hop requests out, dropping all the rest - effectively limiting the 'request reach' radius regardless of the request TTL value and minimizing the connection latency when the GNet is overloaded.
Since on the average all incoming and outgoing connections carry the same volume of the request traffic, in this situation (when the RR-algorithm and OFC block take care of the GNet scalability issues) the average percentage of the dropped requests (taken over the whole GNet) is about (N-1)/N,
where N is the average number of the GRouter connections. So with N=5 links, it can be expected that on the average just about 20% of the requests in the hop-layered request buffer would be sent out and 80% would be dropped.
In case of the continuous-space traffic, we can just clear the request buffer immediately after the packet is sent. This would bring the worst-case request delay on the GRouter to its minimal value, equal to the interval between the packet-sending operations. Unfortunately this is not always possible in the finite-size message case. The reason for this is that when the requests are infinitely small, we can expect the following request buffer layout when we are ready to begin assembling the outgoing packet:
Fig. 8. Hop-layered request buffer layout in the continuous traffic case.
Here the buffer contains a very large number of very small requests, and statistically the requests with every possible hop value would be present. So every time the packet is sent, it would contain all the data with low hops and would not include the buffer tail - the requests with the biggest hop values would be dropped. What is important here is that from the statistical standpoint, it is a virtual certainty that all the requests with very low hop values (0, 1, 2, ...) are going to be sent.
To appreciate the importance of that fact, let us consider the 'GNet leaf' presented in Fig. 5. The 'leaf' servents A, B, C can reach the GNet only through the GRouter. When these servents' requests traverse the 'Connection i' link, they have a hop value of 1. So if the GRouter has a significant probability of dropping the hop=1 requests, it is likely that these servents might never receive any responses from the GNet just because their requests would never reach the GNet in the first place. By the same token, if the GRouter's peer in the GNet (the host on the other side of the 'Connection i') is likely to drop the hop=2 requests, the total response volume arriving back to A, B, C will be decreased. Even if the hosts A, B, C had other connections to the GNet aside from the one to the GRouter, it would still be important to broadcast their requests on the 'Connection i'. Generally speaking, the lower the request hop value, the more important it is to broadcast such a request.
As we move to the finite message size case, we immediately notice two differences: first, the number (though not the total size) of the requests in the hop-layered buffer decreases and the statistical rules might no longer apply. For example, as we start to fill the packet, we might have no requests with hop 0, one request with hop 1, two requests with hop 4 and one request with hop 7. This fact will be important later on, as we move to the multi-source herringbone stair algorithm with several request buffers.
The second difference, which is more important for us here, is that the OFC algorithm might choose to send the packet containing only the responses. Let's have another look at Fig. 7 and imagine that all the messages there (the thick lines between the markers) are bigger than V0 (512 bytes). Then every such message will be sent as a single OFC packet (and maybe multiple TCP/IP packets), which would consist of this big message (request or response) followed by an OFC PING. Essentially, every marker in Fig. 7 will correspond to an OFC packet-sending operation.
Then, if we would clear the request buffer as soon as the response OFC packet is sent, the requests that have arrived since the last packet-sending operation would be dropped and would have precisely zero chance of being sent regardless of their importance in terms of the hop value. In fact, the herringbone stair algorithm can send several 'response-only' packets in a row (see the third 'step' in Fig. 7 - it contains two responses), making it even more probable that the 'important' low-hop request would be lost.
This is why it is important to clear the request buffer only after at least a single request is placed into the packet. The graphical illustration of such an approach is presented in Fig. 9:
Fig. 9. Request buffer clearing algorithm.
This is essentially the plot from Fig. 7, but with ellipses marking the time intervals during which the incoming requests are just added to the request buffer and nothing is removed from it. The chart assumptions are, first, that every message is sent in a single OFC packet, and second, that the physical time associated with the plot marker is the moment when the decision is made to include the message, which begins at the marker, into the packet to be sent. That is, the very first marker (at the lower left plot corner) is when the decision is made to send the first message - the request that is plotted as a vertical line on the chart. The small circle surrounding that first marker means that at this point we can clear the request buffer, removing all the other requests from it.
Then we send a response (a horizontal line), but do not clear the request buffer, since we would risk losing the important requests that could arrive there in the meantime. The request buffer is cleared again only after the herringbone stair algorithm decides to send a request and places this request into the packet (the beginning of the second vertical line). Then the request buffer can be reset again, and the ellipse, which covers the whole first 'step' of the 'stair' in the plot, shows the period during which the incoming requests were being accumulated in the request buffer. At the end of the horizontal line (when the new packet can be sent), all the requests accumulated during the time covered by the ellipse start competing for the place in the packet, and the process goes on with the request accumulation periods represented by the ellipses on the chart.
Note that the big ellipse that covers the third 'step' of the 'stair' is essentially a result of the big third request being sent. If the packet roundtrip time is proportional to the packet size, this ellipse might introduce a significant latency into the request-broadcasting process - the next request to be sent might spend a long time in the buffer. Unless the GNet protocol is changed to allow the non-atomic message sending, such situations cannot be fully avoided. On one hand, the third request was obviously important enough to be included into the packet, and on the other hand, the bandwidth reservation requirements do not allow us to decrease the average bandwidth allocated for the responses, and to send the next request sooner. But at least the 'herringbone stair' and the request buffer clearing algorithms make sure that the important low-hop requests have the fair high chance to be sent within the latency limits defined by the current bandwidth constraints.
Since the finite-size messages can lead to the OFC packets with size exceeding V0 (512 bytes), it might be that we'll have to use equation (14) instead of (13) to evaluate the bandwidth Bi if V>V0. So instead of equation (31) for Bi (as it was the case for the continuous-space traffic), the 'herringbone stair' algorithm uses the following equations to evaluate the bandwidth Bi reserved for the back-traffic:
where V is the OFC packet size produced by the 'herringbone stair' algorithm.
Finally, it should be noted that even when the request buffer clearing algorithm does allow us to remove all the requests from the buffer, this operation should not be performed unless the reset timeout Tr time (~200 ms) has passed since the last buffer-clearing operation. This timeout is logically similar to the G-Nagle algorithm timeout introduced previously - its goal is to handle the case when the big packets are sent very frequently on the low-roundtrip-time links. Then the fact that the requests are kept in buffer for 200 ms does not noticeably increase the response latency, but might improve the request buffer layout from the statistical standpoint, bringing it closer to the continuous-space layout presented in Fig. 8.
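A sketch of this clearing rule might look as follows (the class and method names are hypothetical; only the Tr timeout and the 'clear only after a request was placed into the packet' rule come from the text above):

    import time

    class RequestBufferClearing:
        """Clears the request buffer only right after a request has been
        placed into the outgoing packet, and no more often than once per
        Tr (~200 ms); response-only packets never trigger a clear."""

        T_RESET = 0.2   # reset timeout Tr

        def __init__(self):
            self.last_clear = float('-inf')

        def on_request_placed(self, req_buf, now=None):
            # Call at the moment the 'herringbone stair' algorithm puts a
            # request into the packet being assembled.
            now = time.monotonic() if now is None else now
            if now - self.last_clear >= self.T_RESET:
                req_buf.clear()          # drop the requests that lost
                self.last_clear = now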
Now that we have fully described the 'herringbone stair' algorithm in case of the single request buffer, we can move to the effects introduced by the presence of the multiple GRouter connections and hop-layered request buffers.
When the GRouter connection has multiple request buffers (that is, the GRouter has more than two connections), the basic principles of the packet-filling operations remain the same. The bandwidth still has to be shared between the requests and the responses, the 'herringbone stair' algorithm still plots the 'stair' pattern if there's not enough bandwidth to send all the data - the difference is that now the requests have to be taken from several buffers. This is the job of the hop-layered round-robin algorithm introduced in [1] ('RR-algorithm' block in Fig. 2.)
The RR-algorithm essentially prioritizes the 'head' (highest priority, low-hop) requests from several buffers, presenting a 'herringbone stair' algorithm with a single 'best' request to be compared against the response. The reasoning behind the round-robin algorithm design was described in [1]; here we just provide a description of its operational principles with an emphasis on the finite request size case.
The hop-layered round-robin algorithm operation is illustrated by Fig. 10:
Fig. 10. Hop-layered round-robin algorithm.
The algorithm queries all the hop-layered connection buffers in a round-robin fashion and passes the requests to the 'herringbone stair' algorithm. Two issues are important:
The current maximal and minimal hopDataCount values for all buffers maxHopDataCount and minHopDataCount are maintained by the RR-algorithm. The request is transferred from the buffer by the RR-algorithm only if this buffer's hopDataCount satisfies the following condition:
If this condition is not fulfilled, the buffer is just skipped and the RR-algorithm moves on to the next buffer. This prevents the buffers with large requests from monopolizing the outgoing request traffic sub-band, which would be possible if the requests were transferred from the buffers in a strictly round-robin fashion.
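For illustration, a Python sketch of such a buffer-skipping round-robin pass is shown below. Since the actual hopDataCount inequality is not reproduced in this text, the 'minimum plus slack' condition used here is an assumed stand-in that serves the same purpose - buffers that have already sent noticeably more bytes than the least-served buffer are skipped:

    def rr_next_request(buffers, hop_data_count, slack):
        # buffers        - list of hop-sorted request buffers (deques of sizes);
        # hop_data_count - bytes already taken from each buffer;
        # slack          - assumed tolerance (round-robin rotation of the
        #                  starting index is omitted for brevity).
        non_empty = [i for i, b in enumerate(buffers) if b]
        if not non_empty:
            return None
        min_count = min(hop_data_count[i] for i in non_empty)
        for i in non_empty:
            if hop_data_count[i] <= min_count + slack:
                size = buffers[i].popleft()
                hop_data_count[i] += size   # large requests make the buffer
                return (i, size)            # wait until the others catch up
        return None

With slack set to roughly one maximal message size, a buffer holding a single large request is passed over until the byte counters of the other buffers catch up.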
When the RR-algorithm is used (that is, there is more than one request buffer), the herringbone stair algorithm has to make a choice as to when it should clear all the requests from these several request buffers.
This decision is influenced by pretty much the same considerations as the similar decision in case of the single request buffer, which is described in section 7.3.
So the buffer-clearing algorithm presented in section 7.3 is extended for the multiple-buffer situation. The decision to reset the buffers' contents is done for each buffer individually and the buffer can be cleared no sooner than some request from this buffer is included into the outgoing packet by the 'herringbone stair' algorithm.
Of course, this approach might increase the interval between the buffer resets. For example, if some buffer contains just a single high-hop request, this request can spend a lot of time in the buffer - until some low-hop request arrives there, or until no other buffer contains the requests with lower hop values. But this is not a big problem - we are mainly concerned with the low-hop requests' latency, since these are the requests that are typically passed through by the RR- and 'herringbone stair' algorithms. Even if this high-hop request spends a lot of time in its request buffer before being sent, in practice that would most probably mean that multiple other copies of this request travel along the other GNet routes with little delay. So the delayed responses to that request copy would make just a small percentage of all responses (even if such a request is not dropped), having little effect on the average response latency.
The goal of the Q-algorithm [1] is to make sure that the response flow does not overload the connection outgoing bandwidth, so it limits the request broadcast to achieve this goal, if necessary. Now let us consider the effects that the messages of the finite size are going to have on the Q-algorithm. We are going to have a look at two separate and unrelated issues: the Q-algorithm latency and the response/request ratio calculations.
The Q-algorithm output is defined by the equation (1) or (52) (Eq. (13) in [1]). This equation essentially defines the percentage of the forward-traffic (requests) to be passed further by the Q-algorithm to be broadcast. When the requests have the finite size, the continuous-space Q-algorithm output x has to be approximated by the discrete request-passing and request-dropping decisions in order to achieve the same averaged broadcast rate. When the full broadcast is expected to result in the response traffic that would be too high for the connection to handle, only the low-hop requests are supposed to be broadcast by the Q-algorithm. The high-hop requests are to be dropped. Essentially, the Q-algorithm is responsible for the GNet flow control and scalability issues when the response traffic is high - pretty much as the RR-algorithm and the OFC block are responsible for the GNet scalability when the response traffic is low.
This task is similar to the one performed by the OFC block algorithms described in section 7, which achieve the averaging goal (41) for the packet layout. So the similar algorithms could achieve the Q-algorithm averaging goals. However, it is easy to see that the algorithms described in section 7 require some buffering - in order to compare the different-hop requests, the hop-layered request buffers were introduced, and these buffers are being reset only after certain conditions are satisfied. These buffers necessarily introduce some additional latency into the GRouter data flow, and an attempt to utilize similar algorithms to achieve the Q-algorithm output averaging would also result in the additional data transfer latency for the GRouter.
Thus a different approach is suggested here. Since the fairness block algorithms already use the request buffers, it makes sense to utilize these same buffers to control the request broadcast rate according to the Q-algorithm output. This is possible since both OFC block and Q-algorithm use the same 'hop value' criteria to determine which requests are to be sent out and which are to be dropped. So if the 'Q-block' is added to the RR-algorithm, such a combined algorithm can use the same buffers to achieve the finite-message averaging for both OFC block and Q-algorithm at once. Then the Q-algorithm does not add any additional latency to the GRouter data flow, and its output just controls the Q-block of the RR-algorithm that performs the request rating, comparison and data flow averaging for both purposes.
In order to achieve that, every request arriving to the Q-algorithm is passed to the Request broadcaster (Fig. 2) - no requests are dropped by the Q-algorithm itself. However, before the request is passed to the Request broadcaster, it is assigned a 'desired number of bytes' (desiredBroadcastBytes) value. This is the floating-point number that tells how many bytes out of this request's actual size the Q-algorithm would want to broadcast, if it would be possible to broadcast just a part of the request. Naturally, desiredBroadcastBytes cannot be higher than the request size (since the Q-algorithm output is limited by 100% of the incoming request traffic).
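As a sketch (assuming the assignment simply scales the request size by the current x/f share of the Q-algorithm output; the function name is illustrative):

    def desired_broadcast_bytes(request_size, x, f):
        # x: request rate the Q-algorithm wants to pass for broadcast;
        # f: incoming request traffic rate. The value is capped at the
        # request size, since x cannot exceed 100% of the incoming traffic.
        share = x / f if f > 0 else 1.0
        return request_size * min(1.0, share)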
After that all the request copies are placed into the hop-layered request buffers of the other connections, so that their desiredBroadcastBytes values can be analyzed by the Q-blocks of the RR-algorithms on these connections. The Q-block starts to work when the packet assembly is being started. It goes through the request buffers and calculates the 'Q-volume' for every buffer - the amount of buffer data that the Q-algorithm would want to see sent out.
The RR-algorithm and the Q-block maintain the buffer Q-volume value in a cooperative fashion. The initial buffer Q-volume value is zero. When a new request is added to the buffer, the Q-block adds the request desiredBroadcastBytes value to the buffer's Q-volume. After the request buffer is sorted according to the hop-values of the requests, only the requests that are fully within the Q-volume part of the buffer are available for the RR-algorithm to be placed into the packet or to be dropped when the RR-algorithm clears the request buffer. This buffer layout is illustrated in Fig. 11:
Fig. 11. Request buffer Q-volume and data available to the RR-algorithm.
Only the requests that fully fit within the Q-volume have a chance to be sent out (are available to the RR-algorithm). When the request is removed from the buffer by the RR-algorithm, the buffer's Q-volume is decreased by the full size of this request. Similarly, when the multi-source herringbone stair algorithm clears the request buffer contents, it clears all the requests available to the RR-algorithm, decreasing the buffer's Q-volume correspondingly.
Thus after the RR-algorithm resets the request buffer, the requests available to the RR-algorithm (the gray ones in Fig. 11) are going to be removed from the buffer. The resulting buffer Q-volume value will be the difference between the original Q-volume value and the size of the buffer data available to the RR-algorithm:

(48) |  Qvolume(new) = Qvolume(old) - Vavail
This remaining Q-volume value is called 'Q-credit', since it is used as the starting point for the Q-volume calculation when the Q-block of the RR-algorithm is invoked for the next time. It allows us to 'average' the discrete message-passing decisions, approximating the continuous-space Q-algorithm output over time.
Theoretically, the requests remaining after the RR-algorithm clears the requests available to it (the white ones in Fig. 11) could be left in the buffer and have a chance to be sent later. For example, if the first 'white' request in Fig. 11 (the one that has the Q-volume boundary on it) has a relatively low hop value, it could be sent out in the next OFC packet if the newly arriving requests had higher hop values.
In practice, however, this would result in the increased GRouter latency - such requests would spend more time in the buffer than the interval between the request buffer clearing operations. Since this is something we were trying to avoid in the first place, these requests are removed from the buffer, too - the GRouter latency minimization is considered to be more important than the better statistical layout of the data sent by the GRouter, and the buffering intervals between resets defined by the multi-source herringbone stair algorithm (section 7.4) are assumed to be sufficient for our purposes. When these requests are removed, the buffer Q-volume is not changed, so after the buffer is cleared we have an empty buffer with a Q-volume defined by the equation (48).
The Q-credit value is on the same order of magnitude as the average message size. In fact, if the Q-credit is large, the buffer Q-volume can be bigger than the whole buffer size. This does not change anything - the difference between the Q-volume and the buffer size available to RR-algorithm is still carried as the Q-credit to the next Q-block pass.
Which brings us to an interesting possibility. Let's say a very large request leaves a large Q-credit after the buffer is cleared, and at the same time the average request size becomes small and the incoming request traffic f drops significantly - for example, this can happen when a large-message DoS attack has stopped. Then, regardless of the current Q-algorithm output, it can take us a while to throttle down the sending operations, since we are going to fully send the amount of data equal to this Q-credit value first, and act according to the Q-algorithm output (the x/f value) only after that.
In order to avoid that, the Q-credit left after the buffer reset is exponentially decreased over time with the characteristic time tauAv equal to the characteristic time (56) (Eq. (15), [1]) of the Q-algorithm that supplies the data to this request buffer:

(49) |  Qcredit(t) = Qcredit(t0) * exp(-(t - t0) / tauAv)
This guarantees that regardless of the instant Q-credit size due to an abnormally large request, its value will drop to 'normal' in a time comparable to the Q-algorithm characteristic time, so that the Q-algorithm would retain its traffic-controlling properties.
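Putting the Q-volume and Q-credit bookkeeping together, an illustrative Python sketch might look like this (class and method names are hypothetical; the decay method implements the exponential decrease just described):

    import math

    class QVolume:
        """Illustrative sketch of the Q-volume/Q-credit bookkeeping."""

        def __init__(self, tau_av):
            self.q_volume = 0.0      # starts at zero for a fresh buffer
            self.tau_av = tau_av     # characteristic time (56) of the Q-algorithm

        def on_request_added(self, desired_broadcast_bytes):
            # A new request arrives into the hop-layered buffer.
            self.q_volume += desired_broadcast_bytes

        def on_request_sent(self, request_size):
            # The RR-algorithm moves a request into the outgoing packet:
            # the Q-volume is decreased by the full request size.
            self.q_volume -= request_size

        def on_buffer_cleared(self, available_bytes):
            # (48): after the reset, the remaining value (the Q-credit) is
            # the old Q-volume minus the data available to the RR-algorithm.
            self.q_volume -= available_bytes

        def decay(self, dt):
            # Exponential decrease of the carried-over Q-credit, so that an
            # abnormally large request cannot distort the flow control for
            # longer than roughly the Q-algorithm characteristic time tauAv.
            self.q_volume *= math.exp(-dt / self.tau_av)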
Q-algorithm [1] can be presented as the following set of equations:
Here the variables are:
The variables Q, u, x, Rav, Bav, bAv and tauAv are found from the equations (50-56), and the variables B, b, f, R and tauRtt are supplied as an input.
Furthermore, since equations (50) and (53-55) are the differential equations for the variables Q, Rav, bAv and Bav correspondingly, the system (50-56) requires the initial values for these variables. These initial values are set to zero as the calculations start. As a result, formally speaking, the equation (52) has the zero value for the Rav in the denominator on the first steps, which makes the computation of (52) impossible. In order to resolve that issue, let us notice that as the calculations are started at time t=0, the functions Q(t) and Rav(t) are going to grow as
correspondingly when the value of t is small enough (t->0).
Since from (51) and (57) it is easy to see that u(t)~O(t), we can disregard the small u(t) in (57), which makes it clear that when t is small, the equation (52) can be written as
If t is so small that t<<tauRtt, the instant back-to-forward ratio R(t) represents just a small share of all responses for the requests issued since t=0 - all responses take about tauRtt time to arrive. So R(t)->0 as t->0. On the other hand, B(t) is related to the channel bandwidth and is not infinitesimally small when t->0. Thus the second component in the equation (59) becomes infinitely large as t->0, which makes it possible to write (59) and (52) as

(60) |  x = f
That equation allows us to fully calculate the Q-algorithm output when we just start the calculations and Rav still has its initial value of Rav=0. Simply speaking, that means that when we have not seen any responses yet, we should fully broadcast all the incoming requests f, since we have no way to estimate the response traffic resulting from these requests.
Now let's have a look at the Q-algorithm input variables B, b, f, R and tauRtt.
The back-traffic bandwidth B (B=Bi, where Bi is defined in Section 7) is supplied to the Q-algorithm by the RR-algorithm and OFC block (see sections 6-7, Eq. (13,14), (31) and (45,46)).
The instant traffic rates b and f are directly observable on the connection and can be easily measured. Note that the request traffic rate f is the rate of the requests' arrival from the Internet to the Incoming traffic-handling block in Fig. 2, whereas b is the rate with which the responses arrive to the Response prioritization block from other connections.
So the missing Q-algorithm inputs are the instant response/request ratio R and delay tauRtt. These variables cannot be observed directly and have to be calculated from the request and response traffic streams f and b.
In the continuous-traffic case the response traffic rate b as a function of time can be presented as
(61) |  b(t) = Integral[0..inf] ( Rt(t-tau) * r(tau) * x(t-tau) ) dtau
Here Rt(t) is the 'true' theoretical response/request ratio - its value determines how much response data would eventually arrive for every byte of the request broadcast x. The function r(tau) describes the response delay distribution over time - this normalized function (its integral from zero to infinity is equal to 1) defines the share of responses that are caused by the requests that were broadcast tau seconds ago.
Naturally, both Rt(t) and r(tau) are not known to us and can change rapidly over time. Actually, r(tau) function in (61) should be properly written as r(t-tau, tau) to show that the delay distribution varies over time - the first argument t-tau is omitted in (61) in order to make the physical meaning of that equation more clear.
We cannot predict the future responses, so we do not know the value of the function Rt(t) and the shape of the function r(tau)=r(t, tau) at any given moment t - the behavior of the responses that will arrive at the future moments t+tau is not known to us. All we can do is extrapolate the past behavior of these functions. Thus we can define the Q-algorithm input R(t) as:
(62) |  R(t) = Integral[0..inf] ( Rt(t-tau) * r(t-tau, tau) ) dtau
The equation (62) describes the past behavior of the GNet in an answer to the requests and does not require any knowledge about its future behavior. All the data samples required by (62) are from the times preceding t, so it is always possible to calculate the instant values for R(t).
The practical steps required to calculate R(t) as defined in (62) are presented below.
The instant response/request ratio R(t) is defined by the equation (62). The 'true' theoretical response/request ratio Rt(t) defines how many bytes would eventually arrive in response to every byte of requests sent out at time t. The 'delay function' r(t, tau) defines the delay distribution for the requests sent at time t; this function is normalized - its integral from zero to infinity equals 1.
When these functions are multiplied, the result describes both how much and with what delay tau the response data arrives for the requests sent at time t. In the continuous traffic case this resulting response distribution function might look like the one in Fig. 12:
Fig. 12. The response distribution over time (continuous traffic case).
This sample chart shows the product of two continuous functions: the bell-shaped delay function r(tau)=r(t,tau) and the slowly changing true return rate Rt(t). Note that these two functions are presented separately only for clarity - in real life we can almost never be sure that there won't be any more responses for the request sent at time t, so the precise separate values for Rt(t) and for r(t, tau) can be found only postfactum, long after the request sending time t. Rt(t)*r(t, tau), however, has no such limitation, and as soon as the current time exceeds t+tau, we have all the information needed to calculate this product on the interval [0, tau].
Essentially the equation (62) defines the latest available estimate for the response/request ratio, using the most recent responses. If we plot its integration trajectory in the same (tau, t) space that is shown in Fig. 12, it will look like a straight line with a -45 degree angle that starts at the current time t and delay tau=0:
Fig. 13. Equation (62) integration trajectory in (tau, t) space.
This trajectory represents the latest available values for the Rt(t-tau)*r(t-tau,tau) product - the delayed responses that have arrived exactly at the moment t. This can be thought of as a cross-section of the plot in Fig. 12 with the vertical plane defined by the trajectory in Fig. 13.
In the real-life discrete traffic case, however, the calculation of (62) becomes more complicated. The requests and responses are not sent and received continuously as the infinitely small chunks - all networking operations are performed at the discrete time intervals and involve the finite number of bytes.
If we plotted a real-life discrete traffic response distribution in the same fashion as we did in Fig. 12, we would see a mostly zero plot of Rt(t)*r(t, tau) with a finite number of the infinitely high and infinitely thin peaks (delta-functions). Each such peak at the point (tau,t) would represent a response that has arrived after the delay tau for the request sent at time t. Of course, the infinitely high and infinitely thin peaks are just a convenient mathematical abstraction - their meaning is that when the packet arrives, it happens instantly from the application standpoint, so the instant receiving rate is infinite and the integral of this peak is equal to the packet size in bytes.
The sample distribution of such peaks in the same (tau, t) space as in Fig. 13 is shown in Fig. 14:
Fig. 14. Sample Rt(t)*r(t, tau) peak distribution in (tau, t) space in the discrete traffic case.
On this chart the thin horizontal lines are the 'request trajectories'. These lines start at the tau=0 value when the individual requests are sent at the moment t and continue growing as the time goes on. The black marks on the request trajectories represent the individual delayed responses to these requests. The upper right corner of the chart (above the current latest response line) is empty - only the responses received so far are shown on the chart in order to simulate the realistic situation of R(t) being calculated in real time.
The plot in Fig. 14 clearly shows the difficulty of calculating R(t) in the discrete traffic case: unlike the theoretical continuous-traffic plot in Fig. 12, the integration in equation (62) has to be performed along the trajectory that typically does not have even a single non-zero value of the Rt(t-tau)*r(t-tau, tau) product on it. Even when the R(t) calculation is performed exactly at the moment of some response arrival, the integration trajectory still has just a few non-zero points in it, leaving most of the request trajectories (horizontal lines) outside the integration scope.
The reason for this seeming difficulty is that at any current time tc the only samples of the Rt(t)*r(t, tau) product are the ones available at the moments tj, where tj is the time when the request j has been forwarded to other connections for broadcast. At these times the value of Rt(tj)*r(tj, tau) is defined and available for all delay values of tau not exceeding tc-tj - it is zero most of the time and is a delta-function with some weighting coefficient otherwise. However, at all other times t!=tj the value of the Rt(t)*r(t, tau) product is unavailable. That does not mean that it does not exist, but rather that it is not directly observable. If some request would be broadcast at that time t, that fact would define the value of Rt(t)*r(t, tau) product along this request trajectory.
So the integration suggested by the plot in Fig. 14 has a logical flaw - it attempts to perform an operation (62) designed for a function that is defined everywhere on the (tau,t) plane, using a function that is defined only along the finite number of lines t=tj instead. In order to perform this operation in a correct fashion we need to make the Rt(t)*r(t, tau) product value available not only at the points (tau, t) that correspond to the 'request trajectories', but at all other points too. Given the amount of information we have from observing the GRouter traffic, the only feasible way of achieving that is the interpolation. We have to define this function for all times t!=tj when it is not directly observable, using just the information from the times t=tj.
In order to do that, we can act as if the requests and responses are not sent and received instantly, but gradually with finite transfer rates defined as the message sizes divided by the interval between the requests. Then the request with the size Vfj is not sent instantly at the moment tj, but gradually with a finite rate x[tj, tj+1[= Vfj/(tj+1-tj) defined on the whole interval [tj, tj+1[ (note that the time tj+1 is not included into the interval - the x(tj+1) value is defined by the next request size). Thus the whole range of t is covered by these intervals and x(t) becomes non-zero everywhere. Let us use the index i to mark the responses to the individual request j. Since the response i to the request j is received with the delay tauij, this response will be also delivered gradually over the [tj+tauij, tj+1+tauij[ interval, and if the response size is Vbij, the effective data transfer rate for this response will be bij[tj+tauij, tj+1+tauij[=Vbij/(tj+1-tj).
This traffic-'smoothening' operation preserves the integral characteristics of the data transfers, and defines the Rt(t)*r(t, tau) product for all values of t - not only for t=tj, allowing us to transform the plot in Fig. 14 into the one shown in Fig. 15:
Fig. 15. Rt(t)*r(t, tau) value interpolation and integration in the discrete traffic case.
The vertical arrows in Fig. 15 represent the non-zero values of the Rt(t)*r(t, tau) product and cover the interval [tj, tj+1[ from the request sending time tj up to but not including the next request sending time tj+1. When t=tj+1, the new request data is used. These non-zero values are actually the delta-functions of tau with the magnitude defined by the fact that these delta-functions are supposed to convert the request sending rate x(t) into the response receiving rate b(t) according to the equation (61).
We have already seen that the response i to the request j effectively increases the response rate on the [tj+tauij, tj+1+tauij[ interval by Vbij/(tj+1-tj), and that this increase is caused by the request with rate Vfj/(tj+1-tj) on the interval [tj, tj+1[. In terms of the equation (61), this additional response rate is caused by the Rt(t-tauij)*r(t-tauij, tauij) product multiplied by the x(t-tauij) (equal to Vfj/(tj+1-tj)) and by the infinitely small value dtau, so we can write this response rate increment as

(63) |  Vbij / (tj+1 - tj) = Rt(t-tauij) * r(t-tauij, tauij) * dtau * Vfj / (tj+1 - tj)
or

(64) |  Rt(t-tauij) * r(t-tauij, tauij) * dtau = Vbij / Vfj
This allows us to write the Rt(t)*r(t, tauij) product value on the [tj, tj+1[ interval as

(65) |  Rt(t) * r(t, tau) = Sum[i] ( (Vbij / Vfj) * delta(tau - tauij) ),   t in [tj, tj+1[
where delta(tau-tauij) is a function which is infinite with an integral of 1 when tau=tauij and zero when tau!=tauij.
Equation (65) makes it possible to calculate the R(t) as defined in (62) in the discrete traffic case. The continuous-space integral (62) becomes a sum whose components correspond to the non-zero points on the integration trajectory. In Fig. 15 these non-zero points can be easily seen as the vertical arrows that cross the integration trajectory. Note also that since several requests can be forwarded for broadcast at the same sending time tj, such a group of requests is considered a single request j from the interpolation standpoint. All the replies to this group of requests are considered to be the replies to the request j.
However, even though this straightforward approach to the R(t) computation is possible in principle, it is rather complicated in implementation and might lead to the various Q-algorithm computational errors and decreased code performance. The main problem with this integration method is that it does not take into consideration the reason for the R(t) computation, which is the subsequent exponential averaging (53) and using the resulting Rav value as the Q-algorithm input. Equation (62) allows us to calculate the value of R(t) at any random moment t, which is first, not necessary (ultimately we need only the averaged value Rav for the Q-algorithm), and second, results in a noisy and imprecise R(t) function. In fact, it can be shown that when the time scale is discrete (as it normally is in any computer system), the integration approach illustrated in Fig. 15 leads to a systematic error proportional to the operating system 'time quantum' - the precision of the built-in computer clock.
The Q-algorithm equation (53) requires R(t) that would correctly reflect all the response data arriving within the Q-algorithm time step Tq. The integration presented in Figs. 13-15 effectively counts only the very latest responses; if the Q-algorithm step time is big enough, many of the responses won't be factored into the R(t) calculation as defined in (62), which might be a source of the Rav (and Q-algorithm) errors.
So we need R(t) to be not an 'instant' response/request ratio at time t, but rather some 'average' value on the [t-Tq, t] interval; this 'real-life' R(t) should be related to the Q-algorithm step size Tq, factoring all the responses arriving on this interval into the calculation. In order to do that, we can define the Q-algorithm input R at the current time tc as R(tc, Tq), which is the average value of the R(t) integral (62) over the Q-algorithm step interval [tc-Tq, tc]:
(66) |  R(tc, Tq) = (1/Tq) * Integral[tc-Tq..tc] dt Integral[0..inf] ( Rt(t-tau) * r(t-tau, tau) ) dtau
This integration approach is illustrated in Fig. 16.
Fig. 16. Rt(t)*r(t, tau) integration tied to the Q-algorithm step size.
Here the same response pattern as in Fig. 14 and Fig. 15 is presented together with the Q-algorithm step size Tq. Instead of calculating the value of R(t) as suggested by Fig. 15 and equation (62), here all the responses that have the 'interpolation arrows' inside the two-dimensional integration area (shaded area in Fig. 16) are included into the equation. After the two-dimensional integral is calculated, it is divided by Tq to compute R(t, Tq).
It is important to realize that the integration approaches suggested in Fig. 15 (equation (62)) and Fig. 16 (equation (66)) become identical when the Q-algorithm step size Tq->0. We are not introducing a new definition for R(t) here - we just present the discrete Q-algorithm time case approximation of the same basic function, which in the continuous Q-algorithm time case is defined by the integration along the trajectory shown in Figs. 13-15 (equation (62)). The two-dimensional integration presented in Fig. 16 is necessary because of the finite size of the Q-algorithm step time Tq, and not because of the discrete character of the traffic. Even if the Rt(t)*r(t, tau) product would be similar to the one shown in Fig. 12 and the data would be sent and received continuously in the infinitely small chunks, the two-dimensional integral (66) would still be necessary when Tq>0.
The discrete (finite message size) traffic, however, is the cause of the delta-function appearance in the equation (65) and of the finite-length 'interpolation arrows' in Figs. 15 and 16. So the practical computation of (66) in the discrete traffic case involves the finite number of responses - the ones that have the 'interpolation arrows' at least partly within the shaded integration area in Fig. 16. The value of every sum component is proportional to Vbij/Vfj (see (65)) and to the length of the 'interpolation arrow' segment within the integration area.
Fig. 16 makes it easy to see that the response 'interpolation arrow' crosses the integration trajectory only if this response arrival time tj + tauij is more recent than the current time t minus the Q-algorithm step size Tq and minus the request interval tj+1-tj. So the non-zero components of the sum that replaces (66) in the discrete traffic case must satisfy the condition

(67) |  tj + tauij > t - Tq - (tj+1 - tj)
Introducing the 'response age' variable aij=t-(tj+tauij), we can write this as
(68) | aij < Tq + (tj+1 - tj), | if j is not the last request sent out |
(69) | aij >= 0, | if j is the last request sent out (all its responses are counted). |
These conditions mean that only the relatively recent responses should participate in the R(t) calculation, and the maximal age of such responses should be calculated individually for every request.
Defining the length of the 'interpolation arrow' part that is within the integration area as Sij=Sij(t,Tq) (it is written here as a function of t and Tq to underscore that for every response this value depends on time and on the Q-algorithm step size), from (65) and (66) we can find R(t, Tq) as:
(70) |  R(t, Tq) = (1/Tq) * Sum[i,j] ( (Vbij / Vfj) * Sij(t, Tq) )
It is not difficult to find Sij at any given moment t, so the equation (70) can be actually implemented, giving the correct R value for the Q-algorithm equation (53).
In practice, however, it is not very convenient to use the equation (70). From Fig. 16 it is clear that this sum contains not only the components related to the responses that have arrived during the last Q-algorithm step Tq, but also the components related to the responses received before that. So the responses' parameters (size and arrival time) have to be stored in some lists until the corresponding response ages exceed the age limit (68). On every Q-algorithm step these lists have to be traversed to determine the old responses to be removed, then the new Sij parameters have to be found for the remaining responses and only after that the sum (70) can be found.
This whole process is complicated and time-consuming, so it might be desirable to optimize it. In order to do that, let us notice that as Tq grows, the relevant 'interpolation arrows' have a bigger chance to be fully inside the integration area, and the average Sij value approaches tj+1-tj. And in any case, the 'interpolation arrow' of every response is going to be eventually 'fully covered' by the integration (66) on some Q-algorithm step. Since there are no time gaps between the Q-algorithm steps, the integration areas similar to the one in Fig. 16 cover the whole tau>0 space, and every point on every 'arrow' is going to belong to exactly one Sij(t,Tq) interval.
Further, the equations (66) and (70) were designed to average the 'instant' value of R(t) defined by the equation (62) over the Q-algorithm step time Tq, and for every two successive Q-algorithm steps Tq1 and Tq2

(71) |  R(t, Tq1 + Tq2) = ( Tq1 * R(t - Tq2, Tq1) + Tq2 * R(t, Tq2) ) / (Tq1 + Tq2),
which means that the R value for the bigger Q-algorithm step can be found as a weighted average of the R values for the smaller steps. Let us consider the model situation when there is a single response Vbij and its 'interpolation arrow' falls into two Q-algorithm steps - Tq1 and Tq2, as shown in Fig. 17.
Fig. 17. Single response interpolation within two Q-algorithm steps.
Here the response 'arrow' is split into two parts Sij(t, Tq2) and Sij(t - Tq2, Tq1), so

(72) |  Sij(t - Tq2, Tq1) + Sij(t, Tq2) = tj+1 - tj
In this case the R values for these two Q-algorithm steps Tq1 and Tq2 calculated with the equation (70) are:
(73) | R(t, Tq2) = (Vbij / Vfj) * Sij(t, Tq2) / Tq2, | and |
(74) | R(t - Tq2, Tq1) = (Vbij / Vfj) * Sij(t - Tq2, Tq1) / Tq1. | |
The R value for the compound step Tq1+Tq2 is

(75) |  R(t, Tq1 + Tq2) = ( Tq1 * R(t - Tq2, Tq1) + Tq2 * R(t, Tq2) ) / (Tq1 + Tq2) = (Vbij / Vfj) * ( Sij(t - Tq2, Tq1) + Sij(t, Tq2) ) / (Tq1 + Tq2)
Using (72), we can present (75) as

(76) |  R(t, Tq1 + Tq2) = (Vbij / Vfj) * (tj+1 - tj) / (Tq1 + Tq2),
meaning that as the R value is being averaged over time, it does not really matter whether the response is counted in the sum (70) precisely (according to the Sij value), or the response is just assigned to the Q-algorithm step during which it was received. For example, if we simplify the R calculation and compute the R values on the two Q-algorithm steps above as

(77) |  R(t - Tq2, Tq1) = (Vbij / Vfj) * (tj+1 - tj) / Tq1   and

(78) |  R(t, Tq2) = 0,
the averaged R value on these two steps will be

(79) |  R(t, Tq1 + Tq2) = ( Tq1 * R(t - Tq2, Tq1) + Tq2 * R(t, Tq2) ) / (Tq1 + Tq2) = (Vbij / Vfj) * (tj+1 - tj) / (Tq1 + Tq2),
which is identical to (76). So even though the equations (77) and (78) give us the imprecise values of the integral (66) on the two individual Q-algorithm steps Tq1 and Tq2, this is a very short-term error. The averaged R value on the compound interval Tq1+Tq2 defined by (79) is exactly the one defined by the averaging of the precise R values calculated in (73) and (74).
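A quick numeric check of this equivalence (all numbers illustrative): let Vbij/Vfj = 4, tj+1 - tj = 0.5 s, Tq1 = Tq2 = 1 s, and let the 'arrow' split as Sij(t - Tq2, Tq1) = 0.2 s and Sij(t, Tq2) = 0.3 s. The precise values (73,74) are R(t - Tq2, Tq1) = 4*0.2/1 = 0.8 and R(t, Tq2) = 4*0.3/1 = 1.2, averaging to (0.8 + 1.2)/2 = 1.0. The simplified values (77,78) are 4*0.5/1 = 2.0 and 0, averaging to (2.0 + 0)/2 = 1.0 - the same compound result.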
Now, since the R value is used by the Q-algorithm only as an input to the equation (53) that exponentially averages it with the characteristic time tauAv, we can disregard the short-term irregularities in R and replace the equation (70) by the following optimized equation:
(80) |  R(t, Tq) = (1/Tq) * Sum[responses with tj+tauij in ]t-Tq, t]] ( Vbij * (tj+1 - tj) / Vfj )
Even though the equation (80) is less precise than the equation (70), its precision is sufficient for our purposes when tauAv > tj+1 - tj. At the same time the implementation of the equation (80) is much simpler, requiring less memory and CPU cycles. Only the responses arriving within the latest Q-algorithm step time have to be counted, the complicated Sij calculations do not have to be performed on every Q-algorithm step, and the memory requirements are minimal. Nothing has to be stored on 'per response' basis, and for every request in the routing table, just the value of the (tj+1-tj)/Vfj ratio has to be remembered. Then every arriving response Vbij should increase the sum in the equation (80). When the Q-algorithm step is actually done, this sum should be divided by Tq to calculate R and zeroed immediately after that to prepare for the next Q-algorithm step. This approach also makes it possible to 'spread' the calculations more evenly over the Q-algorithm time step Tq instead of performing all the computations at once, as it would be the case with the equation (70).
Of course, the last request sent out should still be treated in a special way - the next request sending time tj+1 is unavailable for it, so all its responses should be added to the sum (80) when the Q-algorithm step is actually performed. The current time t should be used instead of tj+1 in the equation (80) for this request, since (t-tj)/Vfj would provide the best current estimate of the 1/x(t) value at this point instead of (tj+1-tj)/Vfj that is used as the 1/x([tj, tj+1[) estimate for all other (previous) requests.
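An illustrative Python sketch of this bookkeeping (all names are hypothetical; the routing table is reduced to a dictionary, and the cleanup of expired entries is omitted):

    class ResponseRatioEstimator:
        """Sketch of the optimized R(t, Tq) computation (80) for one connection."""

        def __init__(self):
            self.inv_rate = {}     # request id -> (tj+1 - tj)/Vfj, per (80)
            self.last_req = None   # (id, tj, Vfj) of the last request sent
            self.deferred = []     # responses to the last request, held back
            self.r_sum = 0.0       # running sum of (80)

        def on_request_sent(self, req_id, t_j, vf_j):
            if self.last_req is not None:
                prev_id, t_prev, vf_prev = self.last_req
                ratio = (t_j - t_prev) / vf_prev     # tj+1 is known now
                self.inv_rate[prev_id] = ratio
                for vb in self.deferred:             # count the held-back
                    self.r_sum += vb * ratio         # responses precisely
                self.deferred = []
            self.last_req = (req_id, t_j, vf_j)

        def on_response(self, req_id, vb_ij):
            if self.last_req is not None and req_id == self.last_req[0]:
                self.deferred.append(vb_ij)          # tj+1 not known yet
            elif req_id in self.inv_rate:
                self.r_sum += vb_ij * self.inv_rate[req_id]

        def q_step(self, now, tq):
            # Responses to the last request are added only now, with the
            # current time t standing in for the still-unknown tj+1.
            if self.last_req is not None and self.deferred:
                _, t_j, vf_j = self.last_req
                for vb in self.deferred:
                    self.r_sum += vb * (now - t_j) / vf_j
                self.deferred = []
            r = self.r_sum / tq                      # R(t, Tq) per (80)
            self.r_sum = 0.0                         # zero for the next step
            return r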
The instant delay value tauRtt(t) is the measure of how long it takes for the responses to a request to arrive. The word 'instant' here does not imply that the responses arrive instantly - it just means that this function provides an instant 'snapshot' of the delays observed at the current time t.
Logically this function is a weighted average of the observed response delays tau. 'Weighted' here means that the more response data arrives with the delay tau, the bigger influence this delay value has on tauRtt(t). This is similar to the way the instant response ratio is calculated in (62), so in principle Rt might be just replaced by tau in that equation, leading us to the following equation for tauRtt(t):
(81) |  tauRtt(t) = Integral[0..inf] ( tau * r(t-tau, tau) ) dtau
Unfortunately the previous section (8.2.1) shows that in practice the function r(t, tau) cannot be known to us - we can never be sure that all the responses for some particular request have already arrived, and these future delayed responses might affect the past values of r(t, tau). This happens because by definition the function r(t, tau) is normalized - the integral of r(t, tau)*dtau from zero to infinity is 1. In real-life situations at any current time t we do not see the full response pattern for the request j sent at time tj, but are limited to the responses that have arrived with the delay less than or equal to tau=t-tj. The normalization requirement means that any new responses arriving after that will change the past values of r(tj, tau) too, even though the responses that form this function at the values of tau<t-tj have already been received.
Besides, the equation (81) uses the same integration trajectory as the equation (62) - the one shown in Fig. 13. So even if we would somehow know the precise values of the r(t, tau) function, the integral of r(t-tau, tau)*dtau along this trajectory would not be equal to 1 anyway - the function r(t, tau) is normalized only for the horizontal integration trajectories t=const in the (tau, t) space. Thus the direct calculation of (81) would give us the wrong value of tauRtt when r(t, tau) changes with t, as it normally does.
So what we need is some practically feasible and properly normalized way to average the response delay tau. This amounts to a requirement to have some function to replace r(t-tau, tau) in (81). The proposed solution is to use the Rt(t-tau)*r(t-tau, tau) product for this purpose.
As an averaging multiplier for tau, this function has some very attractive properties: first, its calculation does not require any knowledge about the future data, which means that the future responses won't change the values that we already have.
Second, this function is pretty close to the r(t-tau, tau), differing only by the true response/request ratio value Rt, and it can be argued that this multiplier actually makes sense from the averaging standpoint. For example, the requests with many responses would have stronger influence on the tauRtt, meaning that generally tauRtt would be closer to the average response time for the requests that provide the bulk of the return traffic.
Third, as long as the function used for the tau averaging instead of r(t-tau, tau) in (81) has some defensible relationship to the response distribution pattern r(t-tau, tau) (as the Rt(t-tau)*r(t-tau, tau) product certainly does), it is of secondary importance which particular function is used. The tauRtt(t) variations due to a different averaging function choice can be countered by the appropriate choice of the negative feedback coefficient beta for the equations (50) and (53-55), since the value of tauRtt just controls the Q-algorithm convergence rate and does not affect anything else. In fact, even that role of tauRtt comes into play only when the response bursts with rate b>B are observed. Normally, when there's no response burst and tauRtt is not very big (tauRtt<tauMax), the Q-algorithm convergence speed is limited by the bigger time tauMax anyway, as defined by (56). In practice, being close to r(t-tau, tau), our particular averaging function choice does not require changing beta from its recommended value of 1.0.
And finally, we are calculating the values related to the Rt(t-tau)*r(t-tau, tau) product and its integral anyway when we are calculating R(t) as described in section 8.2.1.
The only unattractive property of Rt(t-tau)*r(t-tau, tau) product as an averaging function is that its integral is not normalized to 1 over the integration trajectory shown in Fig. 13. However, this is easily fixed by explicitly normalizing this product by dividing it by R(t), which is exactly the value of this integral (62) over the integration trajectory in Fig. 13.
So we can present the expression for tauRtt(t) as:
(82)        tauRtt(t) = (1/R(t)) * Integral[0..inf] tau * Rt(t-tau) * r(t-tau, tau) dtau
Applying the same line of reasoning as the one applied in section 8.2.1 to the similar equation (62), in the discrete traffic case we can replace (82) by a finite sum
(83)        tauRtt(t) = (1/R(t)) * Sum[j]( ((tj+1 - tj)/(Tq*Vfj)) * Sum[m]( taujm * Vjm ) )

where Vjm and taujm are the size and the observed delay of the m-th response to the request group j,
in the same fashion as we have replaced (62) by its discrete-traffic representation (80). Here the sum components are calculated in a fashion similar to (80) - in fact, both sums (80) and (83) can be calculated in parallel as the responses arrive, and then the value of R(t) from (80) can be used to normalize the sum in (83) to calculate the tauRtt(t) value.
The same last request treatment rules that were described in section 8.2.1 for the equation (80) apply to the equation (83). All responses to this request should be included into the sum (83) and the current time t should be used instead of the next request sending time tj+1.
Naturally, the equation (83) is inapplicable when R(t)=0. Consider the case when on the average there's less than one response per request j (actually, request group j). This situation is particularly likely to arise when the number of requests in the average request group j is small. Then on the average there's likely to be no non-zero response components in (80) and (83), meaning that both R(t) and the sum in (83) would be equal to zero. In that case the previous value of tauRtt should be used. If no previous tauRtt values are available, that means that the connection was just opened and no requests forwarded by it for broadcast to other connections have resulted in the responses yet. Then we cannot estimate R(t) and tauRtt(t), so the initial conditions described in Section 8.2 (equation (60)) should apply to x(t) and tauRtt=0 should be used in (56).
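To make this bookkeeping concrete, the following minimal sketch (in Python; all names are illustrative, not part of the specification) accumulates the sums (80) and (83) in parallel as the responses arrive and applies the R(t)=0 fallback described above:

    class ResponseStats:
        # Parallel accumulation of the sums (80) and (83) - a sketch.

        def __init__(self):
            self.r_sum = 0.0         # accumulates the sum (80) -> R(t)
            self.tau_sum = 0.0       # accumulates the tau-weighted sum (83)
            self.prev_tau_rtt = 0.0  # reused when no responses were seen

        def on_response(self, v_resp, tau, weight):
            # v_resp - response size in bytes; tau - its observed delay;
            # weight - the (tj+1 - tj)/(Tq*Vfj) factor stored with the
            # request group j in the routing table.
            self.r_sum += weight * v_resp
            self.tau_sum += weight * v_resp * tau

        def finish_step(self):
            # Returns (R, tauRtt) for this Q-algorithm step, then resets.
            r = self.r_sum
            if r > 0.0:
                self.prev_tau_rtt = self.tau_sum / r  # normalization by R(t)
            self.r_sum = self.tau_sum = 0.0
            return r, self.prev_tau_rtt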
When tauRtt(t) is calculated on the basis of just a few data samples (or even a single data sample), the value of tauRtt(t) might have a big variance. Of course, the same would be also true for the R(t) function, but that function is used by the Q-algorithm only after the averaging over the tauAv time period (equation (53)). The tauRtt(t), on the contrary, is used directly in (56), since it is this value that might be defining the averaging interval for all other equations ((50) and (53-55)), and it might be difficult to average it exponentially in a similar fashion.
Fortunately the value of tauRtt is used only when a long response traffic burst is present or when tauRtt>tauMax (56). Otherwise the constant value tauMax (56) defines the Q-algorithm convergence rate, so normally tauRtt is not used by the Q-algorithm at all. But even when it is used by the Q-algorithm, it just defines the algorithm convergence speed, and if the general numerical integration guidelines presented in Appendix B are observed, the big tauRtt variance should not present a problem.
However, the extremely high variance of tauRtt is still undesirable, so it is recommended to calculate tauRtt on the basis of at least 10 response samples or so, increasing the Tq averaging interval in the equation (83) if necessary. This is made even more important by the fact that the equation (83) is the analog of the optimized approximation (80) for R(t) and not of the precise equation (70), which might lead to the higher variance of tauRtt because of this approximate computation. Thus the bigger averaging interval Tq might be desirable, so that the average interval tj+1-tj between requests would be less than Tq, since tj+1-tj<<Tq is the condition required for the approximate solution (80) to converge to the precise solution (70).
Finally it should be noted that the interaction between the Q-algorithm and the RR-algorithm and OFC block described in section 8.1 makes it very difficult to determine whether an individual request was sent out or not. This information would have to be communicated in a complicated fashion from the RR-algorithms of several connection blocks to the Q-algorithm of the connection block that has received the request. In principle it is possible to do so; however, it is much simpler to consider every request passing through the Q-algorithm 'partially broadcast', with the request size equal to

(84)        Vef = Vreq * x(t)/f(t)

where Vreq is the actual request message size, x(t)/f(t) is the Q-algorithm output and Vef is the resulting effective request size. The Vfj value to be used in the equations (80) and (83) is defined as

(85)        Vfj = Sum of Vef over all the requests of the Q-algorithm step j

The effective request size Vef is essentially the 'desired number of bytes' to be broadcast from this request as defined in section 8.1 - that is how many request bytes the Q-algorithm would wish to broadcast if it were possible to broadcast just a part of the request. This value is associated with the request when it is passed to the OFC block. Vfj is the summary desired number of bytes to send on the current Q-algorithm step. This value (or the related (tj+1-tj)/Vfj value) is associated with every request in the routing table and is used in the equations (80) and (83).
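A minimal sketch of this bookkeeping (Python; the function names are illustrative assumptions) might look as follows:

    def effective_request_size(v_req, x, f):
        # Eq. (84): the 'desired number of bytes' to broadcast from the
        # request; x/f never exceeds 1, so Vef never exceeds Vreq.
        return v_req * min(x / f, 1.0) if f > 0.0 else 0.0

    def step_vfj(request_sizes, x, f):
        # Eq. (85): Vfj, the summary desired number of bytes of the
        # current Q-algorithm step.
        return sum(effective_request_size(v, x, f) for v in request_sizes)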
Since the actual requests are atomic and can be either sent or discarded as a whole, this fact also increases the variance of R(t) and tauRtt(t). For example, all the requests forwarded for broadcast on some Q-algorithm step might actually be dropped and thus have no responses, which would result in zero response traffic caused by the forward data transfer rate x(t) on this Q-algorithm step. And all the requests forwarded on the next Q-algorithm step might be sent out and cause response traffic that would be disproportionate to that step's x(t).
This underscores the need to compute tauRtt(t) only when many (much more than one) response data samples are available for the equation (83). Unlike R(t) that is averaged by (53), tauRtt(t) is being averaged only by the equation (83) itself, and the additional variance arising from the atomic nature of the requests has to be suppressed when tauRtt is computed.
This section briefly summarizes the algorithms and architectural decisions introduced in the previous sections:
This section does not introduce any new algorithms - it just describes the reasoning behind some architectural concepts presented in the previous sections of this document.
The GRouter block diagram (Fig. 1 in section 3 of this document) shows the 'Connection 0' block as the special 'virtual' connection that is used, among other things, to provide the 'local' responses to the incoming requests. It processes the requests to the servent and sends back the results (if any). The simplest example of the request is the Gnutella file search request - it initiates the search of the local file system or database and returns the matching filenames (if found) as the search result. Of course, this is not the only imaginable example of a request - it is easy to extend the Gnutella protocol (or to create another one) to deliver 'general requests', which might be used for many purposes other than file searching.
The words 'local file system or database' do not necessarily mean that the file system or the database being searched is physically located on the same computer that runs the GRouter. It just means that as far as the other servents are concerned, the GRouter provides an access point to perform searches on that file system or database - the actual physical location of the storage is irrelevant. The 'Connection 0' block de-couples the request processing logic from the message routing one. This might be especially important when the local search API is implemented as a network API and its throughput cannot be considered infinite when compared to the TCP connections' throughput. In that case it is clearly important to have the flow control logic in the local request-processing code in order to avoid overloading the request-processing engine.
Similarly, it is important to limit the number of locally processed requests when the uplink bandwidth is small, and an attempt to answer all the incoming requests might overload the outgoing communication channel.
The decision to make the interface to the local request processing block a regular GRouter connection that obeys all the connection flow control rules described in this document provides a simple and uniform method of handling the requests. It guarantees that regardless of the servent responsiveness to requests, its bandwidth limitations and the rate of the incoming requests, the response rate from the 'local' request-processing block won't overload the servent outgoing bandwidth.
Further, one of the ways to implement the GRouter is to make it a 'pure router' - an application that has no user interface or request-processing capabilities of its own. Then it could use the regular Gnutella client running on the same machine (with a single connection to the GRouter) as an interface to the user or to the local file system. The decision to handle the locally processed requests through the Connection 0 makes the servents functioning as the 'pure routers' (no Connection 0) logically identical to the servents that use the Connection 0 to access the local request-processing block. This fact gives a great deal of flexibility to the GRouter developer and provides a possibility to implement a wide array of local and remote request processing configurations, knowing that the flow control issues are guaranteed to be handled by the GRouter logic regardless of the chosen architectural solution.
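As an illustration of this uniformity (a sketch only - the names and the interface split are assumptions, not part of this document), the router core could treat the local request-processing block as just another connection:

    class Connection:
        # The uniform interface seen by the router core; both remote peers
        # and the local request-processing block implement it, so the flow
        # control rules apply to all of them identically.
        def send_message(self, msg): raise NotImplementedError
        def poll_messages(self): raise NotImplementedError

    class TcpPeerConnection(Connection):
        # A real peer link with its OFC block, RR- and Q-algorithms.
        ...

    class LocalConnection0(Connection):
        # Same flow control machinery, but the messages go to the local
        # request-processing block instead of the network.
        ...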
However, at first glance the 'Connection 0' algorithm seems to have one serious drawback. When the local request-processing interface (Connection 0) receives the same requests as all other connections, it is clear that only the requests that have been chosen by the Q-algorithms for broadcast (to be forwarded further on the network) have a chance of being answered locally. The consequence of this is that on the identical-node network the highest-hop requests might always be transmitted over the GNet with no effect and for no apparent reason. If, say, the Q-algorithms on all servents forward all the requests with hop 5 and drop all the requests with hop 6, then only the incoming requests with hop=5 will reach the Connection 0 and produce the responses. These requests will be also broadcast, and reach the peer servents as the requests with hop=6 - only to be dropped at once by the Q-algorithms. This seems illogical - due to the request multiplication these hop=6 requests would probably represent the majority of the request traffic, so why send such requests if there won't be any responses anyway?
To answer that, first of all we need to introduce this identical-node network model in a more detailed fashion. Let us imagine an infinite network consisting of identical servents, each having the same number of connections N+1, connected by identical links and synchronously sending out identical one-byte requests at the rate of one request per time step. Further, let's imagine that every request forwarded to the peers by the servent is also locally answered by a Wp-byte response. The fact that the network is infinite allows us to view it as a network with no loops (redundant paths) and, as a result, to use an exponential broadcast multiplication rule.
This 'identical servent model' is not very realistic, but it can be a useful GNet analysis tool - it is very easy to analyze, since the traffic on all connections is identical. It is convenient to write the stable traffic pattern of such a network as a table that shows how much data passes through every network link during a single time step and what the layout of this data is in terms of message hops and ttls. Such a table might look like Table 1 below:
Forward-traffic (requests)                    |  Back-traffic (responses), in Wp units
----------------------------------------------+---------------------------------------------------
Ttl:           *** not relevant ***           |  1  |  2    |  3            | ... |  k+1
Hop:           0  1  ...  k-1      k    ...   |  0  |  0 1  |  0   1   2    | ... |  0   ...  k
Messages:      1  N  ...  N^(k-1)  N^k  ...   |  1  |  N N  |  N^2 N^2 N^2  | ... |  N^k ... N^k
Dropped by
 Q-algorithm:  0  0  ...  0        0    ...   |  *** not relevant -
Dropped                                       |      full response transmission ***
 by OFC:       0  0  ...  0        0    ...   |
Table 1. The 'identical servent model' traffic layout example.
The hop value in the table starts with zero for the message that traverses its first link and is incremented every time the new link is traversed by the message. The ttl value is applicable only to the responses and shows how many total links have to be traversed by these responses to reach the destination (the requestor host). The link between the GRouter and its local request-processing block is not explicitly shown anywhere in the table. It is assumed to be an infinite-bandwidth link to an infinite-speed request processor, so that the arrival of every request that is accepted for broadcast by the Q-algorithm immediately (on the next simulation step) causes the local response.
Note that the hop and ttl values thus defined might be different from the actual protocol implementation binary values - for example, the Gnutella protocol defines ttl in a different and rather complicated way, and the presence of the local request-processing block would complicate matters even more. Here we try to use simple and obvious definitions for these table rows, because an attempt to bring the tables below into compliance with the Gnutella protocol binary fields' meanings would make the issue very difficult to understand for anyone not intimately familiar with the Gnutella protocol binary specification.
The table consists of two parts - the forward-traffic part on the left shows the requests with different hop values that travel through each connection on every time step, and the back-traffic part on the right shows the responses with different ttl values that travel through the same connection. The number of per-hop request bytes grows with the hop value as a power of N. The number of response bytes grows in a similar fashion, but allows for the fact that the response to a k-hop request has ttl=k+1 and traverses k+1 network links before reaching its destination. This is why the responses with the ttl value of k+1 form k+1 columns - the connection sees the same-ttl responses with different hop values that travel to the nodes separated from that connection by a different number of links.
The two bottom rows of the table represent the requests dropped by the servent Q-algorithm when there are too many responses, and by the OFC block when there is an excessive number of requests. The 'Dropped by Q-algorithm' row shows the requests that are dropped upon arrival, so it is related to the hop values of the arriving requests. The 'Dropped by OFC' row, on the contrary, shows the requests that are dropped immediately before being sent, so their hop value is that of the servent's outgoing requests. Table 1 shows the unlimited request propagation case, when no requests are dropped, which explains the zero values in these two rows.
In the general case, the number of responses for each hop with ttl=k+1 is equal to the number of requests with hop=k minus the number of requests with this hop value dropped by the Q-algorithm. The requests that are chosen for broadcast by the Q-algorithm are also forwarded to the infinite-bandwidth local request-processing block and cause the responses to be sent back.
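Since the per-hop counts in these tables follow mechanically from the multiplication rule, they are easy to generate programmatically. Here is a small Python sketch (illustrative names; not part of the model definition) that reproduces the forward-traffic rows of Tables 1 and 2:

    def forward_rows(n, max_hop, q_limit=None):
        # Per-time-step request counts for the identical servent model.
        # q_limit is the hop value whose requests the Q-algorithm drops
        # upon arrival (None = unlimited propagation, as in Table 1).
        messages, dropped_q = [], []
        sent = 1                         # a single hop=0 request per time step
        for hop in range(max_hop + 1):
            if q_limit is not None and hop > q_limit:
                messages.append(0)       # nothing propagates past the limit
                dropped_q.append(0)
                continue
            messages.append(sent)
            if q_limit is not None and hop == q_limit:
                dropped_q.append(sent)   # dropped on arrival, not rebroadcast
                sent = 0
            else:
                dropped_q.append(0)
                sent *= n                # each forwarded request multiplies by N
        return messages, dropped_q

    # Table 1 layout: forward_rows(4, 5)
    # Table 2 layout: forward_rows(4, 5, q_limit=3), i.e. the k=2 case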
Table 1 shows just the general table layout in the case of unlimited request propagation (infinite link bandwidth). Now let us present a similar table for the case when all the hop=k requests are forwarded for broadcast by the Q-algorithm and all the requests with hop=k+1 are dropped. The traffic layout in such a case is shown in Table 2:
Forward-traffic (requests)                        |  Back-traffic (responses), in Wp units
--------------------------------------------------+---------------------------------------------------
Ttl:           *** not relevant ***               |  1  |  2    |  3            | ... |  k+1
Hop:           0  1  ...  k-1      k    k+1       |  0  |  0 1  |  0   1   2    | ... |  0   ...  k
Messages:      1  N  ...  N^(k-1)  N^k  N^(k+1)   |  1  |  N N  |  N^2 N^2 N^2  | ... |  N^k ... N^k
Dropped by
 Q-algorithm:  0  0  ...  0        0    N^(k+1)   |  *** not relevant -
Dropped                                           |      full response transmission ***
 by OFC:       0  0  ...  0        0    0         |
Table 2. The 'identical servent model' traffic layout in 'k-hop limit' case.
As can be seen in Table 2, at some hop value k+1 the infinite request multiplication can cease to be possible. In this particular situation our original assumption was that this is the result of the Q-algorithm operation due to the excessive amount of the response traffic, so Table 2 shows that all the hop=k requests are locally answered and forwarded. When forwarded, the number of requests is multiplied by N again and appears in the hop=k+1 forward-traffic column as N^(k+1). The cell below contains the number of messages dropped on every time step by the Q-algorithm. This number is also equal to N^(k+1), meaning that these requests are dropped by the Q-algorithm immediately upon arrival.
"Dropped by OFC" line is empty (contains just zeroes), which is natural - this scenario assumes that all the dropped requests are dropped by the Q-algorithm. This essentially means that the forward-traffic bandwidth is not a limiting factor, even though with N=4 about 75% of the forward traffic is represented by the useless hop=k+1 requests. The requests are dropped by the Q-algorithm only, since the responses obviously have trouble fitting into the bandwidth reserved for the back-traffic (Wp is large enough). Section 7.1 introduces the constant relationship between the total link bandwidth and the bandwidth reserved for the back-traffic, so even if we would not send a single request with hop=k+1, it would not result in any response traffic increase.
This fact is important: let us imagine that somehow we would design an 'ideal' algorithm for request propagation, defining it as the algorithm that maximizes the individual request reach (the number of hosts that can answer that request). Regardless of this 'ideal' algorithm operation, it would not give us a better reach than the one we already have anyway - the response bandwidth is a limiting factor here, and no algorithm would allow us to receive more responses from any additional hosts.
Now let us consider an opposite situation, when the OFC block limits the request propagation. That is, the Q-algorithm allows us to forward every request (which also means that every arriving request is answered by the local request-processing block if it has an infinite processing bandwidth), but some of the requests are dropped by the "RR-algorithm & OFC block" (Fig. 2) from the request buffers.
This situation might arise when the value of Wp is low enough and the response traffic is not the limiting factor in the request propagation - the request reach is limited by the forward-traffic channel bandwidth.
Since the OFC block drops the requests in the hop-layered fashion, the traffic layout presented in Table 3 could describe that situation:
Forward-traffic (requests)                        |  Back-traffic (responses), in Wp units
--------------------------------------------------+---------------------------------------------------
Ttl:           *** not relevant ***               |  1  |  2    |  3            | ... |  k+1
Hop:           0  1  ...  k-1      k        k+1   |  0  |  0 1  |  0   1   2    | ... |  0  ...  k
Messages:      1  N  ...  N^(k-1)  x        0     |  1  |  N N  |  N^2 N^2 N^2  | ... |  x  ...  x
Dropped by
 Q-algorithm:  0  0  ...  0        0        0     |  *** not relevant -
Dropped                                           |      full response transmission ***
 by OFC:       0  0  ...  0        N^k - x  N*x   |
Table 3. The 'OFC-limited' case traffic layout.
Here the OFC algorithm detects the forward-traffic bandwidth overflow when x requests with hop=k per time step are being sent over the link. All the rest of the hop=k requests (N^k-x per time step) are dropped, as shown in the bottom line of the k-hop column. x requests per step reach the peers and, since the Q-algorithm does not mind forwarding them for broadcast (by our assumptions the Q-algorithm forwards everything), cause the local request-processing block to immediately respond with x replies (Wp*x bytes of response traffic). These replies are shown in the right (response) part of the table in k+1 different-hop copies as they travel across the GNet to the request source.
However, even though the Q-algorithm multiplies the k-hop requests and forwards them to other N connection blocks within the GRouter to be sent as hop=k+1 requests, these requests are dropped by the OFC algorithms within the connection blocks. Our initial assumption is that the OFC algorithms have trouble sending out even the hop=k requests, so the requests with the higher hop value have no chance to be sent out.
Comparing this situation with the 'ideal' broadcast algorithm again, we see that no algorithm could provide the wider request reach than the one we are using. The forward traffic channel is the limiting factor, and the only way to increase the request reach would be to get rid of some requests that are sent over the link but never answered, thus wasting the link bandwidth.
However, it is clear from Table 3 that every request that is sent over the link between servents does result in a response, and no bandwidth is wasted to transfer the requests that won't have a corresponding reply. So our 'Connection 0' algorithm provides the best possible request reach again.
Finally, let us consider the general case, when we do not know in advance which traffic limitation algorithm is involved - maybe the Q-algorithm and the OFC-block can operate at once, each one dropping its share of requests. Indeed, this is something that is possible, and two cases should be considered:
In the first case it is the Q-algorithm that imposes the primary limit on the number of requests, meaning that the requests multiply freely until the Q-algorithm decides that the full broadcast of the N^k requests arriving with hop=k would result in an excessive response traffic.
The traffic layout in such a case is presented in Table 4:
Forward-traffic (requests)                                |  Back-traffic (responses), in Wp units
----------------------------------------------------------+-----------------------------------------------
Ttl:           *** not relevant ***                       |  1  |  2    |  3            | ... |  k+1
Hop:           0  1  ...  k-1      k        k+1           |  0  |  0 1  |  0   1   2    | ... |  0  ...  k
Messages:      1  N  ...  N^(k-1)  N^k      min(N*x,F)    |  1  |  N N  |  N^2 N^2 N^2  | ... |  x  ...  x
Dropped by
 Q-algorithm:  0  0  ...  0        N^k - x  0             |  *** not relevant -
Dropped                                                   |      full response transmission ***
 by OFC:       0  0  ...  0        0        max(0,N*x-F)  |
Table 4. The general-case traffic layout with Q-algorithm as a primary limiting factor.
In Table 4 the column hop=k is the first column that has the request propagation limited. The Q-algorithm drops N^k-x arriving requests with hop=k, passing just x requests further to be broadcast on N connections, which might result in up to N*x requests with hop=k+1 on the next link. This is the maximal possible number of hop=k+1 requests. Now let us introduce the new variable F equal to the part of the bandwidth reserved for the forward-traffic still available after the requests with hop<=k are sent. If F>=N*x, all N*x requests with hop=k+1 can be sent; otherwise no more than F requests are sent and the rest are dropped by the OFC block that detects the bandwidth overflow.
So the number of requests with hop=k+1 is min(N*x, F), and the rest of these requests are going to be dropped by the OFC block (max(0, N*x-F) dropped requests). Regardless of the number of requests with hop=k+1 on the link, none of these requests would be passed further to be broadcast by the Q-algorithm, since the Q-algorithm does not even broadcast all requests with hop=k. Thus the local responses are going to be caused only by the forwarded hop=k requests (x total), as can be seen in the right side of the Table 4.
Now, since the Q-algorithm does limit the request broadcast, this means that the back-traffic channel is fully occupied by the responses (this is the only reason why the Q-algorithm would limit the broadcast). So we can see that regardless of the number of requests with hop=k+1, no 'ideal' algorithm would widen the request reach - the situation is similar to the one presented in Table 2.
In fact, if x in Table 4 is equal to zero, Table 4 becomes effectively equivalent to Table 2 (just with a different k value).
The second case to consider is the OFC block being the primary limiting factor. This case is presented in Table 5.
Forward-traffic (requests)                        |  Back-traffic (responses), in Wp units
--------------------------------------------------+---------------------------------------------------
Ttl:           *** not relevant ***               |  1  |  2    |  3            | ... |  k+1
Hop:           0  1  ...  k-1      k        k+1   |  0  |  0 1  |  0   1   2    | ... |  0  ...  k
Messages:      1  N  ...  N^(k-1)  F        0     |  1  |  N N  |  N^2 N^2 N^2  | ... |  x  ...  x
Dropped by
 Q-algorithm:  0  0  ...  0        F - x    0     |  *** not relevant -
Dropped                                           |      full response transmission ***
 by OFC:       0  0  ...  0        N^k - F  N*x   |
Table 5. The general-case traffic layout with OFC block as a primary limiting factor.
Here the N^k requests to be sent with hop=k do not fit into the remaining forward bandwidth F. So N^k-F requests are dropped and just F requests are sent over the link. When these hop=k requests are received, the Q-algorithm has to decide whether they should be forwarded for broadcast or not. Let's say that the Q-algorithm decides to forward x requests and drop all the rest (F-x). These forwarded requests result in x local responses with ttl=k+1 (as can be seen in the right part of Table 5). They might potentially result in N*x requests with hop=k+1, but this does not happen, since the OFC block drops all of them - it cannot fully send even the requests with hop=k.
If F>x, the Q-algorithm is limiting the request traffic, meaning that it is the response bandwidth that limits the individual request reach. So again, as in Tables 2 and 4, no 'ideal' algorithm would allow us to achieve a wider request reach than the one shown in Table 5 and achieved with the help of the 'Connection 0' (pure router) algorithm.
If F=x, Table 5 becomes identical to Table 3. Then the Q-algorithm does not drop any hop=k requests upon arrival, so they are multiplied and would result in N*x requests with hop=k+1, but since the forward bandwidth is not big enough to send these, they are all dropped. This case has already been analyzed, and it has been shown that it also provides the widest possible request reach.
So this limited-scope modeling shows that the seeming drawbacks of the 'Connection 0' algorithm do not actually result in a decreased individual request reach when such an algorithm is used. The only disadvantage of this algorithm is the excessive request traffic when the response volume is high and the Q-algorithm is used to limit the request propagation. This excessive request traffic, however, occupies only the bandwidth reserved for the request traffic anyway, and in many practically important cases might even be nonexistent. For example, the typical Gnutella file-searching servent has a low response volume and is likely to use mostly the OFC algorithm to limit the request propagation. In that case the 'extra' request traffic present in Tables 2, 4 and 5 would not be present at all and the traffic layout would be similar to the one presented in Table 3.
At the same time the simplicity of the 'Connection 0' or 'pure router' interface to the local request-processing block makes it very valuable from the implementation standpoint. Since some form of the flow control is necessary in this interface in any case, the alternatives to the 'Connection 0' algorithm are likely to be highly complicated. For example, a separate Q-algorithm or its analog would be necessary to assure that the local responses won't overload the outgoing servent links. This separate Q-algorithm would have to interact somehow with the 'normal' Q-algorithms in order to achieve the fair bandwidth sharing between the local and the remote responses. The precise sharing layout would have to depend on the numerous factors - the local interface bandwidth, the probability of the local response and so on, complicating the matters even more.
So the additional request data sent over the bandwidth that is likely to be left unused anyway is a small price to pay for the architectural flexibility and the simplicity of the servent implementation.
This section describes the practical approach to the Q-algorithm (equations (50-56)) result computation. This process includes the numerical integration of several differential equations (50), (53-55) and even though it is pretty straightforward from the computational mathematics standpoint, it is sufficiently complicated to deserve an explicit description.
The ultimate goal of the Q-algorithm is to determine the share x(t) of the incoming request traffic f(t) that should be forwarded to the other connection blocks for the further broadcast. In practical terms that means that for every new incoming request of size Vreq (the one with the GUID that was not encountered before) its desired number of bytes to broadcast is calculated and then the request is passed to the other connections. This desired number of bytes to broadcast is calculated as Vef =Vreq*x(t)/f(t) - the details of this procedure were described in sections 8.1 and 8.2.2 (equation (84)).
So the value of x(t) has to be known at the moments when the network packets arrive on the connection. That makes the intervals between the network packet arrivals the natural choice for the Q-algorithm step size Tq in time domain. When the network packet arrives, the Q-algorithm input parameters B, b, f, R and tauRtt (see section 8.2) have to be calculated. R and tauRtt are calculated according to the algorithms described in sections 8.2.1 and 8.2.2. b and f are the actual observed incoming rates for the data arriving on the connection. b is the total volume of responses that have arrived from the other connections to the response prioritization block (Fig. 2) during the Q-algorithm step time Tq, divided by Tq. f is the total volume of the requests that have arrived to the Q-algorithm block from the duplicate GUID rejection block (Fig. 2) during the time Tq, divided by Tq.
B is the outgoing response bandwidth estimate. According to (26) (section 7.1), it is found as 2/3 of the outgoing connection bandwidth estimate calculated from (13,14) (section 6.2). Unlike the values of f and b, this estimate is performed every time the OFC PONG echo returns for an outgoing packet. So the new value of B becomes available not when the requests are received, but when the OFC PONG is received, which might happen at different time moments. Thus, regardless of the Q-algorithm time step Tq, the equation (55) that calculates Bav from B works in its own time scale (with a different step size Tb) to find the averaged value Bav. The latest available value of tauAv is used by the equation (55) in the process.
After all these Q-algorithm input variables (B, b, f, R and tauRtt) are computed, we can make the Q-algorithm step. This step can be described as calculating the 'new' values (at time t) for the output variables (Q, u, x, Rav, Bav, bAv and tauAv), using the 'old' (at time t-Tq) values of the output variables and the 'new' values of the input variables. Since the equations (50-56) are dependent on each other and several equations can use the same variable (for example, tauAv is used in (50) and (53-55)), the order of the computation might be important from the numerical stability standpoint. Generally we want to use the latest available variable values in any particular equation.
Setting aside the Bav variable that is calculated in a different time scale, and the final Q-algorithm output x, which is calculated from the equation (52) as min(f, Q/Rav) when the 'new' values of Q and Rav are known, we can concentrate on the subset of the output variables. This subset includes Q, u, Rav, bAv and tauAv. In this subset Q depends on u and tauAv, u depends on Q and Rav, Rav depends on tauAv, bAv depends on tauAv, and tauAv depends on bAv. As we start making the step, we have the old values of all these variables from the previous step output or from the initial conditions.
First, we can determine the new value of bAv from (54), using the old value of tauAv (and the input variable b). This new bAv value (together with the latest Bav value) allows us to arrive at the new tauAv value, using (56).
The new value of tauAv makes it possible to find the new value for Rav from (53), using also the input value R. The new value of Rav, in turn, makes it possible to find the new value for u from (51) (using an input variable f and the 'old' value of Q), and that operation allows us to calculate the new Q value from (50). Finally, the equation (52) can be used to calculate the new value for x.
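The computation order described above can be summarized in the following sketch (Python). The expressions standing in for the equations (51) and (56) are placeholders - the document's equations define the real ones - and the default values of beta, rho and tauMax are illustrative assumptions; only the ordering of the updates is the point here:

    from dataclasses import dataclass

    def ema_step(y, z, dt, tau):
        # One implicit ('forward-looking') averaging step in the form (90),
        # discussed later in this section.
        return (y + (dt / tau) * z) / (1.0 + dt / tau)

    @dataclass
    class QState:
        Q: float = 0.0
        u: float = 0.0
        x: float = 0.0
        Rav: float = 1.0
        Bav: float = 1.0   # updated in its own time scale by eq. (55)
        bAv: float = 0.0
        tauAv: float = 1.0

    def q_algorithm_step(s, B, b, f, R, tauRtt, Tq,
                         beta=1.0, rho=1.0, tauMax=10.0):
        s.bAv = ema_step(s.bAv, b, Tq, s.tauAv)        # eq. (54), old tauAv
        # Placeholder for eq. (56): tauRtt matters only during response bursts
        s.tauAv = max(tauMax, tauRtt) if s.bAv > s.Bav else tauMax
        s.Rav = ema_step(s.Rav, R, Tq, s.tauAv)        # eq. (53), new tauAv
        s.u = max(0.0, s.Q - f * s.Rav)                # placeholder for eq. (51)
        if s.u <= 0.0:                                 # eq. (93) form of (50)
            s.Q = ema_step(s.Q, rho * B, Tq, s.tauAv / beta)
        else:                                          # eq. (94) form of (50)
            s.Q -= (beta * Tq / s.tauAv) * (f * s.Rav - rho * B)
        s.Q = min(s.Q, s.Bav)                          # the Q <= Bav limit
        s.x = min(f, s.Q / s.Rav) if s.Rav > 0 else f  # eq. (52)
        return s.x

A production implementation would also apply the sub-stepping rules discussed later in this section; they are omitted here for brevity.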
Note that the order described above is not the only way to calculate the new output variable values. For example, we might start with calculating the new tauAv value from (56), using the old bAv value and the latest Bav value, and only after that use (54) to compute the new bAv. This approach, however, would result in the latest response data b being effectively unused in the tauAv calculation. It would allow us to use the same new tauAv (averaging time) value in all the differential equations (50) and (53-54), but it would also result in a slower Q-algorithm reaction to the response bursts. This averaging time value would be only 'half new' - it would not reflect the latest responses, and for this reason such an approach is not desirable. Even though the Q-algorithm was specifically designed to avoid quick control actions and to change the request broadcast rate x(t) slowly and gradually in order to increase the algorithm stability, it is better not to introduce additional instability (like this 'half-old' tauAv variable) into the algorithm if possible.
The intentionally slow reaction speed of the Q-algorithm also makes it possible to choose a variety of other time steps instead of the one between the two network packet arrival times described above. The developer might find the amount of computations necessary when such a time step is used to be excessive, since the broadcast rate x(t) is likely to change very slowly. So the developer might choose the bigger time step for the Q-algorithm, using the latest computed value of x(t) to determine the request broadcast rate in the meantime.
To a certain extent, such a step size increase is certainly possible - it is just important to be careful and to remember that all the information received during this bigger time step has to be used in the Q-algorithm computations. For example, x(t) should not exceed f(t) (as can be seen from (52)). So if the bigger step time is used, f(t) should be also computed from the data arriving to the connection block over that bigger time. No request can have the desired number of bytes to broadcast (Vreq*x(t)/f(t)) bigger than the request size Vreq, so the ratio x(t)/f(t) should never exceed 1. Similar considerations apply to all the other input variables - for instance, the estimated response channel bandwidth B used in (50) should not be just a latest sample of this variable, but should be the averaged value of the bandwidth estimates observed during the step time.
Regardless of the Q-algorithm step size choice, it is important to remember that the Q-algorithm contains four differential equations - (50), (53), (54) and (55). A similar differential equation (49) controls the Q-block of the RR-algorithm. All these equations are essentially 'averaging' ones - they are used to compute the exponentially averaged values of the quickly changing input variables. The quickly changing character of the input data makes it necessary to be careful when performing the calculation 'step' to find the new value of the output variable.
Let us consider the model differential equation

(86)        dy/dt = - (y - z(t)) / tau

This equation is similar to the equations used by the Q-algorithm and we can use it to illustrate the possible approaches to the averaging of the quickly changing function z(t). Let us use the index i to designate the variable values at time t-dt and the index i+1 to designate the values at time t, at the end of the calculation step of length dt. The simplest way to compute yi+1 is

(87)        yi+1 = yi + (dt/tau)*(zi+1 - yi)
The first problem associated with this solution is that it is numerically unstable when a large time step dt is used. For example, if we set dt=3*tau and z(t)=0, it is easy to see that (87) becomes

(88)        yi+1 = -2*yi

that is, instead of converging to zero, y(t) starts to oscillate with an increasing magnitude. This problem can be avoided if the time step dt is decreased or the 'forward-looking' form of (87) is used:

(89)        (yi+1 - yi)/dt = - (yi+1 - zi+1)/tau

Finding yi+1 from (89), we can write it as

(90)        yi+1 = (yi + (dt/tau)*zi+1) / (1 + dt/tau)
This solution is numerically stable regardless of the dt value, though, of course, its precision drops rapidly as dt grows. For example, setting z(t) = 0 again, we can compare the precise solution

(91)        yi+1 = yi*exp(-dt/tau)

and the numerical solution

(92)        yi+1 = yi / (1 + dt/tau)
It is easy to see that dt=0.1*tau results in the 0.5% error for yi+1. dt=tau gives us the 36% error, and when dt=3*tau, the error grows to 400%.
Thus regardless of whether the equation (87) or (90) is used, the excessive values for dt are not desirable in any case.
Another problem associated with the numerical solution (87) of the equation (86) is the 'overshoot problem'. Due to the fractal nature of the network traffic, the functions averaged by the Q-algorithm normally have a very high variance - in other words, these functions are very 'bursty' and can have very high (potentially unlimited) peaks. On the intuitive level it is clear that since the equation (86) produces an 'averaged' value of the input function z(t), its output function y(t) should converge to the zi+1 value as the time step dt grows. However, since the solution (87) contains the (dt/tau)*zi+1 component, its result yi+1 can 'overshoot' the target value zi+1 as the step size dt grows to dt>tau.
This is a purely numerical effect that is closely related to the already discussed issue of the numerical stability of the solution (87). In fact, it is easy to see that the same approach (90) used to counter the numerical instability of the equation (87) also solves the 'overshoot problem'. As the time step dt grows to dt>>tau, the yi+1 calculated from (90) does converge to zi+1. Of course, this numerical convergence is not the exponential one dictated by (86) - yi+1 changes as zi+1/(1+tau/dt) as dt grows. Even though this convergence is qualitatively correct and cannot, for example, lead to numerical overflows (as is the case with (88)), the convergence rate is still wrong, which underscores the need to choose the time step dt<tau.
If the Q-algorithm averaging time tauAv is very low (or the intervals Tq between the request packet arrivals are very high), it might so happen that Tq>tauAv (or dt>tau in terms of the equation (86)). Then the step interval Tq has to be broken into several smaller sub-steps dt<Tq, and the integration should be performed in several steps. Every sub-step should use the same value of z(t)=zi+1 (whatever that is in the context of the particular Q-algorithm differential equation) and the newly calculated (on the last sub-step) value of the output function y(t). Of course, if the input function z(t) is available in several points inside the integration interval [t-Tq,t], these values can be used instead of the constant value z(t)=zi+1 for the whole integration interval. That would increase the integration precision, but it is typically unnecessary, unless the integration interval Tq is very long, or unless these multiple values of z(t) are available anyway and it costs nothing extra, from the CPU load standpoint, to use them instead of a single value.
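A minimal sketch of such a sub-stepped averaging step (Python; the function name and the 0.4 default ratio are illustrative, following the dt guidance of this section) could be:

    import math

    def averaged_step(y, z_new, Tq, tau, max_ratio=0.4):
        # Integrates the model equation (86) over one Q-algorithm step Tq,
        # breaking it into sub-steps with dt <= max_ratio*tau and applying
        # the stable 'forward-looking' form (90) on every sub-step. z(t) is
        # held at its latest value z_new over the whole interval.
        n_sub = max(1, math.ceil(Tq / (max_ratio * tau)))
        dt = Tq / n_sub
        for _ in range(n_sub):
            y = (y + (dt / tau) * z_new) / (1.0 + dt / tau)
        return y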
The next consideration to have in mind is the relationship between the equations (50) and (51). The equation (50) calculates the new value for Q, using (among other variables) the latest known value for the 'estimated underload' factor u, which, in turn, depends on Q through (51).
That relationship makes the choice of the time step dt especially important for the equation (50), since as the value of Q passes through the f*Rav barrier, the shape of the function u and the right side of the equation (50) change dramatically. If u>0 (Q>f*Rav), the right side of the equation (50) becomes proportional to (rho*B - f*Rav) and no longer depends on Q. Effectively the equation (50) can be treated as two different equations depending on the value of Q:
(93)        dQ/dt = - (beta/tauAv)*(Q - rho*B),      Q <= Bav,   if Q <= f*Rav, and
(94)        dQ/dt = - (beta/tauAv)*(f*Rav - rho*B),  Q <= Bav,   if Q > f*Rav.
The equation (94) is not of the same type as the equation (93). The solution of the equation (94) is a simple linear function of the form Q(t)=A*t+C, so all the considerations presented above for the equation (86) do not apply, and any time step dt can be used in that equation as long as the original assumption Q>f*Rav remains valid. However, since the behavior of the equations (93) and (94) might be very different, we have to keep track of whether Q passes through the f*Rav barrier during the Q-algorithm step Tq, and change the equation, if necessary, in the middle of the Q-algorithm step.
One of these equations ((93) or (94)) should be chosen at the beginning of the time step dt=Tq according to the old value of Q=Qi and to the new values of f and Rav. After that, the equation similar to (87) or (90) can be used to check whether the value of Qi+1 would deviate sufficiently from Qi to make the original choice of the equation (93) or (94) invalid at the end of the interval Tq. If that happens to be the case, the interval Tq should be broken into two at the point t where the original equation choice becomes invalid, and the separate equations ((93) or (94)) should be used in both parts of the interval Tq.
Since all the variables in the equations (93) and (94) except Q (that is, beta, rho, tauAv, B, f and Rav) are effectively constant on the interval [t-Tq,t], four scenarios are possible:
Qi <= f*Rav and Qi > rho*B. In this case Qi+1 < Qi, and the equation (93) is used on the whole time interval [t-Tq,t].
Qi <= f*Rav and Qi <= rho*B. Then the equation (93) is used at the starting point of the interval [t-Tq,t] and Q starts to grow. If Qi+1 > f*Rav when the time step dt=Tq is used, it means that somewhere in the middle of the step Tq the equation (93) has to be replaced by the equation (94). In that case necessarily f*Rav < rho*B - otherwise Q would never reach f*Rav, since it would not exceed rho*B. Thus the equation (94) causes Q to keep growing, and the time point where the equation (93) had to be replaced by (94) is the only equation-replacement point on the interval Tq. So the equation (94) can be used up to the end of the interval (unless Q exceeds Bav, in which case the Q<=Bav limit applies).
Qi > f*Rav and f*Rav <= rho*B. In this case Qi+1 >= Qi (unless it is already limited by the condition Q<=Bav) and the equation (94) is used on the whole time interval [t-Tq,t].
Qi > f*Rav and f*Rav > rho*B. Here the equation (94) is used at the starting point of the interval [t-Tq,t] and Q starts to decrease. If Qi+1 < f*Rav when the time step dt=Tq is used, it means that somewhere in the middle of the step Tq the equation (94) has to be replaced by the equation (93). At that point Q = f*Rav > rho*B, so the value of Q keeps decreasing, and again this equation-replacement point is the only such point on the interval [t-Tq,t].
In other words, in no case should the interval [t-Tq,t] be broken into more than two parts, and no equation-replacement procedure can result in Q(t) changing direction on this interval.
There is another way to achieve the same goal. When Q is changing towards f*Rav, we can perform the numerical integration of the equation (50) with step Tq as a sequence of smaller steps dt, each of which would result in just a small change in the value of Q (for example, no more than 0.1*f*Rav). All the Q-algorithm input parameters should remain the same on every sub-step within one step, but the value of u should be recalculated at the end of each sub-step according to (51). Since the equations (93) and (94) are equivalent when Q=f*Rav, the potential ~10% imprecision at the moment when the equations are replaced should not cause any major numerical errors.
The requirement dQ <= 0.1*f*Rav leads to a sub-step dt size limit for the equation (50). From the equations (50) and (87) we can see that when u=0 (which is equivalent to using the equation (93) for this sub-step), this requirement is equivalent to:

(95)        dt <= 0.1*f*Rav*tauAv / (beta*(rho*B - Q)),   if Q < rho*B
If the equation (90) is going to be used for the numerical integration of (50), this condition would be a little different:

(96)        dt <= 0.1*f*Rav*tauAv / (beta*(rho*B - Q - 0.1*f*Rav)),   if Q < rho*B
The expression (96) is valid only when (rho*B - Q) > 0.1*f*Rav. Otherwise, no sub-step time can possibly result in the excessive Q value change (dQ never exceeds 0.1*f*Rav), so any step size can be used. The additional condition Q<rho*B in both (95) and (96) is used to limit the sub-step size only if Q grows and can potentially exceed f*Rav; if this is not so, any step size dt can be used.
Naturally, regardless of whether the equation (87) and condition (95) or equation (90) and condition (96) are used, the usual step size conditions still apply. dt should be less, and preferably much less, than tauAv.
When u>0, which is equivalent to using the equation (94) for this sub-step, from (50) and (51) we can arrive at the following requirement for the sub-step dt size limit:

(97)        dt <= (Q - f*Rav)*tauAv / (beta*(f*Rav - rho*B)),   if f*Rav > rho*B
Here Q(t) is a linear function and no other conditions apply to the dt sub-step size. The additional condition f*Rav>rho*B is used to limit the step size only if Q decreases and can potentially drop below f*Rav; if this is not so, any step size dt can be used. Q-f*Rav is used instead of 0.1*f*Rav in (97) since Q(t) is linear and the value Q=f*Rav can be reached in a single sub-step (if the step size Tq is big enough to reach it on this Q-algorithm step at all). Otherwise, small values of f*Rav might result in a very small sub-step size dt and lead to an excessive number of computations necessary to make just a single Q-algorithm step Tq.
The expressions (95-97) can become formally undefined when the denominators in these expressions are zero, or can lead to dt=0 when f, Rav or tauAv are equal to zero. However, generally such situations arise when the same equation ((93) or (94)) has to (or can) be used during the whole interval [t-Tq,t] anyway. For example, f*Rav=rho*B when u>0 means that Qi+1=Qi. If f*Rav=0, we can assume that u>0 and effectively use the equation (94) on the whole interval [t-Tq,t]. And as was already mentioned, when u=0, rho*B-Q<=0.1*f*Rav and the equation (50) is integrated with the method (90), no sub-step time can possibly result in an excessive Q value change. Thus even if the change of the equation is required somewhere inside the interval [t-Tq,t], we can still keep using the same equation and it won't cause a major numerical error.
So whenever such a problem arises and the time step value dt formally becomes zero or undefined, we can safely assume that any value of dt will suit our needs (of course, within the normal numerical stability- and precision-related step size limits). Actually very small dt values might be more dangerous than zero or undefined ones. The arbitrarily small sub-steps might result in a very high computational volume required to do just a single Q-algorithm step Tq - this is why the additional conditions are introduced in the expressions (95-97). These additional conditions (Q<rho*B for (95,96) and f*Rav>rho*B for (97)) limit the applicability of the corresponding expressions and impose the time step limit dt on the calculations only when Q is moving towards f*Rav boundary and can potentially cross it. In addition, the Q-f*Rav multiplier in (97) is used to make sure that if u>0 just a single sub-step would be required to make a single Q-algorithm step Tq or to cross the Q=f*Rav boundary (whichever happens first).
This multi-step approach to the integration of the equation (50) is more computationally intensive than the explicit integration interval [t-Tq,t] division and multiple-case analysis suggested earlier. However, it is simpler to implement and can be preferable when the load that is placed on the CPU by this method is not excessive.
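The multi-step method can be sketched as follows (Python; the 0.1*tauAv base step, the safety floor on dt and the names are illustrative assumptions on top of the conditions (96) and (97)):

    def integrate_q(Q, f, Rav, B, Bav, Tq, tauAv, beta=1.0, rho=1.0):
        # Integrates the equation (50) over the step Tq in sub-steps,
        # switching between the equations (93) and (94) as Q crosses f*Rav.
        t_left = Tq
        while t_left > 0.0:
            dt = min(t_left, 0.1 * tauAv)        # the usual dt << tauAv rule
            if Q <= f * Rav:                     # u = 0: eq. (93)
                gap = rho * B - Q - 0.1 * f * Rav
                if Q < rho * B and gap > 0.0:    # condition (96)
                    limit = 0.1 * f * Rav * tauAv / (beta * gap)
                    dt = min(dt, max(limit, 1e-3 * tauAv))  # floor keeps dt > 0
                a = beta * dt / tauAv
                Q = (Q + a * rho * B) / (1.0 + a)           # implicit form (90)
            else:                                # u > 0: eq. (94), linear in t
                if f * Rav > rho * B:            # condition (97)
                    limit = (Q - f * Rav) * tauAv / (beta * (f * Rav - rho * B))
                    dt = min(dt, max(limit, 1e-3 * tauAv))
                Q -= (beta * dt / tauAv) * (f * Rav - rho * B)
            Q = min(Q, Bav)                      # the Q <= Bav limit
            t_left -= dt
        return Q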
All the other differential equations ((53-55) and (49) in the Q-block of the RR-algorithm) do not require such a complicated approach to the choice of the time step size. It is enough to keep the value of beta*dt/tau <= 0.1...0.4, which keeps the numerical error within about the 0.5%...10% range and allows the use of the simpler equation (87) instead of (90). Since in the practical Q-algorithm implementation the recommended value of beta is 1, this requirement can be translated into

(98)        dt <= (0.1...0.4)*tau

where tau is the averaging time of the equation in question (tauAv for the equations (53-55)).
If this requirement cannot be satisfied when dt=Tq, the integration should be performed in several smaller steps, each of which should satisfy (98).
A few words should be said about the Q-algorithm input value B - the response bandwidth estimate. As it was already mentioned earlier in this section, the individual estimates of B are computed when the OFC PONGs arrive on the connection. This does not necessarily coincide with the arrival of the requests on the same connection, so effectively B is computed and averaged (by the equation (55)) in its own time scale. Of course, the usual integration step size limitation (98) applies to the equation (55).
The resulting Bav value is used in the equation (56) to determine tauAv. The Bav is a slowly changing variable, so it does not matter that it is calculated in a time scale that is different from the time scale used to calculate the other Q-algorithm variables.
However, the quickly changing variable B is directly used in the equation (50), so some special measures should be taken to assure that the right value of B is used there. Let us consider the model situation when two or more OFC PONGs arrive during the single Q-algorithm step Tq. If just the latest available value of B would be used in (50), all other B values would be unused, which would result in the increased Q-algorithm error. So if more than one estimate of B is performed during the Q-algorithm step time Tq, some averaged B value should be used in (50) in order to fully use all the available information about the response channel bandwidth.
Theoretically, the 'normal' averaged value Bav could be used; however, this is not desirable, since it would effectively double the Q-algorithm reaction delay tauAv. Besides, such a long-term averaging with a characteristic time tauAv is not functionally necessary here anyway - the equation (50) already provides the long-term averaging. In fact, the equation (50) was specifically designed to average the quickly changing input data B. What we need here is not a long-term averaging to smooth the rapidly changing function B(t), but rather a very short-term averaging with a characteristic time Tq. This short-term averaging is used to combine all the available information about the response bandwidth during the time Tq into a single variable Bi+1 to be used in the numerical solution of the equation (50).
The situation is complicated by the fact that the exponential averaging with an averaging time Tq (Q-algorithm step size) cannot be used. The value of Tq is normally defined by the interval between the arrival of the network packets with requests and is not known in advance. So the simple averaging is more appropriate here. Further, it is logical to use the weighted averaging, giving more weight to the values of B calculated after the bigger time delay deltat (time that has passed since the last OFC PONG arrival and the last B estimate).
This choice of weighting is determined by the physical nature of the data-sending process. Consider the following sequence of alternating B estimates: B=2 KBytes/sec after 1000 ms delay, then B=1 KByte/sec after 500 ms delay and so on. The straight averaging would give us 1.5 Kbytes/sec bandwidth estimate. However, the 2 KBytes/sec value is valid longer than the 1 KByte/sec one. From the data-sending standpoint, during the 1500-ms period we should be able to send 2 KBytes during the first 1000 ms, and 0.5 KBytes during the remaining 500-ms interval. So the total bandwidth would be (2KBytes+0.5KBytes)/1.5sec=1.67KBytes/sec. This estimate is equivalent to the weighted averaging method suggested above.
Using the index m to mark the individual B estimates and the corresponding delays during the Q-algorithm step time Tq, we can write this averaging method as

(99)        Bi+1 = Sum[m]( Bm*deltatm ) / Sum[m]( deltatm )
where Bi+1 is the averaged value of the response bandwidth estimate to be used in the numerical solution of the equation (50) to find the value of Qi+1 on the Q-algorithm step number i+1.
This averaging method does not have the excessive storage requirements even when many OFC PONGs arrive during the single Q-algorithm step. We just have to remember two sums (for Bm*deltatm and for deltatm), the latest values of the same variables and the arrival time of the latest OFC PONG. When the connection is opened, the process is started with the zero values for all these variables. As soon as the first OFC PONG arrives, we have our first non-zero bandwidth estimate. Of course, there's no interval deltat associated with it, so we should just use this single Bm value as the Bi+1 in the meantime. The second OFC PONG brings the first deltat value, which should be also used to weight the first estimate (so the first average is going to be just the average between the two Bm values). All the subsequent OFC PONGs can use (99) in the normal fashion.
When the requests arrive on the connection, the equation (99) is used unless the sums in it are equal to zero - if this happens, the saved Bi+1 value from the last Q-algorithm step is to be used. (Naturally, before the first OFC PONGs arrival this value is equal to zero too). After the Q-algorithm step is performed, the Bi+1 that was used in it is saved to be used if no OFC PONGs would arrive during the next Q-algorithm step. At the same time both sums in (99) are zeroed. If some OFC PONGs do arrive during the next Q-algorithm step interval Tq, the sums are increased; otherwise the saved Bi+1 value will be used.
Note that the first deltatm value to be used during the Q-algorithm step Tq might be related to the bandwidth estimate interval that starts before the Q-algorithm step interval [t-Tq,t]; thus it is entirely possible to have sum(deltatm)>Tq. This approach allows us to fully cover the timeline t with the response bandwidth estimates and results in the correct bandwidth B averaging for the equation (50). Logically this averaging approach for B is similar to the technique used in section 8.2.1 to average R(t) over the Q-algorithm step size Tq.
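Putting the pieces together, the whole B-averaging bookkeeping can be sketched as follows (Python; the class and method names are illustrative):

    class ResponseBandwidthAverager:
        # Sketch of the weighted short-term averaging (99) of the response
        # bandwidth estimates B between the Q-algorithm steps.

        def __init__(self):
            self.weighted_sum = 0.0    # accumulates Bm*deltatm
            self.delta_sum = 0.0       # accumulates deltatm
            self.last_pong_time = None
            self.first_B = None        # first estimate, no deltat of its own
            self.saved_B = 0.0         # Bi+1 of the previous step (fallback)

        def on_ofc_pong(self, now, b_estimate):
            if self.last_pong_time is None:
                self.first_B = b_estimate   # used bare until the second PONG
                self.saved_B = b_estimate
            else:
                deltat = now - self.last_pong_time
                if self.first_B is not None:
                    # Weight the very first estimate with the first observed
                    # deltat, so the first average is the mean of the two.
                    self.weighted_sum += self.first_B * deltat
                    self.delta_sum += deltat
                    self.first_B = None
                self.weighted_sum += b_estimate * deltat
                self.delta_sum += deltat
            self.last_pong_time = now

        def q_step_value(self):
            # Called once per Q-algorithm step: returns Bi+1 for eq. (50)
            # and zeroes the sums, per the procedure described above.
            if self.delta_sum > 0.0:
                self.saved_B = self.weighted_sum / self.delta_sum
            self.weighted_sum = 0.0
            self.delta_sum = 0.0
            return self.saved_B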
Finally it should be noted that all the equations presented in this section assume that the variables are floating-point real numbers, and all the computations are performed as floating-point operations. Modern CPUs normally have fast floating-point arithmetic, so this approach should not present any problems. If, however, the GRouter code is implemented on hardware where this approach results in low performance, the equations can be easily rewritten in the integer or fixed-point arithmetic terms. That operation, however, requires careful analysis of the operational ranges and the required precision of the variables and is outside the scope of this document.
This section describes the possible allocation of the bits in the Globally Unique Message Identifier (GUID) of the OFC messages - the PINGs and PONGs that are used by the Outgoing Flow Control block. The standard Gnutella 128-bit GUID is used as an example, though the same approach to the GUID data layout can be easily applied to the other similar broadcast-route networks.
The message header in the Gnutella protocol contains the 128-bit (16-byte) GUID that is used for various purposes - to drop the incoming requests if these requests were already received earlier, to route back the replies and so on. The protocol does not clearly define the method to create the GUID - it just has to be sufficiently random to avoid having two messages with the same GUID. In practice, some implementations use the Windows function CoCreateGuid(), some use the pseudo-random number generator - the specific method is not important as long as the returned result is really unique.
The 128-bit size of the GUID provides a much higher degree of 'uniqueness' than is really necessary in practice. The message lifetime is finite, and for all practical purposes the GUID can be considered 'unique enough' if it has a very low probability of collision with all the other GUIDs it can possibly be compared with during its lifetime. This probability can be estimated as the probability of meeting the same GUID value in the routing tables of all the servents that the messages with this GUID can possibly reach. In the typical Gnutella environment the request can reach about 10,000 servents, and each of these servents can have a routing table with about 10,000 entries. So the GUID can potentially be compared with up to 10^8 other GUIDs. In fact, this number is much lower, since the routing tables on different servents have many identical entries, but let us use 10^8 as the upper boundary for the number of other GUIDs that the GUID can be compared with.
10^8 is approximately equal to 2^27, so with a k-bit GUID the probability of a collision for an individual GUID is going to be about 2^(27-k). If the servent issues one request per second for 10 years, it generates about 2^28 GUIDs, so the probability of an individual servent ever (once in 10 years) having its GUID confused with some other GUID is about 2^(55-k).
If the GNet has 10^12 (that is, about 2^40) servents (several hundred servents for every human being on the Earth), the probability of any one of these servents ever having a collision-related problem during the 10-year interval is about 2^(95-k). For a k=128 bit GUID this amounts to 2^(-33), or about 10^(-10). By any standards this is a very low probability, especially if we remember that nothing really devastating happens when a GUID collision does occur. Any hacker with an idea of mounting an attack on the Gnutella network can effectively simulate the GUID collision on his servent, so the GNet has to deal with this problem anyway, whether it is the result of a statistical GUID collision or of a hacker attack.
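The whole chain of estimates above can be compactly summarized as (k being the number of random GUID bits):

\[
P_{\mathrm{collision}} \approx
\underbrace{2^{40}}_{\text{servents}} \cdot
\underbrace{2^{28}}_{\text{GUIDs per servent}} \cdot
\underbrace{2^{27}}_{\text{comparisons}} \cdot
2^{-k} = 2^{95-k};
\qquad k = 128 \;\Rightarrow\; P_{\mathrm{collision}} \approx 2^{-33} \approx 10^{-10}.
\]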
All these considerations have prompted various groups in the Gnutella community to come up with suggestions that would effectively limit the GUID to a smaller number of random bits and use some parts of the GUID for various GNet infrastructure-related purposes.
For example, the LimeWire proposal suggests using bytes 8 and 15 of the GUID (assuming byte numbering from 0 to 15) for protocol versioning. This proposal effectively leaves 112 'random' bits in the GUID, which is still more than enough for any conceivable purpose.
The OFC block does a similar thing. It has to mark the OFC PING with such a GUID that when the OFC PONG returns from the peer, this GUID can be recognized as an answer to the OFC PING and the appropriate action (network packet sending, bandwidth estimate calculation, etc.) can be performed.
The OFC packets have ttl=1 and are supposed to travel only between two connected peers. The host that sends the OFC PING and receives the OFC PONG in response makes all the OFC decisions. The host that receives the OFC PING does not analyze its GUID or undertake any actions on the basis of this PING other than the normal PONG-sending operation. Thus the OFC GUID layout does not have to adhere to any GNet-wide standards - as long as the host is able to recognize its own OFC GUIDs when they are returned to it in OFC PONGs, that is quite enough for the Outgoing Flow Control algorithm purposes.
Still, the OFC GUID layout and the algorithms that work with it have to satisfy several basic requirements:
- The OFC algorithms should not rely on the PING being correctly interpreted by the peer. Even though a PING with a ttl value of 1 is not supposed to be forwarded to any third host, some buggy and/or protocol non-compliant servents are likely to do so. The GUID layout and handling procedures should be prepared for that possibility. For example, one PING might result in several PONGs, the same PING or PONG can appear on a different connection, and so on.
- Even though the peer is supposed to answer the 1-ttl PING with a PONG, it is likely that for a variety of reasons a certain percentage of PINGs won't result in the corresponding PONGs. So if the PONG does not arrive after a certain reasonable timeout (~1 sec), the PING should be retransmitted. However, if the peer is just overloaded and both PINGs eventually result in PONGs, the 'additional' PONGs should be discarded by the OFC block.
- The OFC GUID should be reasonably unique. Even though it has a very short travel distance (just one hop) and thus a far smaller collision chance than a normal request, it should still have enough unique bits to avoid collisions.
- If a collision does happen, it should not result in any disastrous effects for the servent.
- The algorithms should be resistant to malicious peers that might try to influence the OFC decisions by deliberately changing the GUID in the OFC PONGs.
Of course, there are many possible OFC GUID layouts that satisfy these conditions. Perhaps the simplest approach is to generate normal GUIDs and remember them in separate OFC 'routing' tables (one table per connection). These tables would not actually be used to route anything - just the GUID-searching functionality of the normal routing tables should be implemented in order to recognize the incoming 0-hop PONGs as the OFC messages. When the connection's OFC GUID table is not empty (the connection is waiting for the OFC PONG), the connection also has to store the OFC packet information - the sending time and the packet size of the last OFC packet sent out.
This information is necessary for the OFC block to perform the outgoing bandwidth estimate as defined in section 6.2 (equations (13) and (14)) and section 7.3 (equations (45) and (46)). According to sections 6.1 and 6.2, if the OFC packet (the block of data between two OFC PINGs) is sent as several TCP/IP (network) packets, we have to remember the whole OFC packet size, not the sizes of the individual network packets. This is the size of the OFC packet 'payload' - the size of the 'trailer' OFC PING is not included in it and is treated as the OFC overhead, since we are interested in the bandwidth available for the regular request/response messages. The OFC packet sending time (when the sending is done with several subsequent networking calls) should be the starting time of the first call. Even though the GRouter is supposed to make these calls immediately one after another, this sequence of calls can still take some noticeable time, and we are interested in the roundtrip time between the start of the whole sending operation and the OFC PONG arrival.
Normally there's just one GUID in such a 'routing' table. Additional entries might be added when the OFC PING is re-sent because of a PONG loss or an excessive (more than ~1 second or so) PONG delay. This operation does not change the last OFC packet size and sending time stored in the connection block. Since the retransmitted PINGs are treated as overhead, the lost or delayed PONGs just increase the roundtrip time for the same payload, effectively lowering the connection bandwidth estimate.
In fact, it might even be possible to use the same GUID for the retransmitted OFC PINGs. However, this approach is not recommended, since it is desirable for the OFC algorithms to be independent of the specific peer implementation. Using the same GUID for the retransmitted PINGs might lead to an OFC deadlock: if the peer is implemented in such a way that it can lose the first OFC PONG as a result of a networking call error or for some other reason, and then just keeps rejecting the retransmitted PINGs with the same GUID, the OFC PONG might never arrive from it.
When a PONG with hop=0 and ttl=1 arrives on the connection, its GUID should be checked against this OFC routing table. If a match is found, all the necessary calculations should be performed - the bandwidth estimate should be updated, the OFC packet with messages should be sent out (including the new OFC PING at the packet end), and so on. Before the new OFC PING is sent, all the GUIDs should be removed from the OFC routing table. This is necessary to avoid a similar reaction to possible delayed or duplicate PONGs with the same GUID, to the PONGs that come in response to the other (re-sent) PINGs, and to the delayed PONGs that come in response to the PINGs sent earlier.
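For illustration, a minimal C sketch of this bookkeeping could look like this. All the names are hypothetical, and the bandwidth update is shown schematically as the payload size divided by the roundtrip time, in the spirit of equations (45) and (46):

    #include <string.h>

    #define MAX_OFC_GUIDS 32      /* the table is tiny - 10-20 entries at most */

    typedef unsigned char Guid[16];

    typedef struct {
        Guid   pending[MAX_OFC_GUIDS]; /* GUIDs of the sent OFC PINGs        */
        int    pendingCount;
        double lastPacketSize;         /* OFC packet payload, bytes          */
        double lastSendTime;           /* start of the first send call, sec  */
    } OfcConnState;

    /* When a new OFC packet is sent, the caller stores its payload size and
     * the start time of the first networking call. */
    void OnOfcPacketSent(OfcConnState *c, double payloadBytes, double startTime)
    {
        c->lastPacketSize = payloadBytes;
        c->lastSendTime   = startTime;
    }

    /* Remember the GUID of an OFC PING (first send or retransmission).
     * A retransmission does not change the stored packet size/send time. */
    void RememberOfcPing(OfcConnState *c, const Guid g)
    {
        if (c->pendingCount < MAX_OFC_GUIDS)
            memcpy(c->pending[c->pendingCount++], g, sizeof(Guid));
    }

    /* Called for every incoming PONG with hop=0 and ttl=1. Returns non-zero
     * and updates the bandwidth estimate *B if the PONG is an OFC one; the
     * caller then sends the next OFC packet with its trailer PING. */
    int OnZeroHopPong(OfcConnState *c, const Guid g, double now, double *B)
    {
        int i;
        for (i = 0; i < c->pendingCount; i++) {
            if (memcmp(c->pending[i], g, sizeof(Guid)) == 0) {
                /* Schematic estimate: payload over roundtrip time. */
                *B = c->lastPacketSize / (now - c->lastSendTime);
                c->pendingCount = 0;   /* drop ALL pending GUIDs */
                return 1;
            }
        }
        return 0;   /* unrelated PONG - e.g. delayed or duplicate */
    }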
This 'normal GUID' approach is certainly usable. However, we have to perform the GUID searches in the table just to answer a simple question: is this PONG one of the PONGs that we are waiting for? Given that the table in question probably never has more than 10-20 entries (the servent is likely to time out the connection as unresponsive after that), the GUID table search seems to be an excessively complicated solution for such a simple task.
So the alternative method is to effectively break the GUID bits into two groups: the randomized connection identifier GUID-C and the PING sequence number SEQ. The PING sequence number allows us to uniquely identify the OFC PING when the OFC PONG is received in response. For example, this number can be set to zero for the first OFC PING sent on the connection and then incremented for every subsequent OFC PING. Naturally, sooner or later this number will overflow and become zero again, so it should have enough bits for the delayed PONGs to be out of the SEQ range regardless of the delay, while still leaving enough GUID bits for the connection identifier GUID-C.
For example, if the full GUID has 128 bits, 16 of which are used for the version tracking, we have 112 bits left for the GUID-C and SEQ. The rollover time of the SEQ is determined by the lowest possible interval between the outgoing OFC packets and by the biggest possible PONG delay in case of a buggy peer implementation. If, say, the GNet is deployed on a 100-Mbit/sec LAN and 300-byte packets are used, the interval between the outgoing packets can be as low as 25 microseconds. On the other hand, the worst-case PONG delay can be as high as several hundred seconds. The ratio between these numbers can be as high as 10^7, or about 2^23, so the SEQ number should have at least 24 bits, leaving 88 bits for the GUID-C. This value is still quite large, and the collision probability is within the acceptable limits. In fact, in a practical implementation it might even be possible to allocate 32 GUID bits for the SEQ. That would allow the GRouter code to avoid the problems related to the 24<->32 bit internal conversion and alignment, and the remaining 80 bits for the GUID-C would still be enough to avoid the GUID collisions.
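Written out, this sizing argument is:

\[
\frac{T_{\mathrm{PONG\ delay}}^{\max}}{T_{\mathrm{packet}}^{\min}}
\approx \frac{\text{several hundred seconds}}{25\ \mu\mathrm{s}}
\sim 10^{7} \approx 2^{23}
\quad\Rightarrow\quad \mathrm{SEQ} \ge 24\ \text{bits}.
\]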
When the GUID contains the OFC PING SEQ number, every OFC PING sending operation updates the acceptable range of the incoming PONG SEQ numbers. This range is associated with the connection, and both ends of this range are set to the PING SEQ value as the trailer OFC PING is sent after the OFC packet. If no PONGs are received in response, the subsequent OFC PING packets are sent with incremented SEQ values and the acceptable range is extended accordingly. When the OFC PONG finally does arrive with the GUID-C part of its GUID equal to the connection's GUID-C and the PONG SEQ number within the acceptable range, the necessary OFC operations are performed. After that the acceptable SEQ range is cleared and the procedure is repeated.
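A C sketch of this GUID-C/SEQ handling might look as follows. The specific byte positions (bytes 8 and 15 reserved for versioning, a 32-bit big-endian SEQ in bytes 11-14, the 80-bit GUID-C in the remaining bytes) are an illustrative assumption, not a protocol requirement, and all the names are hypothetical:

    #include <string.h>

    typedef struct {
        unsigned char guidC[10];  /* 80 random bits, generated once per
                                   * connection when it is opened        */
        unsigned int  nextSeq;    /* SEQ of the next OFC PING to be sent */
        unsigned int  seqLow;     /* acceptable incoming PONG SEQ range  */
        unsigned int  seqHigh;
        int           waiting;    /* non-zero while a PONG is expected   */
    } OfcGuidState;

    /* Fill in the GUID for an outgoing OFC PING and update the
     * acceptable SEQ range (extended on every retransmission). */
    void MakeOfcPingGuid(OfcGuidState *s, unsigned char guid[16])
    {
        unsigned int seq = s->nextSeq++;
        memcpy(guid, s->guidC, 8);            /* GUID-C, bytes 0-7   */
        guid[8] = 0;                          /* versioning byte     */
        memcpy(guid + 9, s->guidC + 8, 2);    /* GUID-C, bytes 9-10  */
        guid[11] = (unsigned char)(seq >> 24);/* SEQ, big-endian     */
        guid[12] = (unsigned char)(seq >> 16);
        guid[13] = (unsigned char)(seq >> 8);
        guid[14] = (unsigned char)(seq);
        guid[15] = 0;                         /* versioning byte     */
        if (!s->waiting) {                    /* trailer PING of a new packet */
            s->seqLow  = seq;
            s->waiting = 1;
        }
        s->seqHigh = seq;                     /* extend the range on resend */
    }

    /* Returns non-zero if the PONG GUID matches GUID-C and its SEQ falls
     * into the acceptable range; the range is then cleared. SEQ
     * wrap-around is ignored in this sketch. */
    int AcceptOfcPongGuid(OfcGuidState *s, const unsigned char guid[16])
    {
        unsigned int seq;
        if (!s->waiting ||
            memcmp(guid, s->guidC, 8) != 0 ||
            memcmp(guid + 9, s->guidC + 8, 2) != 0)
            return 0;
        seq = ((unsigned int)guid[11] << 24) | ((unsigned int)guid[12] << 16) |
              ((unsigned int)guid[13] << 8)  |  (unsigned int)guid[14];
        if (seq < s->seqLow || seq > s->seqHigh)
            return 0;
        s->waiting = 0;                       /* clear the acceptable range */
        return 1;
    }

Note that no table search is needed here: a single comparison of the GUID-C bits and a SEQ range check replace the GUID table lookup of the previous approach.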
Of course, the presence of the SEQ information in the GUID makes it possible for a malicious peer to 'spoof' the OFC algorithm. The malicious host can receive the OFC PING, modify the GUID and send the PONG back with the modified GUID. However, it is hard to imagine why the malicious host would engage in this activity. There's not much sense in modifying the GUID-C, since such PONGs would just be dropped by the OFC block, which won't recognize these packets as valid OFC PONGs and would eventually close the connection as 'unresponsive'. The same thing would happen if the malicious host modified the SEQ numbers in a way that placed them outside the acceptable SEQ range. If the SEQ number is modified to be still within the acceptable range, the only result achieved by the attacker might be a lowered bandwidth estimate. This can only decrease the bandwidth available for any DoS attack undertaken by the attacker, which makes such a GUID 'spoofing' attempt useless.
Another way to 'spoof' the GUIDs would be to predict the future SEQ numbers and send the OFC PONGs before the corresponding PINGs are received. If the malicious host performs such an operation accurately, it can create an impression of 'infinite' bandwidth between the attacked host and the attacker.
This attack makes more sense, since it can cause the attacked host to fully broadcast the requests from the attacker - something that could have been prevented by the attacked host's Q-algorithm if it had the correct bandwidth estimate. However, this approach would not allow the attacker to do anything more harmful than a 'normal' DoS attack, which would be fought by the fair bandwidth sharing algorithms and the Q-algorithms of the GNet servents in the usual fashion. Furthermore, the erroneous bandwidth estimate might quickly overload the link with messages, increasing the link latency, causing TCP overloads and eventually resulting in the connection shutdown and the attack termination.
*** End of the Document ***