Washington University in St. Louis



Supercharged PlanetLab Platform

Michael Williamson

Washington University in St. Louis

Background

Internet Routing

The modern internet is a massive interconnection of millions of computers situated all around the globe. As the name suggests, any computer connected to the internet can communicate with any of the other millions of machines. How does this amazing process take place? At the backbone of the modern internet are packet-switched networks. Anytime a machine wants to send a piece of information, it takes that data, adds various control headers, and transmits the resulting “packet” onto whatever link it is connected to. Any single link can carry only a limited amount of data, and each may be shared by any number of users.

Once a packet leaves its source machine, the internet infrastructure, through a seemingly magical process known as “routing,” ensures the data reaches the proper destination. The pieces of machinery that make this essential process possible are known as routers. From a fundamental perspective, a router is nothing more than a sophisticated direction giver. When a packet arrives, the router scans the packet’s control headers. After comparing the headers to internally stored route information, the router transmits the packet onto the appropriate link.

With internet use exploding, routers are becoming more and more important in the everyday lives of just about everyone. Thus, any technology that makes a router faster, more efficient, or more useful can be extremely valuable. At Washington University in St. Louis, the goal of the Applied Research Laboratory (ARL) is precisely that: its researchers are constantly working to create the next generation of technology for internet routers.

Overlay Networks

When two networked computers want to communicate, they do not necessarily need to use the internet. If the two nodes are located close to one another, all they need is a wire directly connecting the two as shown in Figure 1.

On the other hand, when two computers are separated by thousands of miles, it might be extremely costly for a user to run a direct cable between the two. In this situation, using the internet is much more practical. Instead of running a several thousand mile wire, all the user of Node A has to do is connect a cable to the nearest internet connection as shown in Figure 2. Then, through the magic of internet routing, the data Node A sends is automatically delivered to Node B.

An overlay network is the combination of these two concepts. It gives the illusion of direct links between nodes over the public internet. Naturally, these “direct links” are not actual physical wires running between the machines. Instead, through various packet addressing tricks, an application on Node A has the illusion that it is directly communicating with Node B. Physically, the network topology looks like Figure 2, but logically, it looks like Figure 1.

An easy way to implement an overlay network is to use an IPv4 tunnel. A programmer assigns one logical IP address to every node within the network. Then, he or she writes a kernel module that keeps a mapping of logical IP addresses to physical ones. Whenever an application tries to send a packet to an address that belongs to the overlay network, the module encapsulates the data inside a packet addressed to the physical address corresponding to the logical destination node. In this scenario, the application is not concerned with the physical layout of the network. Any member node could have multiple physical addresses, and the application will not care. All it needs to know are the logical addresses of the nodes it is communicating with; the kernel module takes care of the physical addressing for it.
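
To make the mapping idea concrete, here is a minimal user-space sketch of the lookup such a tunnel module performs before encapsulating a packet. It is an illustration only: the table contents, addresses, and helper names are hypothetical, and a real implementation would live in the kernel and perform the actual encapsulation.

    /* Hypothetical logical-to-physical mapping for an IPv4 tunnel overlay. */
    #include <stdio.h>
    #include <string.h>

    struct tunnel_map {
        const char *logical;   /* overlay address the application sees      */
        const char *physical;  /* real internet address of the member node  */
    };

    static const struct tunnel_map map[] = {
        { "10.0.0.1", "198.51.100.7" },   /* hypothetical Node A */
        { "10.0.0.2", "192.0.2.44"   },   /* hypothetical Node B */
    };

    /* Return the physical address to encapsulate toward, or NULL if the
     * destination is not part of the overlay network. */
    static const char *lookup_physical(const char *logical)
    {
        for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++)
            if (strcmp(map[i].logical, logical) == 0)
                return map[i].physical;
        return NULL;
    }

    int main(void)
    {
        const char *dst = lookup_physical("10.0.0.2");
        if (dst != NULL)
            printf("encapsulate the packet inside an IPv4 packet addressed to %s\n", dst);
        return 0;
    }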

Recently, researchers have found an ever increasing number of applications that make use of overlay network technologies. One of the more popular uses of such a network is the PlanetLab research platform.

PlanetLab Platform

Ideally, any software developer wants to be able to test an application in a real-world situation using real data. In the case of a distributed application, this is not an easy task. The developer would need multiple machines at separate locations, possibly thousands of miles apart, connected through the internet. This sort of real-world testing is essential so that the developer can see how the application reacts to factors such as real-world packet delay and communication loss.

Unfortunately, most computer scientists do not have access to machines at several widely spread locations. For this reason, researchers developed and deployed the PlanetLab Platform. Essentially, PlanetLab is a network of several hundred “nodes” located at various places all over the world and connected by the public internet.

A PlanetLab “node” is simply a standard server running the PlanetLab software (a modified version of Linux). A user of the platform is allocated a “slice” on a subset of these nodes. He or she can then log in to any of those nodes and run applications that are capable of communicating with any of the other nodes within that user’s subset. Recently, PlanetLab has become a useful tool for running large-scale internet experiments as well as for the development of new networking protocols.

Performance Limitations

As PlanetLab has grown in popularity, so has the need for a system with better performance. The problem with the existing platform is that each node is only a standard server. The isolation between user slices exists only at the application level; there is no hardware to back it up. As a result, node delay can be considerable. Take the simple example of a user trying to route traffic from Node A in Figure 4 through Node B to Node C. To do any processing on packets at Nodes B and C, the user’s application has to wait for the operating system to give it time on the CPU. Depending on the scheduling granularity of the operating system, this can take as long as tens of milliseconds. As the number of users and the amount of traffic using PlanetLab nodes increase, this problem is only going to get worse. High-throughput and latency-sensitive applications will become increasingly impractical or even impossible.

Supercharged PlanetLab Platform

Introduction

The Supercharged PlanetLab Platform, or SPP, is the answer to the performance limitations of the original technology. The SPP is not a standard server. Although its components include one or more general-purpose machines (also known as general processing engines or GPEs), it also contains any number of line cards (LCs), any number of specialized network processors (also known as network processing engines or NPEs), and a control processor (CP). The SPP uses all this extra hardware to allow a user to allocate resources on an NPE, also known as allocating a “fastpath”. From his or her slice running in a virtual machine on a GPE, the user can then manipulate the fastpath, thereby controlling the traffic flowing through it. The end result is a system that is capable of processing a user’s networking traffic with little or no intervention from the general-purpose machine. While the original PlanetLab platform supports only software resource isolation between users at the application level, the SPP provides this as well as hardware resource isolation. Naturally, this drastically improves the performance of the PlanetLab platform.

Implications for Routers

Virtualization is one of the main focuses of current computer science research, and its applications have proved extremely useful in industry. For example, platform virtualization separates an operating system from the underlying system resources. It has allowed server operators to install multiple operating systems that run concurrently, in isolation, on a single set of physical components. As one might expect, this makes server infrastructure significantly less expensive. In the past, one needed to buy separate physical machines to support multiple operating systems; now, that is no longer necessary.

While the SPP can be used to improve the performance of current PlanetLab applications, that is not its intended purpose. The goal of the technology is to virtualize a router. Normally, a router resembles the box in Figure 5. Packets enter the unit on one of the physical interfaces to the left. Next, they are queued until the network processor becomes available. When it is free, the processor does a lookup into the forwarding table based on each packet’s headers. As a result of that lookup, the packet gets copied to the appropriate egress queue, where it waits until it is transmitted onto the corresponding physical interface.

Usually, there is a single router instance implementing a single network-layer protocol per physical device. With the SPP, this is no longer true. As shown in Figure 6, a fastpath can also be thought of as a meta-router. The queues, the forwarding table, the switching fabric, and the network processor normally associated with a router are all available within a fastpath. Because the SPP supports multiple fastpaths, it also supports multiple meta-routers, all running in isolation from one another. Naturally, there will only be a limited number of physical interfaces connected to the SPP, meaning that each meta-router will probably not be able to own its own set of physical links. The SPP gets around this dilemma by supporting meta-interfaces, which are simply logical interfaces overlaid on the physical ones. Each fastpath can request some of the available bandwidth on any of the physical interfaces, giving a meta-interface the illusion of having its own dedicated link with a guaranteed minimum bandwidth.

In the platform world, virtualization allows multiple operating systems to run on the same physical resources. The SPP brings the same functionality to the router world. The platform allows users to run multiple configurable routers, possibly implementing different routing protocols, on one physical device, without interfering with one another and while still meeting minimum bandwidth guarantees.

A Look inside the SPP


Figure 7: The internal components of the SPP. The hardware components are the gray rectangles while the small ovals represent the various pieces of control software. There can be multiple NPEs, multiple GPEs, and multiple line cards, but only a single CP.

The SPP is meant to be easily reproducible at a reasonable cost. For that reason, it uses only off-the-shelf hardware components that can be purchased by anyone. For an in-depth discussion of the actual components used, see [JT07].

What makes the platform work is the control software, which allows the hardware to cooperate in ways that were not previously possible. Specifically, as shown by the ovals in Figure 7, there are three main software components: the Resource Management Proxy or RMP, the System Resource Manager or SRM, and the Substrate Control Daemon or SCD.

Arguably, the most important piece of software in the platform is the System Resource Manager. Responsible for all resource allocations, the SRM is the only component that has global knowledge of the system state. Because of its role, there is only a single SRM running on a single control processor in every SPP. The other software daemons, the SCD and the RMP, simply implement the mechanisms necessary to enforce the resource constraints imposed by the SRM.

On the user side of the SPP sits the RMP. Every time a slice application makes a request, the message has to pass through the RMP; accordingly, an instance of the RMP runs on every general processing engine. When an application wants to allocate resources or collect data from its fastpath, it forwards the request to the RMP, which then takes the necessary actions on the slice’s behalf. If the request deals with resources (allocating a fastpath, queues, filters, etc.), the RMP sends a message to the SRM. If the request manipulates a user’s fastpath, the RMP forwards it directly to the SCD on the NPE where that fastpath resides.

The purpose of the RMP is two-fold. First, it provides a layer of security between user-level applications and the rest of the system. Second, the RMP serves as a level of abstraction between users and the internal workings of the SPP. As a result, a slice does not need to know any global state information; it only worries about itself. As a simple example, consider filter management on the SPP. Normally, there are many slices allocated on any given GPE, and each may request any number of filters. The control software identifies a filter using a unique 16-bit integer. The RMP allows a slice to keep a local set of filter identifiers that it automatically translates into global ID numbers. Without this level of indirection, a slice would have to deal with filters carrying seemingly arbitrary identification numbers. More importantly, without the RMP, one of the other software components would have to provide an enforcement mechanism to prevent a slice from manipulating a filter it does not own.

The SCD lies on the other side of the SPP, serving as the controller for the NPEs and the line cards. Normally, network processing engines are not designed to be shared: they have one set of hardware resources (usually a TCAM, SRAM, and a number of network processors) that is meant to be used by a single application. The SCD running on every NPE allows the SPP to break this paradigm. It divides the available resources into chunks which can then be assigned to multiple user fastpaths. The SCD knows what assignments to make by communicating with the RMP and the SRM. Essentially, this gives each user the ability to “own” hardware for specialized processing, a feature that was impossible on the original PlanetLab platform. Every line card also runs a version of the SCD. There, the software provides the mechanisms necessary to allow users to install filters for directing packets. Whenever a packet enters the SPP, the SCD in the line card ensures the data is forwarded to the correct location.

Capabilities of a Slice

Overview

Similar to other PlanetLab nodes, each user of the SPP is allocated a slice that runs in a virtual machine on a general processing engine. The GPE supports a standard Linux environment, making application development relatively straightforward. To access the capabilities of the SPP, an application uses a Unix Domain Socket (UDS) located in the /tmp directory of the slice environment. When a slice is created, the RMP opens the UDS on behalf of the slice, and the socket remains open for the duration of the slice’s life.

To communicate with the RMP, and thus the rest of the SPP, the slice transmits control messages that follow a well-defined interface and waits for a reply. On success, the RMP returns a message with an operation code of zero. Any error results in the RMP returning a non-zero code, with an application-specific error code encapsulated within the message. The capabilities of the RMP-Slice interface, including a description of all the functions available to a user, are discussed below. Note that all the functions described here are available in the Slice-RMP library. See the library documentation for a discussion of the exact function parameters and for sample code that implements an IPv4 meta-router.
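
As a rough illustration of this request/reply pattern, the sketch below connects to the slice’s UDS and checks for a zero operation code in the reply. The socket path, the socket type (stream), and the bare-integer reply are assumptions made for the example; the real message formats are defined by the Slice-RMP library.

    /* Hypothetical connection to the RMP's Unix Domain Socket. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);   /* socket type is an assumption */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/rmp.sock", sizeof(addr.sun_path) - 1);  /* hypothetical path */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        /* ... send a control message following the RMP-Slice interface here ... */

        int op_code = -1;
        if (read(fd, &op_code, sizeof(op_code)) == (ssize_t)sizeof(op_code) && op_code == 0)
            printf("request succeeded\n");
        else
            printf("request failed: non-zero operation code\n");

        close(fd);
        return 0;
    }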

RMP-Slice Interface

Managing Fastpaths

To use the resources available on the NPEs, an application calls the alloc_fastpath function. The first parameter the user specifies is the type of routing protocol the fastpath will implement, also known as a code option; the first version of the SPP supports only IPv4, but future versions will support i3 and others. Next, the user specifies the bandwidth requirements for the meta-router and the number of filters, queues, buffers, and stats it will use. Filters are used to control how packets are processed in the NPE, queues and buffers are used to store the packets, and stats (short for statistics) are counters that an application can query to gather information about the traffic flowing through the fastpath. Finally, each meta-router has the ability to write and read blocks of memory; thus, the last parameters the user specifies are how much SRAM and DRAM he or she plans on using.

If alloc_fastpath completes successfully, it will return an fpInfo_t structure. Included within the structure is a fastpath identification number, the IP address of the NPE that the fastpath is running on, two communication endpoints, and two sockets (A communication endpoint is simply an IP address, a port number, and a protocol number.). One of the communication endpoints, as well as one of the sockets, is for local delivery traffic, while the other endpoint and socket are for exception traffic.
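
The declarations below sketch what the call and its return structure might look like. The real fpInfo_t layout and the real alloc_fastpath signature are defined by the Slice-RMP library; every field and parameter name here is a hypothetical placeholder that simply mirrors the description above.

    /* Hypothetical shapes only; see the Slice-RMP library for the real ones. */
    #include <stdint.h>
    #include <netinet/in.h>

    struct endpoint_t {             /* an IP address, a port, and a protocol number */
        struct in_addr ip;
        uint16_t       port;
        uint8_t        proto;
    };

    typedef struct {
        uint32_t          fp_id;        /* fastpath identification number      */
        struct in_addr    npe_ip;       /* NPE the fastpath is running on      */
        struct endpoint_t local_ep;     /* endpoint for local delivery traffic */
        struct endpoint_t except_ep;    /* endpoint for exception traffic      */
        int               local_sock;   /* socket for local delivery traffic   */
        int               except_sock;  /* socket for exception traffic        */
    } fpInfo_t;

    /* Hypothetical prototype: code option, bandwidth, resource counts, memory. */
    fpInfo_t alloc_fastpath(int code_option, int bw_kbps,
                            int nfilters, int nqueues, int nbuffers, int nstats,
                            int sram_bytes, int dram_bytes);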

Note: the local delivery and exception frameworks are parts of the design that are currently in flux. Expect them to change in the near future.

To better understand the local delivery and exception framework, consider Figure 8 above. Using filters, a user can install an entry in the NPE’s forwarding table that will direct matching traffic from the NPE to the user’s slice. The NPE sends the packets to the IP address and port number specified in the local delivery endpoint returned by alloc_fastpath. The user can listen on the local delivery socket to receive packets from the NPE. After doing any required custom processing, an application can then use the NPE IP address to transmit the packets back to the fastpath.

The exception framework is exactly the same as that for local delivery, except only packets generated from error conditions are transmitted from the NPE to the slice. For example, if a packet matches no entry in the NPE’s forwarding table, the resulting error condition will show up at the exception socket of the slice.
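
Put together, a slice-side processing loop might look like the sketch below. It assumes the local delivery socket returned in fpInfo_t is an ordinary UDP socket; sending the processed packet back to the fastpath additionally requires the forwarding key described a few paragraphs down, and (per the note above) the framework itself is still in flux.

    /* Sketch of receiving local-delivery (or exception) traffic in the slice. */
    #include <sys/types.h>
    #include <sys/socket.h>

    static void process_local_delivery(int local_sock)
    {
        char buf[2048];

        for (;;) {
            /* Packets matching the slice's local-delivery filter arrive here,
             * already wrapped in the meta-net header (see Figure 9). */
            ssize_t n = recvfrom(local_sock, buf, sizeof(buf), 0, NULL, NULL);
            if (n <= 0)
                break;

            /* ... custom processing on the datagram behind the meta-net header ... */

            /* To send the result back toward the NPE, attach the forwarding
             * key described below and transmit it to the NPE's IP address. */
        }
    }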

When a user application wants to send packets back from the GPE, it has to go through a few extra steps to allow the fastpath to do anything meaningful with them. To explain why, it is necessary to briefly discuss how the platform handles data.

Whenever the SPP receives a packet destined for a slice on the GPE, it attaches several data fields to facilitate various processing routines. In the case of a packet traveling to an IPv4 meta-router that implements meta-interfaces using UDP tunnels, the SPP attaches the “meta-net header” shown in Figure 9. When a slice receives data on either the local delivery or the exception socket, the packet has the same form. Thus, any user application has to skip past the meta-net header to process the original datagram. The same is true for packets the slice transmits: they all have to be encapsulated in meta-net datagrams to be processed correctly by the SPP.

Among its many uses, the tunnel header allows the NPE to recognize which meta-interface the packet was received on. This vital information helps the NPE associate a packet with the correct filter. Unfortunately, this association cannot be applied to packets traveling back from the GPE simply because there is no meta-interface defined for communication between the GPE and the fastpath. If a user’s application does not handle this situation, the NPE will associate the datagram with the wrong meta-interface, and the data will not be forwarded correctly.

To get around this issue, the SPP gives the user the ability to install what is called a “substrate-only” filter (named because lookups on the filter only result in internal control information). Then, to complete a lookup on a datagram, the fastpath uses a special forwarding key that the user attaches to every outgoing packet. If the lookup completes successfully, the fastpath uses the substrate-only filter information to forward the packet.

Thus, for a user to employ the full capabilities of slice-side packet processing, he or she has to install a substrate-only filter, and every outgoing packet has to carry several extra fields. Those fields make up the forwarding key required by the fastpath. In the case of an IPv4 meta-router that implements meta-interfaces using UDP tunnels, the outgoing packet will look like Figure 10. The destination IP address and the source and destination port numbers describe both ends of the UDP tunnel. Note that when using substrate-only lookups, the fields in the meta-net header do not matter; the fastpath uses only the forwarding key to determine where to send the packet.
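
A sketch of the encapsulation step is shown below. The exact byte layout of Figure 10 is not reproduced here; the field order and widths of the forwarding key are assumptions used only to illustrate prepending the key to the meta-net datagram before transmission.

    /* Hypothetical forwarding-key layout for an IPv4/UDP-tunnel meta-router. */
    #include <stdint.h>
    #include <string.h>

    struct fwd_key {
        uint32_t tunnel_dst_ip;     /* far end of the UDP tunnel  */
        uint16_t tunnel_src_port;   /* tunnel source port         */
        uint16_t tunnel_dst_port;   /* tunnel destination port    */
    };

    /* Prepend the forwarding key so the substrate-only filter lookup can
     * direct the packet; returns the total length, or 0 if out is too small. */
    static size_t add_forwarding_key(const struct fwd_key *key,
                                     const uint8_t *metanet_pkt, size_t len,
                                     uint8_t *out, size_t out_cap)
    {
        if (out_cap < sizeof(*key) + len)
            return 0;
        memcpy(out, key, sizeof(*key));
        memcpy(out + sizeof(*key), metanet_pkt, len);
        return sizeof(*key) + len;
    }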

Freeing a fastpath is much less complicated than allocating one. All the user has to do is call the free_fastpath function specifying the fastpath ID returned from the previous alloc_fastpath invocation.

Getting Interface Information

A physical interface on the SPP is described using the following parameters: an IP address, a 2-byte identification number, the interface type, the total bandwidth of the link, and the available bandwidth of the link. While most of these are self-explanatory, two warrant further discussion, namely the interface type and the link bandwidth.

There are two types of interfaces on the SPP: peering and non-peering. A non-peering interface behaves like a standard internet link, whereas a peering interface has a defined endpoint at both ends. This may seem confusing because, in reality, every valid interface has a defined endpoint at both ends. What is different about a peering interface on the SPP is that the platform knows the address of the peer at the other end. Thus, all traffic that uses the interface is automatically encapsulated in packets directed towards that peer. As a result, when an application uses a peering interface, it never needs to tell the traffic where to go; it will always be sent to the interface peer. This is exactly like a UDP networking application that uses the Linux connect function. After calling connect and specifying a destination address, the application no longer has to tell transmitted traffic where to go; Linux automatically directs all traffic sent through the socket to whatever address was specified during the call to connect. In contrast, the user of a non-peering interface on the SPP always has to direct traffic manually by installing filters (described below). Drawing from the Linux programming analogy again, this is the equivalent of a UDP application calling the sendto function and specifying a destination address every single time it transmits traffic.
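
The analogy is easy to see in ordinary Linux socket code. The short, self-contained example below uses only standard UDP calls; the address and port are placeholders.

    /* connect()-style ("peering") versus sendto()-style ("non-peering") UDP. */
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in peer;
        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5000);
        inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);   /* placeholder peer */

        const char msg[] = "hello";

        /* Peering style: specify the destination once, then just send. */
        int s1 = socket(AF_INET, SOCK_DGRAM, 0);
        connect(s1, (struct sockaddr *)&peer, sizeof(peer));
        send(s1, msg, sizeof(msg), 0);

        /* Non-peering style: name the destination on every transmission. */
        int s2 = socket(AF_INET, SOCK_DGRAM, 0);
        sendto(s2, msg, sizeof(msg), 0, (struct sockaddr *)&peer, sizeof(peer));

        close(s1);
        close(s2);
        return 0;
    }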

When SPPs are deployed, peering and non-peering interfaces will give users an easy way to communicate with other SPP nodes (as depicted in Figure 11). The idea is that a node will have the ability to define direct logical links to other nodes. Then, if a user wants to send traffic from one node to a slice instance on another node, all he or she will have to do is use the appropriate peering link and provide a destination port number, without worrying about specifying a forwarding IP address for any of the traffic. Keep in mind that in a real system, the peering interface will most likely be supported by a low-level mechanism (at either the link or the physical layer) that provides a minimum bandwidth guarantee between the peers.

The next interface parameter, link bandwidth, is a much simpler concept. Every interface has a set amount of physical bandwidth, and users can request a share of that bandwidth for their meta-interfaces. This is why there are two bandwidth parameters: total link bandwidth and available bandwidth. While an interface might have a total bandwidth of 10 Mbps, other users might have already reserved 5 Mbps. Thus, the amount of bandwidth available to new users is only 5 Mbps.

A slice can obtain information about the interfaces connected to the SPP by using any of the following functions: get_interfaces, get_ifn (short for get interface number), get_ifattrs (get interface attributes), and get_ifpeer. Note that get_ifpeer will only return a valid IP address if the interface is peering.

To reserve or release interface bandwidth, a slice application calls resrv_fpath_ifbw and reles_fpath_ifbw. Note that the user must reserve enough bandwidth on the available interfaces to accommodate the sum of the bandwidth requirements of all of his or her meta-interfaces.

Managing Meta-interfaces

As discussed previously, each slice has the ability to allocate meta-interfaces with a guaranteed amount of bandwidth for use by a fastpath. These meta-interfaces are overlaid on the physical ones, meaning the total amount of available bandwidth is limited. In the first version of the SPP, the logical interfaces are implemented using UDP tunnels. Thus, to allocate or free a meta-interface, a user calls the alloc_udp_tunnel and free_udp_tunnel functions. To obtain information about any previously allocated meta-interface, the user calls get_endpoint, specifying the meta-interface number. In the future, the SPP creators plan to support a generic tunnel infrastructure that is not limited to UDP. For now, however, users will have to make do with what is available.
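
As a hypothetical sketch of how a meta-interface might be requested: alloc_udp_tunnel and free_udp_tunnel are named above, but their real signatures live in the library documentation, so the parameters and return convention below are assumptions made only for illustration.

    /* Hypothetical prototypes for the UDP-tunnel meta-interface calls. */
    #include <stdint.h>

    int alloc_udp_tunnel(uint32_t fp_id, uint16_t phys_ifn,
                         uint32_t bw_kbps, uint16_t udp_port);    /* hypothetical */
    int free_udp_tunnel(uint32_t fp_id, uint16_t meta_ifn);       /* hypothetical */

    /* Carve a 2 Mbps meta-interface out of bandwidth already reserved on
     * physical interface 1, tunneled over an arbitrary UDP port. */
    static int add_meta_interface(uint32_t fp_id)
    {
        return alloc_udp_tunnel(fp_id, 1, 2000, 40000);
    }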

Managing Communication Endpoints

The SPP allows users to have packets forwarded from a line card directly to their slice on the GPE. This capability is called allocating an endpoint, and the user enables it with the alloc_endpoint and free_endpoint functions. At first glance, allocating an endpoint seems to provide the same functionality as the local delivery socket opened with the allocation of every fastpath. The difference is the path the packets take.

The local delivery socket receives packets forwarded from the NPE. It is used for in-band communication with the slice. This means that only packets traveling over a meta-interface and through a meta-router can use the socket. In contrast, allocating an endpoint bypasses the NPE completely. It allows the user to communicate with a slice without going through a fastpath. Figure 12 shows the path a packet would take using both the local delivery and the endpoint pathways.

What could alloc_endpoint be used for? If a user wanted to run a daemon on his or her slice that listened for incoming traffic, it would be impossible without this functionality. Every line card on the SPP implements Network Address Translation (NAT), meaning that internal slices do not automatically have access to external ports. Because most daemons listen on well-defined port numbers (for example, web servers use port 80), NAT creates a dilemma. Most routers allow users to get around this problem using port forwarding: when enabled, all the traffic sent to the desired port is automatically forwarded to a specified machine. With the SPP, the situation is no different. When an application calls alloc_endpoint, it specifies a communication endpoint, and all traffic arriving at that endpoint is automatically forwarded from the line card to the user’s slice.
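
A sketch of the idea under assumed types: alloc_endpoint is named above, but its real signature is in the library documentation, so the endpoint structure and prototype below are hypothetical.

    /* Hypothetical request to forward an external port to this slice. */
    #include <stdint.h>

    struct endpoint {            /* an IP address, a port, and a protocol number */
        uint32_t ip;
        uint16_t port;
        uint8_t  proto;
    };

    int alloc_endpoint(const struct endpoint *ep);   /* hypothetical prototype */

    /* Ask the line card to deliver traffic arriving on TCP port 80 of the
     * given external address directly to the slice (port-forwarding style). */
    static int expose_web_daemon(uint32_t external_ip)
    {
        struct endpoint ep = { external_ip, 80, 6 /* TCP */ };
        return alloc_endpoint(&ep);
    }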

Filters

Just as the heart of any router is its forwarding table, the key component of any fastpath is its set of filters. In a regular router, packets enter the router, and the network processor does a lookup into its forwarding table based on the packet’s headers. Then, based on the result of the lookup, the router forwards the packet to the appropriate output queue. In a meta-router running on the NPE, the process is exactly the same. The only difference is that packets arrive on meta-interfaces as opposed to physical ones, and the forwarding table consists of a set of filters specified by the user.

A filter consists of three principal components. First, depicted in Figure 13, is the key structure the network processor uses to match incoming packets. Included in the structure are the meta-interface the packet was received on and the type of lookup to perform. Normally, the lookup type bit is 0, meaning the filter is to be used with lookups on packets that originate externally to the SPP. Setting the bit to 1 causes the NPE to use the filter to match packets that originate internally. For example, when a slice transmits packets using the local delivery socket (typically packets the slice received and then performed custom processing on), the packets will arrive back at the user’s fastpath in the NPE. Then, in order for the NPE to figure out where to send each packet, it will use a user-installed filter with the type bit set to 1.

The final component of the key structure is an N-byte value specific to whatever protocol the meta-router happens to be implementing. Because the first version of the SPP supports only IPv4 routers, the N-byte value consists of a destination IP address, a source IP address, source and destination ports, and a protocol field, as shown in Figure 13.

Because the SPP uses general match filters, any part of the N-byte protocol-specific value can be used to match incoming packets. The NPE decides which part of the N-byte field to use by employing an N-byte bit mask specified by the user. For example, to match incoming IPv4 packets based only on their destination IP address, the user would specify a mask with a value of 0xFFFFFFFF in the first four bytes and zero in the rest.
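
The matching rule itself is easy to express in code. The self-contained example below shows general-match semantics over the protocol-specific value; the 13-byte width and field order are assumptions that follow the IPv4 layout described above (4 + 4 bytes of addresses, 2 + 2 bytes of ports, 1 byte of protocol).

    /* General-match check: every bit selected by the mask must agree. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NBYTES 13   /* assumed: dst IP, src IP, src port, dst port, protocol */

    static bool filter_matches(const uint8_t key[NBYTES],
                               const uint8_t mask[NBYTES],
                               const uint8_t pkt[NBYTES])
    {
        for (int i = 0; i < NBYTES; i++)
            if ((pkt[i] & mask[i]) != (key[i] & mask[i]))
                return false;
        return true;
    }

    int main(void)
    {
        /* Match any packet destined for 10.1.2.1, whatever its other fields. */
        uint8_t key[NBYTES]  = { 10, 1, 2, 1 };              /* remaining bytes zero */
        uint8_t mask[NBYTES] = { 0xFF, 0xFF, 0xFF, 0xFF };   /* remaining bytes zero */
        uint8_t pkt[NBYTES]  = { 10, 1, 2, 1,  192, 0, 2, 9,  0x1F, 0x90,  0x00, 0x50,  17 };

        printf("match: %s\n", filter_matches(key, mask, pkt) ? "yes" : "no");
        return 0;
    }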

The difference between the matching system the NPE uses and the matching system of other routers is that the user has to configure everything by hand. Normally, standard IPv4 routers implement longest prefix matching automatically. This means that if the headers of an incoming packet match multiple forwarding table entries (this is possible because of the way the header bit masks are configured), the processor will use the entry whose match has the longest bit length. For example, if an incoming packet has a destination IP address of 10.1.2.1, and one table entry matches 10.X.X.X (the X’s indicate a bit mask of 0 in those bit positions), and another entry matches 10.1.X.X, the router will choose the second entry to make forwarding decisions. The same concept is possible in the meta-routers, but the user has to configure it using filter precedence. In the NPE, filters with a lower filter identification number have a higher precedence. When users configure filters, they can specify the filter ID. Thus, to configure a meta-router so one filter takes precedence over another, the user simply gives the first filter a lower ID number.

Once the network processor completes a lookup, its job is only half finished: it still needs to decide what to do with the packet. The user specifies this behavior in a filter result structure, depicted in Figure 14. Again, this structure is protocol specific, but for IPv4 using UDP tunnels, it consists of a meta-interface to transmit the packet on, a destination IP address and port to address the packet to, an egress queue to place the packet in, and a statistics index (discussed below).

Filter configuration can be accomplished using any of the following library functions: write_fltr, update_result, get_fltr_byfid, get_fltr_bykey, lookup_fltr, rem_fltr_byfid, and rem_fltr_bykey. Each uses a combination of the parameters described above.

Queue Management

While the forwarding table is arguably the most important part of any internet router, no network device would be able to function without the use of packet queues. Every router has a set of ingress and egress queues, so it was essential to give a fastpath the same capability.

The queue interface for the SPP is simple and intuitive. The idea is for users to first request meta-interface bandwidth, and then to spread that bandwidth amongst their queues however they like. The system ensures users will not be able to allocate more queues than they requested during alloc_fastpath, and it ensures that no user will be able to allocate more queue bandwidth than he or she requested on meta-interfaces. When requesting a queue, users specify queue identification numbers that they can subsequently use to configure filters.

Users manage queues using the following library functions: bind_queue, set_queue_params, get_queue_params, get_queue_len.
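
As a hypothetical illustration of spreading reserved bandwidth across queues: the function names above come from the library, but their signatures are not reproduced here, so the prototypes and parameter meanings below are assumptions.

    /* Hypothetical queue setup: split 5 Mbps between a data and a control queue. */
    #include <stdint.h>

    int bind_queue(uint32_t fp_id, uint16_t queue_id);                  /* hypothetical */
    int set_queue_params(uint32_t fp_id, uint16_t queue_id,
                         uint32_t bw_kbps, uint32_t qlen_pkts);         /* hypothetical */

    static int setup_queues(uint32_t fp_id)
    {
        /* 4 Mbps / 256 packets for data, 1 Mbps / 64 packets for control.
         * The queue IDs chosen here can later be referenced from filters. */
        if (bind_queue(fp_id, 1) != 0 || set_queue_params(fp_id, 1, 4000, 256) != 0)
            return -1;
        if (bind_queue(fp_id, 2) != 0 || set_queue_params(fp_id, 2, 1000, 64) != 0)
            return -1;
        return 0;
    }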

Statistics Gathering

Using the statistics interface, a user can see all the traffic flowing through a specific fastpath. Essentially, a statistic is simply a set of counters that keep track of certain events within the meta-router. In the NPE, statistics are implemented as an array. Each entry in the array consists of a set of four counters as shown in Figure 15. Two of the counters increment for data before it reaches the queues, while the other two increment for data after it leaves the queues. Within those two groups, one of the counters increments by the number of bytes it has seen, while the other increments by the number of packets it has seen. To access the counters, the user specifies the index in the stats array as well as a set of flags which tells the NPE which of the four counters to return.

The most practical use of statistics is to determine how many packets (or bytes) have hit a given filter. When creating the filter, an application can specify a statistics index. Then, it can subsequently call read_stats to see how many packets have matched that filter up to that point.
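
In code, counting filter hits might look like the sketch below. read_stats is named above; its signature, the flag constant, and the counter width used here are assumptions that mirror the description of the four counters.

    /* Hypothetical use of a statistics index shared with a filter result. */
    #include <stdint.h>
    #include <stdio.h>

    #define STATS_PRE_QUEUE_PKTS 0x1   /* hypothetical flag: pre-queue packet counter */

    uint64_t read_stats(uint32_t fp_id, uint16_t stats_index, uint8_t flags);   /* hypothetical */

    static void report_filter_hits(uint32_t fp_id, uint16_t stats_index)
    {
        /* stats_index is the same value placed in the filter's result structure. */
        uint64_t pkts = read_stats(fp_id, stats_index, STATS_PRE_QUEUE_PKTS);
        printf("packets matching the filter so far: %llu\n", (unsigned long long)pkts);
    }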

A user can also configure the SPP to send periodic statistic vectors. The set of counters displayed in Figure 15 would only be one entry in such a vector. The function create_periodic allows the user to specify a time interval and the vector size. Then, the SPP will fill in a vector entry every time the given time interval elapses. Because the vector is really a ring buffer, the SPP will overwrite the first entries whenever the vector fills up.

In addition, when using create_periodic, the user can specify a UDP port number and a function callback to use when sending the statistics vector. One of the other parameters is a one bit field for the “model type”. The SPP supports two models, push and pull. The push model is exactly as was discussed previously. The fastpath will “push” statistic vectors to the slice at the specified frequency. With the pull model, an application has to explicitly call the get_periodic function to obtain the statistics vector.
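
A hypothetical sketch of the push model follows. create_periodic and set_callback are named in this section, but the interval units, vector length, port, model flag, and callback signature below are all assumptions for illustration.

    /* Hypothetical periodic-statistics setup using the push model. */
    #include <stdint.h>

    typedef void (*stats_cb_t)(const uint64_t *vector, unsigned entries);      /* hypothetical */

    int create_periodic(uint32_t fp_id, uint32_t interval_ms, unsigned vec_len,
                        uint16_t udp_port, int push_model);                    /* hypothetical */
    int set_callback(uint32_t fp_id, stats_cb_t cb);                           /* hypothetical */

    static void on_stats(const uint64_t *vector, unsigned entries)
    {
        (void)vector; (void)entries;   /* examine the newest ring-buffer entries here */
    }

    static int start_push_stats(uint32_t fp_id)
    {
        /* One vector entry per second, 32 entries in the ring, pushed to the slice. */
        if (create_periodic(fp_id, 1000, 32, 6000, /* push = */ 1) != 0)
            return -1;
        return set_callback(fp_id, on_stats);
    }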

The following library functions can be used to gather statistics: read_stats, clear_stats, create_periodic, delete_periodic, set_callback, get_periodic. Note that some of the parameters listed above are configured using a one byte flag field. The format of the flag is shown in Figure 16.

Reading and Writing Memory

The final capability of a user’s meta-router is the ability to read and write chunks of SRAM. It is accessed using the mem_write and mem_read library functions.
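
A short hypothetical sketch: mem_write and mem_read are named above, but their real parameters are in the library documentation, so the offset-and-length style prototypes below are assumptions.

    /* Hypothetical SRAM access through the memory interface. */
    #include <stdint.h>

    int mem_write(uint32_t fp_id, uint32_t offset, const void *buf, uint32_t len);   /* hypothetical */
    int mem_read(uint32_t fp_id, uint32_t offset, void *buf, uint32_t len);          /* hypothetical */

    static int store_and_check(uint32_t fp_id)
    {
        uint32_t value = 42, readback = 0;
        if (mem_write(fp_id, 0, &value, sizeof(value)) != 0)
            return -1;
        if (mem_read(fp_id, 0, &readback, sizeof(readback)) != 0)
            return -1;
        return readback == value ? 0 : -1;
    }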

References

[JT07] Jonathan Turner, Brandon Heller, et al., “Supercharging PlanetLab – a High Performance, Multi-Application, Overlay Network Platform,” ACM SIGCOMM, August 2007.

Figures

Figure 1: Two directly connected nodes.

Figure 2: Two nodes connected through the internet.

Figure 3: Logical overlay network.

Figure 4: A theoretical layout of PlanetLab nodes in the United States.

Figure 5: A basic IPv4 router. Routers that implement other protocols use a different forwarding table, but their components are essentially the same.

Figure 6: Multiple fastpaths running on the NPE inside an SPP node. Each fastpath is also a logical router.

Figure 8: Local delivery and exception traffic is sent to the slice via an internal UDP tunnel (represented by the solid and dashed lines).

Figure 9: UDP/IPv4 meta-net packet consisting of the meta-net header and the original datagram.

Figure 10: UDP/IPv4 meta-net packet with the forwarding key attached.

Figure 11: Logical depiction of a peering and a non-peering interface.

Figure 12: The different routes that packets can take. The solid path would be used by packets arriving on a meta-interface which match a local delivery filter in the NPE. The dashed line represents the route taken by packets addressed to this SPP node after the user has called alloc_endpoint.

Figure 13: The format of the lookup key structure with an IPv4-specific N-byte value.

Figure 14: The format of the filter result structure for an IPv4 meta-router using UDP tunnels.

Figure 15: The statistics array in the NPE.

Figure 16: Format of the statistics flag field.
