The Netgraph Networking System

Introduction

Netgraph is a modular, kernel-based networking system for FreeBSD. It supports many protocols, including raw HDLC, Frame Relay, Cisco HDLC, and synchronous and asynchronous PPP. Netgraph is fast, extensible, and robust, but its main benefit for developers is that it is very modular and maintainable.

The components in the netgraph system are called nodes. Each node performs a single, relatively simple task. Nodes are connected together by joining a pair of hooks, one from each node, forming the edges of the graph; more complicated protocol combinations are built up from these connections. Data packets (contained in mbuf chains) flow from node to node, one hop at a time, with each node processing the data along the way.

Netgraph also supports control messages, which are sent directly from one node to another (the two nodes need not be connected via hooks). Control messages are used to modify the graph, configure individual nodes, and retrieve status information.

Each node is an instance of a specific node type, which defines the properties of that node: what hooks the node supports, what the node does with data received on each hook, and what control messages it understands.

Netgraph is written to be fast. Everything runs at splnet() and mbufs are passed between nodes by function calls. There are no queues or mailboxes, except when required: netgraph provides automatic queueing when an mbuf is sent from some other priority level (such as splimp()); nodes may also choose to use queueing for other reasons.

Where can I get it?

The latest netgraph tarball is here. This is version 8.

What it's good for

Netgraph makes setting up and combining all kinds of networking protocols very easy. For example, suppose you have a synchronous device sr0. Once the driver has been netgraph enabled (see below), this device appears as a netgraph node.
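For instance, once the sr0 node exists, the ngctl program (described below) can inspect the graph. A sketch, assuming an sr0 device is present; the exact output format depends on the netgraph version:

```shell
# List all nodes currently in the graph.
$ ngctl list

# Show the sr0 node's type, ID, and any connected hooks.
$ ngctl show sr0:
```

A node name followed by a colon (sr0:) addresses that node directly; appending a hook name (sr0:rawdata) addresses the node at the far end of that hook, as the examples below show.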
Then running practically any combination of protocols over the synchronous line is easy. Configuring netgraph is done with the ngctl program. Some examples...

Here is how you would set up the card for Cisco HDLC:

$ ngctl mkpeer sr0: cisco rawdata downstream
$ ngctl mkpeer sr0:rawdata iface inet inet
$ ifconfig ng0 1.2.3.4 5.6.7.8

In the first step, the sr0 node (representing a raw synchronous interface) is connected to a new node of type cisco, which performs the Cisco HDLC protocol. The rawdata hook of the sr0 node is connected to the downstream hook of the new cisco node. Next, the cisco node is connected to a new iface node, which is both a node and a point-to-point interface. Finally, the interface associated with the iface node, which is called ng0, is configured with local and remote IP addresses using the normal ifconfig(8) command.

Note that nothing in the sr0 driver needs to know anything about interfaces, Cisco HDLC, or any other protocol. Any device driver that is netgraph enabled could have Cisco HDLC running over it.

Here's a slightly more complicated example. This is how you would set up the card for RFC 1490 compliant IP over a frame relay connection configured for ITU Annex A LMI, using DLCI 16:

$ ngctl mkpeer sr0: frame_relay rawdata downstream
$ ngctl mkpeer sr0:rawdata lmi dlci0 annexA
$ ngctl mkpeer sr0:rawdata rfc1490 dlci16 rfc1490
$ ngctl mkpeer sr0:rawdata:dlci16 iface inet inet
$ ifconfig ng0 1.2.3.4 5.6.7.8

The frame_relay node is connected to the lmi node (which performs link management for frame relay) via DLCI 0, and to the rfc1490 node via DLCI 16. In turn, the inet hook of the rfc1490 node connects to the inet hook of the iface node. So a ping 5.6.7.8 would result in the IP frame being sent out the ng0 interface, where it gets RFC 1490 encapsulated, then frame relay encapsulated for DLCI 16, before finally being transmitted out on the wire.
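Graphs built this way can also be torn down with ngctl. A sketch, using the Cisco HDLC example above and assuming the iface node can be addressed by its interface name (ng0:):

```shell
# Shut down the iface node; the ng0 interface disappears.
$ ngctl shutdown ng0:

# Shut down the cisco node, reached via sr0's rawdata hook.
$ ngctl shutdown sr0:rawdata
```

Shutting down a node disconnects all of its hooks; non-persistent nodes such as the cisco node then go away, while the persistent sr0 device node remains, ready to be reconnected.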
In the meantime, the lmi node is sending and receiving periodic maintenance packets with the frame switch at the telco central office.

Netgraph is also great for PPP. Right now, FreeBSD has two user-mode PPP implementations and two kernel ones (async and sync). This is necessary because currently there is no easy way to modularize the various link-dependent and link-independent parts of the protocol. Netgraph can consolidate all of these, achieving fully kernel-level routing for IP traffic while doing all the PPP control operations from a user-mode daemon. A single user-mode daemon can support asynchronous serial PPP, synchronous serial PPP, and multi-link ISDN. Until now, it has been impossible to combine the speed of the kernel implementations with the ease of maintenance of the user-mode implementations.

But most importantly, netgraph makes code maintenance much easier. For example, you can perform Van Jacobson TCP header compression over SLIP, PPP, raw HDLC, etc. As it stands now, each of those protocols has to figure out how to use and integrate the VJ compression code in the kernel. With netgraph, you can implement Van Jacobson compression once, in a completely modular fashion. Then integrating it with other protocols becomes much easier, because it adheres to the netgraph framework, which only has to be learned once and is easy to understand. Moreover, if somebody wants to use Van Jacobson compression in some new protocol configuration, no kernel changes are necessary; they just connect the relevant nodes together. Nobody has to muck around in the kernel to make this a supported configuration.

Each node type has its own manual page describing exactly what it does. Here is the man page for the Van Jacobson compression node type (slightly mangled by the HTML translator).
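To make that modularity concrete: adding VJ compression to a graph is just another mkpeer, with no kernel changes. The following is purely an illustrative sketch; the node name ppp0: and both hook names are hypothetical (the vjc man page gives the real hook names):

```shell
# Hypothetical: hang a new vjc node off an existing node "ppp0:".
# "txdata" (on ppp0) and "ip" (on the new vjc node) are
# illustrative hook names only.
$ ngctl mkpeer ppp0: vjc txdata ip
```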
Components

Netgraph consists of:

* The base kernel netgraph code
* Implementations of the various device-independent node types (described below)
* Netgraph-enabled ar and sr synchronous card drivers
* A user library for using the socket node type
* Two user programs for configuration and monitoring, ngctl and nghook
* Thorough documentation (still being developed)

Documentation

Netgraph is fully documented in a series of man pages. The system itself is described by netgraph(4). The user library is described by netgraph(3). The ngctl configuration program is described by ngctl(8). The nghook utility program is described by nghook(8).

Node types

The currently implemented node types are:

* socket(8). The socket node type allows user-mode programs to participate in the kernel netgraph system via the BSD socket interface.
* iface(8). The iface node type allows netgraph data to appear at a system networking interface. It currently supports the IP, AppleTalk, IPX, and NS protocols, as well as the Berkeley Packet Filter (BPF).
* frame_relay(8). The frame_relay node type implements the frame relay protocol.
* lmi(8). The lmi node type implements the frame relay link maintenance (LMI) protocol. It knows how to do the ITU Annex A, ANSI Annex D, and Group-of-Four variants, and supports LMI type auto-detection.
* rfc1490(8). The rfc1490 node type implements RFC 1490 protocol multiplexing.
* cisco(8). The cisco node type implements the Cisco HDLC protocol.
* ppp(8). The ppp node type multiplexes PPP frames according to their PPP protocol number.
* async(8). The async node type converts asynchronous serial data into synchronous frames using PPP asynchronous encoding/decoding as described in RFC 1662. It is usually used in conjunction with the tty node type.
* tty(8). The tty node type is also a line discipline. It allows netgraph to transmit and receive serial data over a tty device. It is usually used in conjunction with the async node type.
* UI(8).
The UI node type adds/extracts the unnumbered information byte (0x03) to/from packets.
* tee(8). The tee node type allows "snooping" on a netgraph connection.
* vjc(8). The vjc node type performs Van Jacobson TCP header compression.
* echo(8). The echo node type is a testing/debugging node type that echoes back all received packets and control messages.
* hole(8). The hole node type is a testing/debugging node type that silently discards all received packets and control messages.

ar and sr driver nodes

The ar and sr synchronous drivers have been enhanced to work with netgraph. During probing, each device instance appears as a persistent netgraph node with a single hook which can be used to transmit and receive raw frames.

ether node type

The supplied patches add netgraph compatibility to existing Ethernet interfaces. The netgraph nodes can be configured to steal all incoming packets, or just those packets unclaimed by the existing Ethernet-supported protocols. Packets sent are transmitted exactly as submitted, and received packets are delivered exactly as received off the wire. This node type must therefore be used with a node that understands Ethernet headers or that expects raw packets.

Installing netgraph

Right now netgraph is supplied as a collection of files and patches against FreeBSD 3.0-stable and 4.0-current. The base netgraph code, as well as each node type, can either be compiled into the kernel or installed as a KLD module. The latest netgraph tarball is here.

Work in progress

A new version of mpd (a multi-link user-mode PPP daemon) supporting netgraph will be released soon. This release will still use the tunnel interface, though it will have the advantage over the existing user-mode ppp that it only does read/write operations on a per-packet basis.
The next version will use netgraph iface nodes instead of tunnel interfaces; this will put its performance on par with kernel PPP -- as all IP routing is done completely within the kernel -- while providing a much more flexible and extensible PPP implementation.

The various PPP compression and encryption algorithms are natural candidates for new netgraph node types. These would do the actual work, while the user-mode daemon would handle the associated control protocols (CCP, ECP, etc.).

We need to write a SLIP node type (we just haven't gotten to it yet). This should be pretty trivial.

We hope to netgraph-enable other device drivers and/or encourage others to do so. Some candidates would be ATM drivers, the i4b ISDN driver code, xDSL drivers (when they exist), etc.

We don't actually own any synchronous cards supported by FreeBSD, so we need testers for the ar and sr drivers. So far, the ar driver has been shown to work. We would like to netgraph-enable other synchronous drivers as well. Contributors and testers are welcome!

Future directions

Currently there are several device drivers in FreeBSD that present networking interfaces or interpret protocols (like Ethernet Ethertypes). This means that lots of interface- and protocol-related code is duplicated all over the kernel. By using netgraph, networking device drivers could be left alone to do what they know how to do best -- talk to the hardware -- allowing the user to determine how (i.e., with what protocol) they will be used. If you want to do something wacky like PPP over 100baseT Ethernet, the kernel shouldn't be stopping you.

Netgraph will make it easy to handle new protocols as they develop. For example, if the proposed Always On/Dynamic ISDN (AODI) becomes popular, then by plugging together an X.25 node type with the existing PPP node type and user daemon, we've got support for it.

For more information

The current place to discuss netgraph is the FreeBSD networking mailing list at freebsd-net@freebsd.org.
Who did it

Netgraph was invented and developed by Julian Elischer and Archie Cobbs at Whistle Communications for use in the Whistle InterJet. It has been thoroughly tested by real-world customers in thousands of InterJets for over two years.

Whistle strongly believes in the symbiosis between open source software and industry. Check out our Open Source web site.