Networking

This chapter covers the following topics:

    Introduction
    Planning your network
    Network cards and drivers
    Configuring a boot server
    Booting a node from a QNX network
    Booting a node using a BOOTP server
    Booting a node from its own hard disk
    Multiple boot servers
    Network examples
    Network diagnostics

Introduction

QNX is a network-distributed operating system. Each computer QNX runs on is called a node. A single computer can be considered a one-node network.

The QNX Network Manager recognizes a wide range of protocol packets and passes them to their appropriate destinations transparently.


Note: If you need full TCP/IP-based network communications between a QNX machine and a non-QNX machine, you'll have to purchase and install TCP/IP for QNX.

Logical node IDs

All QNX nodes are assigned a unique logical node ID represented by a positive integer. The operating system uses these identifiers to communicate with other nodes on a network.

You should try to assign logical node IDs in an uninterrupted sequence starting at 1 (e.g. 1, 2, 3, 4, 5, ...). There are two reasons for this:

Logical network IDs

Nodes are connected via one or more physical networks. Each network in an installation is a separate communications link. Networks, like nodes, are assigned logical network IDs that must be unique.

A single network (the most common case) will default to logical network ID 1. If you have a second network, assign it network ID 2.

Physical node IDs

Although QNX processes deal with logical node IDs, the network cards themselves communicate with physical node IDs. This physical ID is typically contained within the network card itself. The format of the physical ID depends on the network type (e.g. Ethernet, Arcnet, Token Ring) and is invisible to most applications. Physical IDs must be unique within a network, but they don't have to be unique across all networks in an installation.

Boot servers and booting nodes

A QNX LAN is essentially a peer-to-peer network that you set up in two phases:

Any QNX node that has a hard disk may boot from its own hard disk. A boot server is simply a node that services boot requests from other nodes on a network. Usually a node is chosen to be a boot server because it's rarely rebooted.

When a node boots from the network, it obtains an OS image from the boot server. This boot image is loaded into memory at the booting machine, which typically inherits the following parameters from the downloaded boot image:

You can set up your network with multiple boot servers to provide fault-tolerance (in case your primary boot server becomes inoperative) and to distribute the boot resources to avoid boot bottlenecks (aka "bootlenecks").

How network booting works

Network cards in a booting node issue boot requests by either:

    contacting a specific boot server whose node ID is stored on the card, or
    broadcasting a boot request to any boot server that has been configured to respond.

The QNX Arcnet card, which has nonvolatile RAM that can store the node ID of the server, uses the specific boot method. Most other network cards such as Ethernet and Token Ring don't have the means to store the physical node ID of the server, so they must use the broadcast method.

Network bridging between QNX LANs

Any QNX network node can act as a bridge between two different QNX networks, as long as they're both IEEE 802-type networks. For example, QNX network bridging lets you connect an Ethernet network to a Token Ring network or to an FDDI network. This is possible because QNX uses the same packet format and protocol on all IEEE 802-based networks. All you have to do is connect two different networks to the same node. The node can be a boot server or simply a booting machine.


Note: QNX can't create a bridge to an Arcnet network.

Planning your network

One network or more?

If you're setting up a small network, it will probably need only a single boot server, typically node 1. But if you're setting up a larger network that will span several departments, you should set up a boot server for each department. The users of each department will still be able to access files (subject to file permissions) in other departments.

For example, let's assume you have three departments: R&D, Marketing, and Operations. You may elect to have each department boot from its own boot server, which could also act as the department's major file server.

This setup creates a fault-tolerant environment -- a failure in one department won't stop another department from booting. For large numbers of nodes, this practice should also reduce the potential for a boot bottleneck when people in the office start work first thing in the morning.

Multiple network links

As described in System Architecture, having more than one network link in a computer provides fault-tolerance and increased throughput.

Communicating across networks

In QNX, a node can communicate with multiple networks simultaneously. Consider the following diagram where four computers are linked via two networks:

Four computers, one network

In this fault-tolerant configuration, each of the four nodes can communicate directly with any other node through either network 1 or network 2.

Compare this to the following setup, where nodes 1 and 3 are connected to separate networks, and node 2 is common to both networks. Nodes 1 and 3 aren't directly connected to each other, but they can both access node 2.

Four computers, two networks

If node 2 were connected to two IEEE 802-type networks, QNX network bridging would automatically bridge the two networks through node 2.

Setup considerations

You may want to prepare a diagram for your network, similar to the one below. This example diagram shows a network consisting of five nodes. One of the nodes is the boot server and three are nodes that will boot from and obtain their files from the boot server. Node 5 boots from its own hard disk.

Five nodes, one network

We've assigned a logical node ID to each node; the boot server is node 1. Note that node 4 will act as a bridge between the Ethernet network (logical network ID 1) and the Token Ring network (logical network ID 2).


Note: You can't boot across a bridge. In the previous diagram, for example, node 5 can't boot from node 1.

Network cards and drivers

Setting up a network typically involves installing a network card in your computer and connecting the card via a cable to other computers that have the same type of network card. Ethernet, Arcnet, and Token Ring are all popular network types.

In addition to the network card, you need the QNX network driver that supports it. All QNX network drivers are documented in the Utilities Reference. Their names start with the prefix "Net."

Cards from different vendors for the same network media (e.g. Ethernet) may require their own special drivers. But often a driver may support cards from more than one company if the cards provide a similar hardware interface.


Note: For information on the hardware installation/configuration of a network card and the physical connection between network cards, see the docs provided with the card.

Determining physical node IDs

The method you use to determine the physical node ID of a network card depends on the type of card you have. In many cases, you'll find a label with the address of the card in the box. Sometimes switches or jumpers on the card determine the physical node ID of the card.

Ethernet & Token Ring

Every Ethernet or Token Ring card is shipped with its own physical node ID built into the card. This ID, which is unique worldwide, is 48 bits long in order to conform to the IEEE 802 standard.

On Ethernet cards, the ID may be printed on a label somewhere on the card or on the box. Typically, diagnostic software is used to display the ID. The format of the ID may vary. For example, the following all represent the same ID:

0000c0 4a9330
0000c04a9330
00 00 c0 4a 93 30
00:00:c0:4a:93:30
0000 c04a 9330

Arcnet

Every Arcnet card requires that you program a physical node ID into the card. Depending on the manufacturer, this is done by a DIP switch or is configured in nonvolatile RAM through a menu-driven interface at boot time.

The physical ID is 8 bits in length. You can't use physical ID 0, which is reserved. Since you can choose the physical node ID, we recommend that for a single Arcnet network, you make the physical node ID the same as the logical node ID. This will help keep things simple.

Assigning logical node and network IDs

Since QNX processes use logical IDs, the Network Manager (Net) must map these logical IDs to the physical IDs used by the hardware. This mapping is defined in the /etc/config/netmap file. Each line in this file defines a single-node mapping: the logical node and network IDs are followed by a physical node ID. For example, the following line would map logical node 8 on logical network 1 into a 48-bit physical ID of 0000c04a9330:
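
8   1   0000c04a9330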

Logical and physical node IDs

You can separate the logical and physical IDs by Space or Tab characters. The logical IDs are in decimal, while the physical IDs are in hex, unless preceded by a "t", in which case they're also in decimal.

For example, in the case of Arcnet, it's convenient to express the physical node ID in decimal form:

15   1   t15
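
Without the "t", the physical ID would have to be given in hex (decimal 15 is hex f), so the equivalent entry would be:

15   1   f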

The physical ID can't exceed 48 bits. The network hardware determines the number of bits needed.

Configuring a boot server

This section shows you how to set up a single boot server. Once you've set up a single server, you can set up the booting nodes. If your site requires additional boot servers, see "Multiple boot servers."


Note: This procedure assumes that you've already done the basic installation of QNX onto your machine's hard disk as described in the Basic Installation chapter. Therefore, your machine has already been configured to boot from its own hard disk.

Turning a QNX node into a boot server involves these steps:

  1. Install the network card(s).
  2. Install your network licenses.
  3. Start the Network Manager & network driver(s).
  4. Start nameloc.
  5. Start netboot.
  6. Modify the sysinit.node file.
  7. Modify the netmap file.
  8. Modify the netboot file.

Step 1 -- Install the network card(s)

Turn off the computer, then install your network card according to the instructions provided with the card. If you can, note the physical node ID of the card. When you're finished, reboot the computer.


Note: If you weren't able to note the physical node ID when installing the card, or if the ID wasn't on the card, running netmap without any options (after the network driver starts) will display the physical node ID.

Installing multiple cards

When you place two similar cards in the same machine, don't rely on the default autodetect for I/O ports, etc. Be sure to verify that the settings are unique -- you might have to specify some of this information using command-line options to the network driver. You must configure the cards such that they have no hardware conflicts.

Read the technical note for the type of card(s) you're installing before you install the card. The files in the /etc/readme/technotes directory contain useful tips, such as how to set hardware interrupts so that you can avoid potential conflicts.

Step 2 -- Install your network licenses

You'll need a network license for each node in your network. When you installed QNX onto the hard disk initially, a license was installed for at least a single node. You can verify how many licenses were installed using the licinfo command. See the Licensing chapter for more information about installing and activating licenses.

Step 3 -- Start the Network Manager & network driver(s)

If you told install that this node is to be a boot server, the appropriate network services will have been started already. Skip this step and go to the next one.

Starting the Network Manager & one network driver

The Network Manager (Net) and the network driver for your network must be running to access logical network 1.

The names for all network drivers are of the form Net.xxx, where xxx represents the appropriate driver. For example, if you have an NE2000-compatible Ethernet card, you'd enter something similar to the following:

Net &
Net.ether1000 &

To find out which network drivers are available on your system, enter this command:

ls /bin/Net.*
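
The output depends on which drivers are installed; assuming (for illustration only) the drivers used in this chapter's examples, it might look something like this:

/bin/Net.arcnet
/bin/Net.ether1000
/bin/Net.ether2100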

Starting two or more network drivers

If you installed two network cards, you'll have to start a driver for the second network.

By default, all network drivers connect to logical network ID 1. When connecting multiple networks, you have to specify the -l ("el") command-line option to have a driver connect to a network other than logical network 1.

For example, the following three commands in the sysinit.node file of a boot server would start the Network Manager, an Ethernet driver on logical network 1, and a Token Ring driver on logical network 2:

Net &
Net.ether1000 &
Net.tr164a -l 2 &

Note that the Network Manager must be started with the -d option whenever three or more network drivers are started.

If you're not sure which network cards are installed in a machine, you can use the nettrap utility. The following command causes nettrap to scan the machine it's running on for network cards:

nettrap

Caution: The nettrap utility reads and writes to I/O ports and to memory. This testing may cause unintentional side effects in other hardware residing at the tested memory and I/O addresses. If your system is set up to control external machinery, disconnect these devices before running nettrap.

The nettrap utility can also be used to start Net along with the appropriate network drivers (and netmap -f):

nettrap start

Step 4 -- Start nameloc

The nameloc utility runs in the background and provides a network-wide naming service for all processes running under the OS. It must be running on at least one machine on the network -- even if it's the only machine in a standalone "network" -- for licensed products to work:

nameloc &

Upon starting, nameloc polls each node in the network for its list of global symbolic process names. For more information, see the description of the nameloc utility in the Utilities Reference.


Caution: The machine running nameloc must be able to talk to every other machine on the network. Make sure the machine has a complete netmap file. If you run nameloc on a node (e.g. a portable) that doesn't have complete network access, you'll run into all sorts of problems -- wrong licensing info, flawed list of global names, malfunctioning network-wide utilities, etc.

Step 5 -- Start netboot

When a booting node starts up, its boot ROM sends a boot request to its boot server or broadcasts the boot request to any boot server that has been configured to respond. When the Network Manager on the boot server receives a boot request, it forwards the request to the netboot utility. To start netboot, type:

netboot &

When responding to a boot request, the netboot utility accesses the /etc/config/netboot file to determine which build file can be used to generate the OS image for the requesting node.


Note: Running netboot with the -v option (verbose mode) can be useful for troubleshooting. Multiple v's will increase the level of info -- you might consider directing the output to a file or running it on its own console.
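
For example (a sketch only -- the console device here is just an illustration; pick one that isn't otherwise in use), you could run a very verbose netboot on its own console:

netboot -vv > /dev/con2 &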

You may run netboot with the -a option to help automate network mapping (see Step 7):

netboot -a &

In this case, if the /etc/config/netmap file doesn't contain a mapping assignment for the requesting node, netboot will automatically write the required mapping to the /etc/config/netmap file using the next available logical node ID and the logical network ID of the network the requesting node is connected to.
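
For example, if the next available logical node ID were 5 and a new node with a hypothetical physical ID of 0000c0123456 sent its boot request over logical network 1, netboot -a would append an entry along these lines:

5   1   0000c0123456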

Step 6 -- Modify the sysinit.node file

Now that you've started all of the required network drivers and services, the node can function as a boot server. Once you've added the network startup commands to the boot server's sysinit.node file, they'll be started automatically whenever the node is rebooted.

First, copy the boot server's sysinit.node file (e.g. sysinit.1) to altsysinit before you continue:

cp /etc/config/sysinit.1 /etc/config/altsysinit

This saves an alternate copy of the boot server's initialization file. Now add the required entries to /etc/config/sysinit.node. Entries similar to the following are required for a boot server:

Net &
Net.ether1000 &
nameloc &
netboot &

Step 7 -- Modify the netmap file

The /etc/config/netmap file is the default node and network ID mapping file used by the netboot and netmap utilities. This file defines the physical node IDs, the logical node IDs, and the logical network IDs associated with each node.

An example file is shipped with QNX. The file you use must contain the logical node and network IDs for the boot server and for all the networked nodes that must communicate with the boot server. The mapping entries in the file look something like this:

Node ID   Network ID   Network Card Address
1         1            boot_server_physical_address
2         1            node2_physical_address
3         1            node3_physical_address
4         1            node4_physical_address

Note that when you have a single network, the logical network ID is usually "1." All networked nodes must have a unique logical node ID. We suggest that you choose logical node ID 1 for your first (primary) boot server. In the case of a node that's connected to two or more networks, the same logical node ID will have two entries, each with a unique logical network identifier.

You can edit the netmap file manually, or if you started netboot with the -a option, the system will write node mappings to the netmap file automatically the first time you power on each node. The node you power on first will be assigned the next available logical node ID, the correct logical network ID, and the correct physical address. If you specify the -A option, new nodes are assigned the lowest unassigned logical node ID and the correct physical address.

If you have nodes that boot from their own hard disks, you can generate an updated "master" /etc/config/netmap file on node 1 as follows:

  1. At all machines except node 1, create a local netmap file containing the logical-to-physical node mapping to node 1.
  2. At all nodes except node 1, type the following command to update the in-memory network mappings on node 1:

    sin -n1
  3. To save the in-memory network mappings to disk at node 1, type:

    netmap > /etc/config/netmap

If you want to propagate the mappings to other nodes, you can copy the master file from node 1. For example, to copy the master file to node 2, type:

cp //1/etc/config/netmap //2/etc/config/netmap

Mapping multiple networks

To define a node having more than one network card, you would add one entry to correspond with each network. Remember, the logical network IDs must be unique. For example, if your network had a fifth node that was connected to two different networks, you'd add two more entries to the netmap file:

Node ID   Network ID   Network Card Address
5         1            node5_physical_address_card1
5         2            node5_physical_address_card2

For more information about the netmap file, see the netmap utility in the Utilities Reference. For examples showing the mappings of multiple networks, see the "Network examples" section in this chapter.


Note: Every time you edit the netmap file, you should update the in-memory network tables as well:

netmap -f

Step 8 -- Modify the netboot file

When a boot server receives a boot request from a node, the Network Manager forwards the request along with the node's physical node ID and logical network ID to the netboot utility.

With this information, netboot accesses the network mapping file (/etc/config/netmap) to determine the node's logical node ID. The utility then accesses the /etc/config/netboot file to determine which build file can be used to generate an OS image for the node.

If you're using NE2000-compatible cards, your /etc/config/netboot file on the boot server could contain a line similar to the following:

* f=build/ws.ether1000

If you told install during the initial installation that this node is to be a boot server, your netboot file will already contain what you need.

You can modify this f= entry if you'd like QNX to use a custom build file. For example, if you copied one of the build files shipped with QNX, renamed it to ws.mybuild, and then defined additional services in this custom build file, you would change the entry so that it looks like this:

* f=build/ws.mybuild

If a number of nodes are equipped with the same kind of network card, a single entry can be used for all of these nodes. In the example entry above, all (*) nodes on the network would be booted from the same build file (ws.mybuild).

If your boot server will be servicing boot requests from two or more different nodes and/or networks, you can specify different custom build files for some of your nodes. We've provided generic build files suitable for building OS images for different network media such as Token Ring in the /boot/build directory.

For example, let's say you have an eight-node, NE2000-compatible Ethernet network, and that these nodes boot from the netboot entry f=build/ws.ether1000. If you added three more nodes that will boot from the same server using custom build files, you could modify your /etc/config/netboot so that it looks something like this:

9    f=build/ether2100.node9
10   f=build/ws.tr16a
11   f=build/mybuild.node11
*    f=build/ws.ether1000

In the example above, nodes 9, 10, and 11 use custom build files and all other nodes use the default network build file ws.ether1000.


Note: The position of the line with the * is important! Make sure it's the last line in /etc/config/netboot, because processing of the file stops at the first match.

Setting up for broadcast booting

With a broadcasting boot ROM, netboot needs to know whether it should act as primary, secondary, or tertiary server with respect to each node. The utility finds out by reading an optional server mode parameter in the netboot file.

The general syntax of the netboot entries is as follows:

logical_node_ID	f=buildfile | F=imagefile  [server_mode]

where server_mode specifies the role that netboot is to play with respect to a node. If server_mode isn't specified, netboot will act as a primary boot server for the associated booting nodes.


Note: The F=imagefile option lets you name a custom pre-built OS image that can be downloaded to a node that boots from the network. By default, QNX builds an image on the fly using the specified build file. For more information about transmitting a named OS image to the booting node instead, see the section on "Booting a node from the network."

The possible values for server_mode are:

A    primary server (default)
B    secondary server
C    tertiary server

The following example tells netboot to be the primary boot server for all nodes except node 4, for which it will be a secondary boot server:

4  f=build/ws.tr16a      B
*  f=build/ws.ether1000  A

For information about running netboot on multiple boot servers, see "Multiple boot servers."

Booting a node from a QNX network

When a node that boots from a QNX network is powered on, the code in the QNX boot ROM on the machine's network card is executed. At this point, either a specific boot server is contacted (if it's a QNX Arcnet card) or a broadcast boot request is made.

The netboot process on the responding boot server generates a boot image and downloads it into memory on the booting node. The sinit program will then run and begin launching the services listed in the node's sysinit.node file.

Configuring a node that boots from a QNX network involves these steps:

  1. Insert the boot ROM.
  2. Install the network card.
  3. Construct the build file.
  4. Modify sysinit.node.
  5. Boot the node.

Step 1 -- Insert the boot ROM

To boot QNX from an Ethernet or Token Ring card, you must install a QNX boot ROM in the ROM socket on the card as described in the card's documentation.

Note that QNX Arcnet cards come with a special QNX boot ROM already installed. This boot ROM doesn't broadcast a boot request; instead, it contacts a specific boot server.

All other QNX boot ROMs use the broadcast-boot method. When the ROM attempts to boot, it broadcasts a boot request to a server it symbolically refers to as server "A" (primary server). If it gets no response, the ROM will start broadcasting boot requests to server "B" (secondary server), and finally to server "C" (tertiary server).

If two network cards with boot ROMs are installed in a node, the network card whose ROM resides at the higher memory address is the last one found during the "Power On Self Test" (POST) sequence performed by the BIOS. This card will usually be the one your node will use to send the boot request over the network.

Step 2 -- Install the network card

If you haven't done so already, read the technical note that applies to the type of card you're installing (see /etc/readme/technotes).

Follow the manufacturer's installation instructions, and remember to note the physical node ID of the card(s). When booting, the QNX boot ROM will print out the physical node ID.

Step 3 -- Construct the build file

You can customize the services that each node inherits through its build file. All nodes can have a distinct build file, which can be preconfigured and stored on the boot server (in the /boot/build directory) or built on the fly through the buildqnx utility.

By default, images are based on the build files specified in the boot server's netboot file and are built on the fly. When you create a custom operating system image, an F=imagefile entry must replace the standard f=buildfile entry in the boot server's netboot file.


Note: Net and its associated driver must be started in the build file for booting nodes. If a node will need more than two network drivers, you'll need to specify the -d option to Net in the build file. For details, see the Net documentation in the QNX Utilities Reference.

If you choose to modify a node's build file, you should make a copy of the default version and give the copy a meaningful name -- such as ws.eth1000_tr16a for a build file that will contain both Ethernet 1000 and Token Ring network drivers -- then modify the new file.

Step 4 -- Modify sysinit.node

If you'd like to customize the services available to the node, create a sysinit.node file on the boot server for the node and add the required services (see "The system initialization file" in the Basic Installation chapter).

Remember, the node suffix must be the logical node ID of the machine you're customizing.

Step 5 -- Boot the node

The node should now boot from the network. If you have any problems, specify the -v option to Net in the booting node's build file. This will display network errors on the console. It's also a good idea to use the -v option for netboot and for the network drivers (Net.*) as well.

Booting a node using a BOOTP server

To boot a QNX system using the BOOTP Internet boot protocol, you'll need a BOOTP ROM for your QNX client and a BOOTP server (bootpd) for your server. Since the TFTP protocol is used to move the image from the server to the client, you'll also need a TFTP server -- this is usually provided with a BOOTP server on most systems (UNIX, Windows 95/NT). (Our QNX 4 TCP/IP runtime package includes the bootpd server and the tftpd server.)

We support ROMs from Lanworks Technologies (www.lanworks.com). We've tested Lanworks ROMs with the Win32 BOOTP server from Weird Solutions (www.weird-solutions.com) and the QNX 4 bootpd server.

With the Lanworks BOOTP ROMs, the image is placed in memory at 0x10000, so you'll need to run buildqnx with the -b option to specify the RAM address:

buildqnx -b 0x10000

For more information on setting up your QNX 4 BOOTP servers, see the bootpd man page in the TCP/IP for QNX User's Guide.

Booting a node from its own hard disk

Your network may include a node that will boot from its own hard disk. If this is the case, you must do the following before you can boot the node:

    edit the node's build file
    copy the netmap file to the node
    install or transfer a license

Editing the node's build file

The first entries in a build file start the Process Manager (e.g. sys/Proc32) and set the options that Proc32 will assume. The -l ("el") option to Proc32 assigns a logical node ID to the node.

When you installed QNX on the hard disk of this node, a logical node ID of 1 was assumed. You must change the logical node ID associated with the -l option to match the node suffix of the node's sysinit.node file.
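
For example, if this node's initialization file is sysinit.5 (logical node 5), the Proc32 command line in its build file would include -l 5. Shown here as command options only (the surrounding build-file entries are described in the chapter on building an OS image):

Proc32 -l 5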

For more information about building a custom image for a node, see the "Network images" section in the chapter on building an OS image.

Copying the netmap file to the node

A machine that boots from its own hard disk needs a netmap file containing a logical-to-physical node mapping for all nodes that this machine must communicate with.

You can generate an updated "master" /etc/config/netmap file on the boot server as described in Step 7 of the "Configuring a boot server" section.

Installing or transferring a license

In order for a node to boot from its own hard disk, a license must be present on the hard disk. If you installed a new-style QNX license from the distribution media, a license for the node will already be installed in the /.licenses file. If you received a license certificate, you must add the license number to the /.licenses file. Don't forget to activate new licenses (using license -r). For more information, see the Licensing chapter.

Multiple boot servers

Since broadcasts are potentially seen by all nodes on the network, you can run netboot on several nodes, creating multiple boot servers. This lets you associate specific groups of nodes with different boot servers.

Although every netboot process usually receives each broadcast boot request, the utility responds only in accordance with its own unique /etc/config/netboot file, which defines the relationship to each requesting node (e.g. "I am node 5's secondary boot server").

No two boot servers should have the same server_mode entry for any given logical node ID. If this rule isn't followed, every boot server with the same server_mode entry for a given logical node ID will respond to that node's boot request.

Let's now look at a 15-node Ethernet network, with nodes 1 to 9 belonging to the "tech" department and nodes 10 to 15 belonging to the "marketing" department. If you wanted to make node 1 the primary server for "tech," and node 10 the primary server for "marketing," you'd run netboot on both nodes, each with its own netboot file.

On node 1 (//1/etc/config/netboot):

2  f=build/al.1000       A
3  f=build/pat.1000      A
4  f=build/celeste.1000  A
5  f=build/robin.1000    A
6  f=build/vanessa.1000  A
7  f=build/sandy.1000    A
8  f=build/john.1000     A
9  f=build/bubba.1000    A
*  f=build/ws.ether1000  B

On node 10 (//10/etc/config/netboot):

10  f=build/andrew.1000  A
11  f=build/bob.1000     A
12  f=build/liz.1000     A
13  f=build/fred.1000    A
14  f=build/joe.1000     A
15  f=build/betty.1000   A
*  f=build/ws.ether1000  B

In this case, you've also made node 10 the secondary boot server for "tech," and node 1 the secondary server for "marketing."


Caution: Make sure the line beginning with * is the last line in /etc/config/netboot, because the file will stop being processed when the first match is found.

Also, remember that the server_mode in the * line must specify a different role from the one assigned to those same nodes in the netboot files of the other boot servers on the network.


Network examples

Now that we've discussed what's involved in setting up multiple network links, let's look at how to:

    add several Ethernet nodes to an Arcnet network
    set up a fault-tolerant Ethernet network
    set up a private network link

Adding several Ethernet nodes to an Arcnet network

Let's assume the following network layout:

Arcnet/Ethernet example

As you'll recall, the format of the /etc/config/netmap file is as follows:

logical_node_ID logical_network_ID physical_node_ID

For this example, the netmap file would contain the following entries:

1		1		t1
1		2		0000C0 0D9E40
2		1		t2
3		1		t3
4		1		t4
4		2		0000C0 064A2D
5		2		0000C0 7B7113

Note that the logical node IDs are unique and consistent across all interconnected networks.

In order to boot nodes on a QNX network, the netboot utility requires that each node have an associated OS image or build file. Note that nodes booting on the Arcnet network don't use the same build files as nodes booting on Ethernet. Consequently, the /etc/config/netboot file would contain the following entries:

2   f=build/ws.arcnet 
3   f=build/ws.arcnet
4   f=build/ws.arc_ether1000
5   f=build/ws.ether1000

Node 1 has no entry since it boots from its own hard disk. But its sysinit.node file must start both an Arcnet driver and an Ethernet driver.

You could configure node 4 to boot from either Arcnet or Ethernet.


Note: Which network node 4 boots from depends on which network card is found last during the ROM scan at powerup -- the highest addressed card is always found last and is the one the machine will boot from.

In this example, assume we configured node 4 to boot over Arcnet by making sure the Arcnet boot ROM is at the highest address. Since node 4 is on both Arcnet and Ethernet, the node's build file starts Net and both the Net.arcnet and Net.ether1000 drivers. This ensures the boot server can transmit the sysinit.node file on both networks.
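
In command form, the network startup for node 4 would look something like this sketch (logical networks taken from the netmap file above; in practice these lines appear in node 4's build file, whose exact syntax is covered in the chapter on building an OS image):

Net &
Net.arcnet &
Net.ether1000 -l 2 &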

Most build files have to start the Network Manager (Net) and the associated network drivers. If you require three or more network drivers, you'd have to specify the -d option to Net.

Setting up a fault-tolerant Ethernet network

Let's assume the following network layout:

fault-tolerant Ethernet network

The corresponding /etc/config/netmap file would contain the following entries:

1		1		0000C0 0D9E40
1		2		0000C0 064A2D
2		1		0000C0 7B7113
2		2		0000C0 109E40
3		1		0000C0 164C2D
3		2		0000C0 5C9A40
4		1		0000C0 DC5E08
4		2		0000C0 129E40

The corresponding /etc/config/netboot file would contain the following entry:

* f=build/ws.eth1000x2

Since an additional Ethernet driver is required, you need to create a custom build file (e.g. ws.eth1000x2) and add a second copy of the Net.ether1000 driver to the file, specifying the -l 2 option.
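
In command form, the network startup in ws.eth1000x2 would amount to something like this sketch (you may also need card-specific options, as noted in "Installing multiple cards" earlier, so the two cards don't conflict):

Net &
Net.ether1000 &
Net.ether1000 -l 2 &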

Setting up a private network link

If there's a lot of traffic between certain nodes on a network, you can set up those nodes with a private network link in order to offload the communication flow from the main network, as was done with nodes 2 and 3 in this example:

private network link


Note: You may need to "downgrade" the Ethernet driver's bit-transmission rate used between node 2 and node 3 on LAN 1. To do this, on node 2 and on node 3, run the driver for LAN 1 with the -r (media_rate) option set to 100000 (the default is 10000000):

Net.ether1000 -l1 -r 100000 &
Net.ether2100 -l2 &
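
Assuming the main LAN is logical network 1 and the private link between nodes 2 and 3 is logical network 2 (as the -l options above suggest), the /etc/config/netmap entries for nodes 2 and 3 would follow the usual pattern; the addresses shown here are placeholders only:

2   1   node2_physical_address_lan1
2   2   node2_physical_address_lan2
3   1   node3_physical_address_lan1
3   2   node3_physical_address_lan2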

Network diagnostics

The Network Manager maintains a circular buffer of significant network runtime events. You can run the netinfo utility to display the most recent entries in this buffer. The output includes the time of the network event, the associated node number, the event's numerical code, and a brief explanation of what the event means.

For more information about this utility, see its entry in the Utilities Reference.

