From: CSBVAX::CSBVAX::MRGATE::"SMTP::CRVAX.SRI.COM::RELAY-INFO-VAX"  3-MAR-1989 01:10
To: MRGATE::"ARISIA::EVERHART"
Subj: Thoughts On Twisted Pair And Fiber Optic Ethernet Networks...

Received: From KL.SRI.COM by CRVAX.SRI.COM with TCP; Sun, 26 FEB 89 14:02:21 PDT
Received: from central.cis.upenn.edu by KL.SRI.COM with TCP; Sun, 26 Feb 89 13:35:09 PST
Received: from LINC.CIS.UPENN.EDU by central.cis.upenn.edu id AA02643; Sun, 26 Feb 89 16:31:18 -0500
Received: from XRT.UPENN.EDU by linc.cis.upenn.edu id AA07918; Sun, 26 Feb 89 16:38:49 EST
Posted-Date: Sun, 26 Feb 89 16:35 EDT
Message-Id: <8902262138.AA07918@linc.cis.upenn.edu>
Date: Sun, 26 Feb 89 16:35 EDT
From: "Clayton, Paul D."
Subject: Thoughts On Twisted Pair And Fiber Optic Ethernet Networks...
To: INFO-VAX@KL.SRI.COM
X-Vms-To: @INFOVAX,CLAYTON

The recent question(s) concerning the use of the 'newer' forms of Ethernet (Enet) prompt the following. In my recent travels, I have come across two forms of Enet that are not typical at this point in time.

The first is a fiber optic based Enet from a company called Fibercom. They use fiber optic cables between 'repeaters', which also function as interface points to the network in general. This network was used to support a large, 36 node LAVc, and several problems came to light.

1. If there is a power failure anywhere a 'repeater' is located, the effect is a broken network. By broken I mean that systems on opposite sides of the impacted segment no longer know of each other. This is a total disaster when a LAVc is involved and the VS2000s are on one side of the power hit and the boot nodes are on the other. One reason for setting the SYSGEN parameter RECNXINTERVAL to a VERY high number (see the sketch after this list). ;-) Nothing like seeing 34 nodes commit suicide via the famous CLUEXIT. This would not happen on a thick/thin wire Enet. Given a power failure in this case, only those nodes without power would disappear; the network would remain intact and useful.

2. The throughput appeared to be at the 10 Mb/s speed expected of an Enet based network. Network meters placed on it confirmed its speed.

3. There was a problem that I blamed on the fiber network but did not fully isolate. The problem was that for short periods of time, less than 10 seconds most of the time, ALL connections would be dropped. Terminal servers would report 'broken' connections to the VAX hosts, and LAVc nodes would lock up. This would happen at least hourly. The part that disturbed me, and made the fiber suspect, was the terminal server connections being broken. These are normally hard to break and require a truly broken network for them to break. When they did break, the service names disappeared from the list of known services. The LAVc nodes, on the other hand, can lock up or lose connections due to a number of things, like SYSGEN parameters out of synch. When this did happen, the recovery was not pleasant. Some nodes CLUEXITed; others would be fine and come back. The terminal servers would allow you to 'c name' and continue on. For the record, I repeat that I did not get a chance to actually prove the fiber was the problem. The gear was moved out of the building before I was able to do so.

4. There is also the problem of LEDs not putting out enough light to drive the signal over the optic cable, or flaws in the cables, or too many taps, thereby reducing the light available to continue the communications from one repeater to the next.
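As a sketch of what I mean by raising RECNXINTERVAL, something along the following lines can be done on a running node. The 600 seconds is only an example figure I am using here, not a recommendation, and the same value should go on every node in the cluster (this is one of those parameters that must not be out of synch).

    $! Raise the cluster reconnection interval so a transient network break
    $! does not immediately turn into a round of CLUEXITs.
    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE ACTIVE
    SYSGEN> SHOW RECNXINTERVAL
    SYSGEN> SET RECNXINTERVAL 600
    SYSGEN> WRITE ACTIVE
    SYSGEN> WRITE CURRENT
    SYSGEN> EXIT
    $! RECNXINTERVAL is dynamic, so WRITE ACTIVE takes effect at once and
    $! WRITE CURRENT keeps it across the next boot.  Add a matching
    $!     RECNXINTERVAL = 600
    $! line to SYS$SYSTEM:MODPARAMS.DAT so the next AUTOGEN run does not
    $! quietly put the old value back.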
On the good side.

1. There is very little problem getting this type of network to pass any TEMPEST testing that may be required. TEMPEST is a DoD requirement to reduce/eliminate RF broadcasting of signals for reception and analysis by 'unfriendlies'.

2. No problems going through an area where RF 'noise' is a problem. Since the transmission is light based, RF noise has no effect.

3. The topology is essentially a 'ring' network of fiber cable with the repeaters providing the access points. This system used two cables, one transmit and the other receive. With the addition of another pair of cables between each repeater, the repeaters could be replaced, in the future, with FDDI equipment and support a 100 Mb/s transmission rate. The second pair, and maybe more, is an item to be concerned with during the installation phase. The future, whenever that is, appears to be FDDI, and four cables are needed for that.

The second type of Enet that I am currently involved with is a twisted pair based system with an even 'newer' wrinkle. The network is from a company called Synoptics, and there is NO fiber optic cable in the network. The network topology is a 'star' configuration, and there is what Synoptics calls a 'concentrator' box placed wherever there is a significant number of network connections. What the concentrator provides is the elimination of the minimum distance between backbone taps, which can also be read as the elimination of the backbone itself, at least the thick wire portion. The cable used is 'normal' twisted pair run from a central point to each of the locations where a 'tap' would be needed. At each of the ends there is a flat box, about 5 inches square and 1 inch high, that 'talks' with the concentrator and converts the communications from thick/thin to twisted pair for transmission to the concentrator and back out. Status lights on both the concentrator and the 'local' converter indicate whether the circuit is functional.

At the present time, there is a 42 node LAVc and several standalone systems running on this network without ANY problems to date (about 3 weeks). Shortly there will be 18 clusters, each with up to 42 nodes, on this same network. No problems are expected. ;-) (They never are!!) Enet bridges will be used to reduce overall network traffic, as each cluster will be on its own concentrator and very little inter-cluster communication will be done. This was installed in an 'office' environment.

Current drawbacks/concerns.

1. Consideration for RF noise generation is always a leading issue, both in the installation and day to day use. My current concern, although it has not happened that we can tell, is with the two way radios that the guards use to mumble to one another. Nothing like several watts of radio waves being sent through the building and the twisted pair cables. Once again, we have not found this to be a problem, yet.

2. Being a star topology requires wires run from each end node to a central point. This can raise the installation cost.

Good points.

1. Appears to work under a good load. A 42 node LAVc does generate Enet traffic.

2. Elimination of the thickwire backbone reduces the size of the room needed to hold the backbone if a lot of connections are required. The use of the concentrator box also eliminates drilling into the backbone for the installation of an H400X. I have always held my breath when a new H400X goes in, with the fear that the cable would be split by the drill and some serious down time would be needed while a new backbone was installed and TDR'd.
Guessing which floor tile to pick up also becomes a misty memory when searching for the correct tap, or deciding where to put a new one in.

3. Multiple concentrators can be connected through the normal thick/thin wire or any other allowable Enet hardware. This provides for a very large network that takes up very little space, as the concentrators can be rack mounted.

As for the cost of both of these types of networks, I have no data. Both were installed prior to my using them, and by companies other than the one I work for.

If there are any new developments with this twisted pair network, I shall write about them, as appropriate.

pdc

Still alive and kicking...
Paul D. Clayton
Address - CLAYTON%XRT@RELAY.UPENN.EDU

Disclaimer: All thoughts and statements here are my own and NOT those of my employer, and are also not based on, nor do they contain, restricted information.