From:	SMTP%"RELAY-INFO-VAX@CRVAX.SRI.COM" 25-APR-1994 11:27:50.96
To:	EVERHART
CC:	
Subj:	RE: XON/XOFF program thru LAT overruns my device

Message-Id: <9404241239.AA20433@uu3.psi.com>
Date: Sun, 24 Apr 94 08:26:44 EDT
From: Jerry Leichter <leichter@lrw.com>
To: INFO-VAX@SRI.COM
Subject: RE: XON/XOFF program thru LAT overruns my device
X-Vms-Mail-To: UUCP%"BRADLEY@evax08.pweh.utc.com"

	I'm having a problem with a program that does file transfers using an
	XON/XOFF protocol.   The way the download is supposed to happen is
	this:

	    - The machine that will receive the file sends an XON to signal
	      that it is ready
	    - The host (VAX talking through DEC terminal server) starts
	      sending the file
	    - When the receiving machine fills up with data, it sends an XOFF.
	    - The host stops sending IMMEDIATELY (within 2-15 characters)
	    - When the receiving machine is ready, it sends an XON and the
	      host picks up where it left off.

	That's how it's supposed to work.  In fact, I've had no problem doing
	this from the TT port on the back of my station.  Unfortunately, when
	I connect using an LTA, the data from the host overflows the
	receiving machine by 200+ characters.

	Can anyone suggest how I can get this done?  Colorado swears that this
	is possible but has been unable to help.  The code is pretty long and
	would be hard to cut down, but I'd be happy to provide it on request.
	
	My original theory was to disable flow control on the LAT port and
	then use SETMODE qios to turn off TT$_TTSYNC and turn on TT$_PASSTHRU.
	I then wait for the XON.  This part works just fine.  Once the XON is
	received, I then turn on TTSYNC with the idea that the LAT will then
	take over handling of flow control.  My support person in Colorado
	says that the TTSYNC setting should be passed on to the LAT.  This
	doesn't seem to happen.  I believe that the terminal driver is
	handling the flow control instead of the LAT, which leads to the
	overrun.

a)  You have little hope of doing your own XON/XOFF handling, with the kind of
timing constraints you are talking about, through a LAT link.  LAT is designed
to save bandwidth by holding off on how often it transmits.  When characters
arrive from the terminal, they are simply placed in a local buffer; their
arrival doesn't trigger an attempt to send down the link.  Instead, every
80ms the LAT server will check the buffer and try to send any characters.  At
the VAX end, the same thing will happen.  So your *average* case round-
trip delay is 80ms or so - about 80 character times on a 9600 bps line - even
before you count processing time at the two systems.  Your worst-case
minimum delay is twice as long.
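To make that concrete, here is a quick back-of-the-envelope check (a sketch
only - the 9600 bps line speed and 10-bits-per-character framing are my
assumptions; the original note never states the line speed):

```python
# Rough estimate of how many characters can be "in flight" past an XOFF
# when each end of a LAT circuit batches output on an 80ms circuit timer.
# ASSUMPTIONS: 9600 bps line, 10 bits per character (start + 8 data + stop).

BPS = 9600
BITS_PER_CHAR = 10
CIRCUIT_TIMER_S = 0.080      # default LAT circuit timer, each direction

chars_per_second = BPS / BITS_PER_CHAR           # 960 chars/sec
avg_round_trip = 2 * (CIRCUIT_TIMER_S / 2)       # ~40ms average wait each way
worst_round_trip = 2 * CIRCUIT_TIMER_S           # both timers just missed

print(f"average overrun:    ~{chars_per_second * avg_round_trip:.0f} chars")
print(f"worst-case overrun: ~{chars_per_second * worst_round_trip:.0f} chars")
```

Add server and host processing time on top of the worst case and the observed
200+ character overruns are about what you would expect.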

The "80ms" time is a parameter - the circuit timer - which at least in some
implementations is settable.  However, it's not clear to me that you could
set it low enough to match your constraints.  Of course, that depends on
the line speed, which you haven't told us.  However, since you are currently
seeing what are apparently typical overruns in the 200+ character range, I
doubt you could make this work.

b)  LAT implementations *do* pass the TTSYNC setting through to the server.
Many older terminals would not have sufficient buffering to deal with the
delay of processing XOFF at the host.  It's possible that you've managed to
get your system set up in a way that disables this, but I doubt it.

c)  I suspect the real origin of your problem is that you are *changing* the
TTSYNC setting.  This takes time:  The mode change packet has to be sent to
the server (with the usual up-to-80ms delay), then it has to be acted on.
It's not clear to me that these actions are necessarily done in-line with
data transmission, nor whether the server can necessarily respond to a change-
mode request on a busy line - it may put it off to a quieter time to avoid
internal synchronization problems.  TTSYNC and similar settings are normally
viewed by implementors as mainly "set and forget" kinds of things, which are
changed on timescales on the order of minutes, not milliseconds.  Optimizing
their performance is seen (rightly) as a waste of time.  I think you're lucky
this works with a direct-connected terminal line!

I suspect the only way you're going to meet your specs is to leave TTSYNC on
all the time.  If you use QIO's rather than QIOW's, VMS will automatically
resume sending when the receiver is ready, and your program can go on to do
other things (like perhaps queue up more output).  It can, of course, check
to see whether previous I/O's have completed, or be informed by AST when they
do.
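The shape of that approach - hand a write to the driver, return at once, and
learn of completion through a callback, the role an AST routine plays for a
$QIO - looks roughly like this (an analogy in Python, not VMS code; all the
names here are my own invention):

```python
import queue
import threading

def async_write(data, completed):
    """Queue a write and return immediately; `completed` is invoked when
    the write actually finishes - analogous to an AST firing on $QIO
    completion.  (Illustrative only; not a VMS system service.)"""
    def driver():
        # A real terminal driver would block here while the line is
        # XOFF'ed and resume on XON, with no help from the program.
        completed(len(data))
    threading.Thread(target=driver).start()

done = queue.Queue()
async_write(b"block 1 of the file", done.put)
# The program is free to do other work here - e.g. queue more output -
# instead of stalling in a QIOW until the receiver drains its buffer.
n = done.get()
print("write completed,", n, "bytes")
```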

The limitation is that the program would have no way of being informed of the
arrival of an XON when it was not trying to send data.  You don't describe the
protocol in sufficient detail for me to tell if that's an issue for you, but
I can't really imagine any other reason why you'd go to the trouble of turning
TTSYNC off.  There's really no good solution if this is the case.  The best I
can suggest is to see if there is some kind of no-op you can safely send the
receiver - perhaps a NUL character.  Then ensure that there is ALWAYS data -
even if a no-op - waiting to be sent when the line is XOFF'ed.  When the
receiver XON's the line, your pending I/O will complete, and you'll know you
should proceed.
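A tiny simulation of that trick (purely illustrative - the flow-controlled
"line" here is just a Python event standing in for the terminal driver, and
every name is my own):

```python
import threading

line_xon = threading.Event()   # set = line XON'ed, clear = XOFF'ed
sent = []

def pending_write(data):
    """Stand-in for a queued write the driver holds while XOFF'ed."""
    line_xon.wait()            # completes only once the receiver sends XON
    sent.append(data)

# The line starts XOFF'ed, but we ALWAYS keep something queued - a NUL
# no-op if there is no real data - so an I/O is pending when XON arrives.
writer = threading.Thread(target=pending_write, args=(b"\x00",))
writer.start()
assert not sent                # nothing completes while the line is XOFF'ed

line_xon.set()                 # receiver sends XON
writer.join()
# The pending no-op completed: that completion IS the XON notification,
# and the program knows it should proceed with real data.
print("XON seen, sent:", sent)
```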

d)  Be aware that Ethernet packets do get delayed and lost.  The LAT protocol
will eventually re-transmit lost packets, but the timeout involved is very
long:  By default, 1 second.  On some implementations, you may be able to
lower this, but probably not by very much.  Be *sure* that what you are doing
can survive the occasional 1-second delay.  (You mentioned in one follow-up
note that the receivers were numerically controlled machines.  If you're
trying to control motion - most especially if it's stopping motion for safety
reasons - over this link:  Forget the idea of using LAT.  It's just not suited
for this purpose.)
							-- Jerry
					(Who built systems such as you are
					 working on many years ago, on RSTS,
					 in BASIC PLUS!  Fortunately, we could
					 specify the protocol on the wire.)