/*      @(#)rpc_tli_design 1.11 91/03/11 SMI      */

[Please note that these are Design Notes for the implementation of TI RPC.
They have been updated to reflect the actual implementation, however please
refer to the documentation, manual pages and source code for the actual
details.]

		RPC on TLI in V.4.
		==================

+	Software archaeology.

Bob Lyon defined and implemented the original RPC package to be completely
independent of naming and binding. Then David Goldberg and he
added the portmapper and pmap code to make IP port binding easier and somewhat
standardized. Next they noticed that the common programmer did not wish to
deal with socket creation, binding or connecting. Therefore, they changed the
code to allow passing (a pointer to) a fd that could be -1;  in this case the
RPC code created the socket for the user.   Finally, over the years, both
David Goldberg and Brad Taylor added simplifying front ends that primarily
dealt with host names (rather than addresses) and a simpler way of talking
about protocols.

What is the goal here:

	To implement RPC over TLI/streams in a transport-independent
	manner. The current implementation is udp/tcp specific. This design
	deals only with user-level RPC. The network selection details
	are not discussed here.

		High Level Client Side RPC Interface.
		====================================

Here is a sketch of how RPC software works on top of TLI and how it interacts
with naming and binding.  There are 4 layers of client create and server create
routines.  This is the topmost layer, above the network selection layer.  It
takes a "nettype" parameter, which can be one of the following values:

	"netpath": Choose the transports which have been indicated by their
	netid names in the NETPATH variable. If NETPATH is not set, it
	defaults to "VISIBLE"
	"visible": Choose the transports which have the visible flag ('v')
	set in the /etc/netconfig file.
	"circuit_v": This is same as "VISIBLE" except that it chooses only the
	connection oriented transports.
	"datagram_v": This is same as "VISIBLE" except that it chooses only the
	connectionless datagram transports.
	"circuit_n": This is same as "NETPATH" except that it chooses only the
	connection oriented transports.
	"datagram_n": This is same as "NETPATH" except that it chooses only the
	connectionless datagram transports.
	"udp": This is for backward compatible with SunOS 4.0 and it refers to
	internet udp (nc_protofmly = NC_INET and nc_protoname = NC_UDP)
	"tcp": This is for backward compatible with SunOS 4.0 and it refers to
	internet tcp  (nc_protofmly = NC_INET and nc_protoname = NC_TCP)

It is exactly the earlier (SunOS 4.0) generic routine
CLIENT *
clnt_create(hostname, prog, vers, nettype)
        char *hostname;
        unsigned prog;
        unsigned vers;
        char *nettype;
extended by allowing nettype to take on the new values mentioned above, besides
"tcp" and "udp". An attempt is made to create a client handle from among
the transports belonging to the class "nettype". Each transport is tried in
turn until one succeeds or the list of transports is exhausted.
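The try-until-success behaviour can be sketched as follows. The handle type, the transport list, and the per-transport create function are stand-ins for the real netconfig machinery, not the actual library code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-ins for the real CLIENT handle and netconfig entry. */
typedef struct { const char *netid; } CLIENT;
struct netconfig { const char *nc_netid; };

/* Hypothetical per-transport create: here it only "succeeds" on tcp,
 * to demonstrate the fallback behaviour.
 */
static CLIENT tcp_handle = { "tcp" };

static CLIENT *
try_tp_create(const char *host, unsigned prog, unsigned vers,
    const struct netconfig *nconf)
{
	(void)host; (void)prog; (void)vers;
	return strcmp(nconf->nc_netid, "tcp") == 0 ? &tcp_handle : NULL;
}

/* Walk the transports of the selected class until one create succeeds
 * or the list is exhausted, as clnt_create() does.
 */
static CLIENT *
create_over_class(const char *host, unsigned prog, unsigned vers,
    const struct netconfig *list, size_t n)
{
	size_t i;
	CLIENT *clnt;

	for (i = 0; i < n; i++)
		if ((clnt = try_tp_create(host, prog, vers, &list[i])) != NULL)
			return clnt;
	return NULL;	/* every transport in the class failed */
}
```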


At the second layer, we have "clnt_tp_create()", which is below the network
selection layer in the sense that it is passed a netconfig structure.
CLIENT *
clnt_tp_create(hostname, prog, vers, nconf)
        char *hostname;
        unsigned prog;
        unsigned vers;
        struct netconfig *nconf;
clnt_tp_create() must rely mostly on the rpcbind code to do all the
binding work, including name mapping.

	
At the third layer we have clnt_tli_create(), which deals with netbuf
addresses.
CLIENT *
clnt_tli_create(fd, nconf, svcaddr, prog, vers, sendsize, recvsize)
	int fd;				/* This may be RPC_ANYFD */
	struct netconfig *nconf;	/* Networks address, may be NULL */
	struct netbuf *svcaddr;		/* servers address */
	u_long prog, vers;		/* program, version numbers */
	int sendsize;			/* Send Buffer size. May be 0 */
	int recvsize;			/* Recv Buffer size. May be 0 */

It is passed a file descriptor, which *may* already be open & bound and
connected.  If not, then it will attempt to open and bind it using nconf.  If
fd is RPC_ANYFD, then nconf cannot be NULL.

If svcaddr is NULL and it is connection oriented then we assume that the fd is
connected.  In the connectionless case, NULL svcaddr is not allowed.  Otherwise
if the class of transport provided is connection oriented, then this routine
attempts to connect to svcaddr.  The client op-vector and other connection
specific details are now filled in depending upon whether the fd is connection
oriented or not.  If a buffer size is 0, then an appropriate value is
obtained through t_getinfo().  The sizes cannot exceed the values implied
by t_getinfo().

This routine never does any RPC binding.
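clnt_tli_create() and the layers below it traffic in TLI netbuf addresses rather than sockaddrs. The struct netbuf layout below is the standard TLI container (normally obtained from a system header); the fill helper is a hypothetical illustration of how an address is carried in one:

```c
#include <assert.h>
#include <string.h>

/* The standard TLI address container (normally from <tiuser.h>). */
struct netbuf {
	unsigned int maxlen;	/* size of the buffer pointed to by buf */
	unsigned int len;	/* bytes of buf actually in use */
	char *buf;		/* the transport-specific address bytes */
};

/* Wrap a caller-supplied byte string in a netbuf; returns 0 on
 * success, -1 if the address does not fit the buffer.
 */
static int
netbuf_fill(struct netbuf *nb, const char *addr, unsigned int addrlen)
{
	if (addrlen > nb->maxlen)
		return -1;
	memcpy(nb->buf, addr, addrlen);
	nb->len = addrlen;
	return 0;
}
```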


+	The Lower Levels

The current interfaces are:
CLIENT *clnttcp_create(addr, prog, vers, sockp, sendsz, recvsz)
CLIENT *clntudp_bufcreate(addr, prog, vers, wait, sockp, sendsz, recvsz)
For compatibility these remain and are implemented in terms of
the new, simpler, standard, tli-based clnt_tli_create().

The new lowest level expert interfaces are:
CLIENT *clnt_vc_create(fd, addr, prog, vers, sendsz, recvsz)
CLIENT *clnt_dg_create(fd, addr, prog, vers, sendsz, recvsz)
CLIENT *clnt_raw_create(prog, vers)
and these do the transport type specific setups.

The client layering is given below:

	clnt_create(host, prog, vers, nettype)
			    |
		    (Network Selection)
			    |	      
			    |
	clnt_tp_create(host, prog, vers, netconfig)
			    |
			    |
     clnt_tli_create(fd, netconfig, svcaddr, prog, vers, sendsize, recvsize)
			    |
			    |
	-------------------------------------------
	|		    			  |
	|		    			  |
  clnt_dg_create() 			clnt_vc_create()


The clnt_create() level routine is used by applications that care only
about the type of transport (circuit vs. datagram) and not about the actual
transport over which the services are rendered.  The only way to specify a
particular transport is by setting the NETPATH environment variable.  Using
this layer more or less provides for transport independence.

The clnt_tp_create() level routine is used by applications that have
decided on a particular transport but are not concerned with other aspects
of client creation, such as the bind address or the receive/send buffer
sizes.  The system chooses its own defaults.

clnt_tli_create() is the actual transport-specific layer. Using it, the
user can pass an open and bound endpoint. This level should be used by
applications that do not want to go by the default specifications;
e.g. reserved addresses can be passed to this layer.

clnt_dg/vc_create() is the lowest level interface. It sets up the
actual CLIENT handle depending upon the class of transport. In normal
cases the user should never need to call these routines directly.


The following is basically for backward compatibility with SunOS 4.0.

The ease-of-use routine in clnt_simple.c is
rpc_call(host, prog, vers, proc, inproc, in, outproc, out, nettype)
        char *host;
        u_long prog, vers, proc;
        xdrproc_t inproc, outproc;
        char *in, *out;
	char *nettype;
This now has extended functionality compared to its earlier version;
the nettype namespace is used here too. The earlier ease-of-use routine
callrpc(host, prog, vers, proc, inproc, in, outproc, out)
has been re-written to use rpc_call() with the nettype parameter set
to "udp".
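The rewrite of callrpc() on top of rpc_call() amounts to fixing the nettype argument. Sketched here with a stub rpc_call() that merely records the nettype it was handed; the real routine of course performs the remote call:

```c
#include <assert.h>
#include <string.h>

typedef int (*xdrproc_t)();		/* stand-in for the XDR proc type */

static const char *last_nettype;	/* what the stub rpc_call() saw */

/* Stub for rpc_call(); records its nettype argument and "succeeds". */
static int
rpc_call(const char *host, unsigned prog, unsigned vers, unsigned proc,
    xdrproc_t inproc, const char *in, xdrproc_t outproc, char *out,
    const char *nettype)
{
	(void)host; (void)prog; (void)vers; (void)proc;
	(void)inproc; (void)in; (void)outproc; (void)out;
	last_nettype = nettype;
	return 0;
}

/* The compatibility routine: identical to rpc_call() with "udp". */
static int
callrpc(const char *host, unsigned prog, unsigned vers, unsigned proc,
    xdrproc_t inproc, const char *in, xdrproc_t outproc, char *out)
{
	return rpc_call(host, prog, vers, proc, inproc, in,
	    outproc, out, "udp");
}
```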

+	A few more clnt_controls

Because of the way the routines have been layered, a few more clnt_controls
were added.  One specifies that clnt_destroy() should also close the file
descriptor (CLSET_FD_CLOSE).  A related clnt_control is CLSET_FD_NCLOSE,
with which the fd is not closed on clnt_destroy().  The other two are
CLGET_SVC_ADDR (get the svc address) and CLGET_FD (get the file descriptor
number).

+	Broadcast

This was handled by using INET_ADDR_ANY and hence the code was very "IPish".
The code in pmap_rmt.c finds the number of interfaces and then broadcasts on
them; that looks difficult to support in a transport-independent way.  There
is no support for any partial binding.  Broadcast is now handled through the
netdir_options() request to the name-to-address translation facility.

rpc_broadcast() now broadcasts on all the networks that support
broadcasting, as indicated by the 'b' flag in the /etc/netconfig file;
netdir_options() returns the list of addresses to use for the broadcast.
This scheme may lead to broadcast storms, especially if there are multiple
broadcast networks on the system.

+	Reserved port

This is also a very "IPish" concept and will not be supported per se. The
root authentication aspect of reserved ports for root processes will be
replaced by a transport-independent mechanism. The user process can bind
the endpoint to whatever port it wishes, and then call clnt_tli_create().


The new files:
	clnt_generic.c => clnt_create(), clnt_tp_create(), clnt_tli_create()
	clnt_vc.c => clnt_vc_create() and other static routines
	clnt_dg.c => clnt_dg_create() and other static routines
	rpc_soc.c => clnttcp_create(), clntudp_create() and other
			socket based compatibility routines.
	clnt_tcp.c and clnt_udp.c are now obsolete.

The CLIENT HANDLE was modified to also store the device name and the
token id of the transport on which it is created.

		RPC Server side on TLI in V.4.
		============================

First in svc.h the generic SVCXPRT must be modified to contain tli related
info:

	char *xp_tp;		/* the transport provider file name */
		/* xp_tp was added to find out the transport over which
		 * the connection has to be accepted
		 */
	char *xp_netid;		/* the token name of the transport */
	struct netbuf xp_ltaddr;	/* local endpoint address */
		/* xp_ltaddr was added to get around the disability of not
		 * being able to do equivalent of getsockname().
		 */
	struct netbuf xp_rtaddr;	/* caller's address */
		/* xp_rtaddr to store the callers netbuf address, the
		 * earlier xp_raddr is now obsolete, and so is
		 * xp_addrlen.
		 */

we continue to use most other fields, like
	int		xp_fd;		/* was earlier called xp_sock */
and can mostly use
	u_short         xp_port;         /* associated port number */
as a switch that distinguishes between TLI (-1) and socket (>= 0) usage.


The server side create routines are also layered the way the client side
is layered.

On the topmost layer we have a super create routine:
int
svc_create(dispatch, prognum, versnum, nettype)
	void (*dispatch)();
	u_long prognum, versnum;
	char *nettype;
nettype can take the values discussed earlier. svc_create() also registers
the service with the rpcbinder. It creates handles for all the transports
belonging to the given "nettype" and returns the number of handles that it
could create.

On the second layer we have a mini-super create routine:
SVCXPRT *
svc_tp_create(dispatch, prognum, versnum, nconf)
	void (*dispatch)();
	u_long prognum, versnum;
	struct netconfig *nconf;
This sits below the network selection layer. It registers the
service with the rpcbinder and creates a server only for the specified
transport.

On the third layer, we have
SVCXPRT *
svc_tli_create(fd, nconf, bindaddr, sendsize, recvsize)
	int fd;		/* the transport file descriptor; may be RPC_ANYFD */
	struct netconfig *nconf;	/* network description */
	struct t_bind *bindaddr;
			/* the address of the local endpoint; may be NULL */
	u_int sendsize; /* buffer size for outgoing data; may be zero */
	u_int recvsize; /* buffer size for incoming data; may be zero */

If fd is RPC_ANYFD, then the netconfig structure cannot be NULL.
Like the familiar creation routines, this routine t_opens fd if fd is
RPC_ANYFD.  It performs a t_bind using bindaddr if the fd is not already
bound; if successful, the results are saved in the handle. If the fd is
already bound, bindaddr is copied to the handle.  If either sendsize or
recvsize is zero, then the value is obtained via t_getinfo().  sendsize and
recvsize cannot exceed the values implied by t_getinfo().
t_getinfo() can tell whether the transport is connection-full or
connection-less.  Based on this we switch to one of two new helper
routines that fill in the op-vector, xprt->xp_ops, and the other
transport-specific details.
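The dispatch on the service type reported by t_getinfo() can be sketched as below. The T_COTS/T_COTS_ORD/T_CLTS values mirror the usual <tiuser.h> definitions but are defined locally here, and the helper enum is a stand-in for the vc/dg fill-in routines:

```c
#include <assert.h>

/* Service types as reported in t_info.servtype by t_getinfo();
 * values here mirror the usual <tiuser.h> definitions.
 */
#define T_COTS     1	/* connection oriented */
#define T_COTS_ORD 2	/* connection oriented, orderly release */
#define T_CLTS     3	/* connectionless */

enum helper { HELPER_VC, HELPER_DG, HELPER_NONE };

/* Pick the handle fill-in helper from the transport's service type,
 * as svc_tli_create() does after t_getinfo().
 */
static enum helper
pick_helper(int servtype)
{
	switch (servtype) {
	case T_COTS:
	case T_COTS_ORD:
		return HELPER_VC;	/* svc_vc_create()-style setup */
	case T_CLTS:
		return HELPER_DG;	/* svc_dg_create()-style setup */
	default:
		return HELPER_NONE;	/* unusable transport */
	}
}
```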

On the lowest rung of the server create ladder, we have
svc_vc_create(xprt) 	/* connection-full  create */
svc_dg_create(xprt) 	/* connection-less create */
svc_raw_create()	/* raw interface */

The earlier routines like svctcp_create() and svcudp_create() have
been implemented on top of svc_tli_create().

The layering is given below:

	svc_create(dispatch, prognum, versnum, nettype)
			    |
		   Network Selection
			    |
	svc_tp_create(dispatch, prognum, versnum, netconfig)
			    |
			    |
	svc_tli_create(fd, netconfig, bindaddr, sendsz, recvsz)
			    |
			    |
	-------------------------------------------
	|		    			  |
	|		    			  |
  svc_dg_create() 			svc_vc_create()


The svc_create() level routine is used by applications that care only
about the type of transport (circuit vs. datagram) and not about the actual
transport over which the services are offered.  The only way to specify a
particular transport is by setting the NETPATH environment variable.
Using this layer more or less provides for transport independence.

The svc_tp_create() level routine is used by applications that have
decided on a particular transport but are not concerned with other aspects
of server creation, such as the bind address or the receive/send buffer
sizes.  The system chooses its own defaults.

svc_tli_create() is the actual transport-specific layer. Using it, the
user can pass an open and bound endpoint. This level should be used by
applications that do not want to go by the default specifications;
e.g. reserved addresses can be passed to this layer.

svc_dg/vc_create() is the lowest level interface. It sets up the
actual SERVER handle depending upon the class of transport. In normal
cases the user should never need to call these routines directly.



The ease-of-use routine in svc_simple.c is
rpc_reg(prognum, versnum, procnum, progname, inproc, outproc, nettype)
	u_long prognum, versnum, procnum;
	char *(*progname)();
	xdrproc_t inproc, outproc;
	char *nettype;
This now has extended functionality compared to its earlier version;
the nettype namespace is used here too. The earlier
registerrpc(prognum, versnum, procnum, progname, inproc, outproc)
has been re-written to use rpc_reg() with "udp". This is basically for
backward compatibility with SunOS 4.0.


		The new files
		=============
	svc_generic.c => svc_create(), svc_tp_create(), svc_tli_create()
	svc_vc.c => svc_vc_create() and other static routines
	svc_dg.c => svc_dg_create() and other static routines
	rpc_soc.c => svctcp_create(), svcudp_create() etc.

	svc_tcp.c and svc_udp.c are now obsolete.

		The current problems
		====================
TLI also has no support for binding an endpoint to special reserved ports.
The user may do the binding on his own (for that particular transport)
and then call *tli_create() (e.g. set the port number for IP separately).
This is a transport-specific feature, and perhaps it is wise to keep
it outside RPC.


No partial binding.

===========================================================================
RPCBIND

RPCBIND is an rpc service which keeps track of the addresses of
services. It was earlier known as portmap, and it used to keep a
map of port number, protocol, program number and version number.
The new rpcbind has been extended to take care of generic transports.
It now stores universal addresses instead of just port numbers.
Rpcbind now keeps a map of address, network token, program number and
version number. The new interface routines are
	rpcb_set(program, version, nconf, address)
	rpcb_unset(program, version)
	rpcb_getaddr(program, version, nconf, address, host)
	rpcb_getmaps(nconf, host)
The earlier proposed addition of "instances" was dropped. It will
be done along with the rev of the RPC protocol.

rpcb_set() and rpcb_unset() use the local transports (ticlts and ticots)
to send requests to rpcbind. Thus only local transports can be used to
set/unset a registration, which provides more security. At the moment,
these transports are used if available; otherwise the IP transports'
loopback addresses are used (without the additional security features).

rpcb_getaddr() opens a connection on the given network and talks to the
remote rpcbind on that transport. This scheme is quite inefficient for
connection oriented transports (e.g. tcp). Now instead of port numbers,
the entire address of the service gets passed around.
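For the IP transports, the universal address that replaces the bare port number is conventionally the dotted address with the port appended as two further decimal octets (high byte, then low byte). A sketch of that encoding, with a hypothetical helper name:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Encode an IPv4 address and port as an rpcbind universal address:
 * "a.b.c.d.p1.p2", where p1/p2 are the high and low octets of the
 * port number, each printed in decimal.
 */
static void
inet_uaddr(const unsigned char ip[4], unsigned short port,
    char *buf, size_t buflen)
{
	(void)snprintf(buf, buflen, "%u.%u.%u.%u.%u.%u",
	    ip[0], ip[1], ip[2], ip[3], port >> 8, port & 0xff);
}
```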

rpcbind (version 3) is also backward compatible with portmap (version 2).
The services registered with portmap also get registered with rpcbind
and vice-versa.

==========================================================================
RPCINFO

rpcinfo prints a list of the rpc services registered with the
given host. In SunOS 4.0 it printed the port number and the transport
over which the service was listening. Now it prints the universal
address and the transport over which the service is listening.
