.sh 1 "Examples"
.lp
In this section we present four larger examples that together
illustrate most of the language mechanisms.
The first example presents a complete, albeit simple,
program that sorts a set of input data.
The second example presents a different sorting program
that uses an array of sieve processes.
The third example shows how a bounded buffer resource
might be written and used.
The fourth example presents an algorithm for determining
the topology of a network.
.sh 2 "Sort Program"
.lp
This single resource program
illustrates many of the sequential aspects of SR
and the use of several of the pre-defined operations,
including some of the input/output operations.
The @sorter@ resource sorts a list of integers into non-decreasing order.
First, it prompts for the size of the list
and for each integer in the list.
Then, it outputs the original list,
sorts the list,
and outputs the sorted list.
.PS
resource sorter()
  op print_array(a[1:*] : int)
  op sort(var a[1:*] : int)
.PE
.PS
  process main_routine
    var n : int
    writes("number of integers?  ")
    read(n)
    var nums[1:n] : int   # size depends on n
    write("input integers, separated by whitespace")
    fa i := 1 to n -> read(nums[i]) af
    write("original numbers")
    print_array(nums)
    sort(nums)
    write("sorted numbers")
    print_array(nums)
  end
.PE
.PS
  # Print elements of array a
  proc print_array(a)
    fa i := lb(a) to ub(a) -> write(a[i]) af
  end
.PE
.PS
  # Sort array a into non-decreasing order
  proc sort(a)
    fa i := lb(a) to ub(a)-1,
       j := i+1 to ub(a) st a[i] > a[j] ->
           a[i] :=: a[j]
    af
  end
end sorter
.PE
.pp
Because @sorter@ is by itself an entire program,
it neither imports nor exports any objects;
hence, it contains no interface part (spec).
Each operation defined in the resource\(em@print_array@
and @sort@\(emis implemented by a @proc@ and
has as a parameter an array whose upper bound is `*'.
Thus, the value of the upper bound
in a particular invocation of @print_array@ or @sort@
is determined by the actual argument;
the code in each @proc@ uses the pre-defined function @ub@
to determine the actual upper bound.
(Such code also uses @lb@ to get the lower bound,
although that is always one in this program.)
The parameter @a@ of @sort@ is a @var@ parameter so that changes
made to it will be copied back into the actual argument.
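For instance, a @var@ parameter behaves as follows in a small, hypothetical
operation (this fragment is illustrative only and is not part of @sorter@):
.PS
op increment(var x : int)
...
proc increment(x)
  x := x+1   # the new value is copied back to the actual argument
end
.PE
Had @x@ been an ordinary value parameter,
the caller's argument would be left unchanged.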
.pp
The main routine in @sorter@ is a single process that
reads the input then calls @print_array@ and @sort@.
The optional keyword @call@ is omitted from all the operation
calls; this choice is purely stylistic.
Note that array @nums@ is declared after its size is read;
this is permitted and results in @nums@ having a size that
is based on the input.
For-all statements are used throughout @sorter@
to range over elements in arrays.
Several different forms of for-all statements are employed.
Note in particular the one in @sort@, which contains
a such-that clause that selects specific
values of @i@ and @j@ for which to execute the assignment
that swaps @a[i]@ and @a[j]@.
In most languages, two loops enclosing an @if@ statement
would be required to program the actions of this
single for-all statement.
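To make the comparison concrete, the body of @sort@ could instead be
written with nested for-all statements and an explicit @if@;
the two versions behave identically:
.PS
fa i := 1 to ub(a)-1 ->
  fa j := i+1 to ub(a) ->
    # swap only when out of order, as the such-that clause did
    if a[i] > a[j] -> a[i] :=: a[j] fi
  af
af
.PE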
.sh 2 "Pipeline Sort"
.lp
The next example presents a different, parallel sorting algorithm.
It illustrates the use of dynamically created processes
and shows how operation types and capabilities can be used
to set up communication between these processes.
It also illustrates the programming paradigm called a \fIconversation\fR.
.pp
The input and output to the program are the same as in the first program.
However, sorting is performed by an array of processes
connected in a pipeline.
The @sort@ procedure calls @worker@, which returns a capability
for its @mypipe@ operation.
The @worker@ uses a @reply@ statement so that it continues
to execute after replying.
Subsequently, @sort@ uses the returned capability to
pass the worker the list of input values, one at a time.
Thus, @sort@ engages in a conversation with the @worker@
it created.
.pp
Each instance of @worker@ keeps the smallest value
it sees, and passes all others on to the next instance of @worker@.
If there are @n@ input values, a total of @n@ workers
are eventually executing.
The first instance of @worker@ sees all @n@ input values;
the last sees just one value.
After receiving all @m@ of its values,
each instance of @worker@ uses the @result@ operation
to send the smallest value it saw back to @sort@.
Once @sort@ has received all @n@ results, it returns,
and then @main_routine@ prints the sorted list.
.PS
resource pipeline_sort()
  op print_array(a[1:*] : int)
  op sort(var a[1:*] : int)
  op result(pos, value : int) {send}   # used to return results
  optype pipe(value : int) {send}      # used to send values
  op worker(m : int) returns p : cap pipe {call}
.PE
.PS
  process main_routine
    var n : int
    writes("number of integers?  ")
    read(n)
    var nums[1:n] : int
    write("input integers, separated by whitespace")
    fa i := 1 to n -> read(nums[i]) af
    write("original numbers")
    print_array(nums)
    sort(nums)
    write("sorted numbers")
    print_array(nums)
  end
.PE
.PS
  # Print elements of array a
  proc print_array(a)
    fa i := lb(a) to ub(a) -> write(a[i]) af
  end
.PE
.PS
  # Sort array a into non-decreasing order
  proc sort(a)
    if ub(a) = 0 -> return fi
    var first_worker : cap pipe
    # Call worker; get back a capability for its pipe operation,
    #   then use the pipe to send all values in a to the worker.
    first_worker := worker(ub(a))
    fa i := lb(a) to ub(a) -> send first_worker(a[i]) af
    # Gather the results and place them in the right place in a
    fa i := lb(a) to ub(a) ->
        in result(pos,value) -> a[ub(a)+lb(a)-pos] := value ni
    af
  end
.PE
.PS
  # Worker receives m integers on mypipe from its predecessor.
  # It keeps smallest and sends others on to the next worker.
  # After seeing all m integers, worker sends smallest to sort,
  # together with the position (m) smallest is to be placed.
  proc worker(m) returns p
    var smallest : int    # the smallest seen so far
    op mypipe : pipe
    p := mypipe
    reply    # invoker now has a capability for mypipe
    receive mypipe(smallest)
    if m > 1 ->
      # create next instance of worker
      var next_worker : cap pipe   # pipe to next worker
      next_worker := worker(m-1)
      fa i := m-1 downto 1 ->
        in mypipe(candidate) ->
            # save new value if it is smallest so far;
            # send other values on
            if candidate<smallest -> candidate :=: smallest fi
            send next_worker(candidate)
        ni
      af
    fi
    send result(m,smallest)    # return smallest to sort
  end
end pipeline_sort
.PE
.pp
Above, the @worker@ processes are created dynamically,
so that exactly as many as are required (@n@) are created.
This necessitates the use of local operations (@mypipe@) and
capabilities for these operations.
.pp
In applications where the number of ``worker'' processes
is fixed and known in advance, a different approach could be used.
For example, suppose in the above program that exactly
@N@ values were always to be sorted (e.g., @N@ were a declared
constant rather than an input value).
Then a different way to structure the program is as follows.
Instead of having @mypipe@ be local to @worker@, we could
declare a global array of such operations:
.PS
op mypipe[1:N] : pipe
.PE
Second, we could create exactly @N@ instances of @worker@
in the main routine, either by using a @process@ declaration
with a quantifier, or by executing
.PS
fa m := 1 to N -> send worker(m) af
.PE
Then, @sort@ could use @mypipe[N]@ to send values
to the first instance of @worker@, and each instance
could use @mypipe[m-1]@ to send values to the next instance.
Thus, we would not need capability variables.
Also, we could delete the @reply@ statement
in @worker@ since no capability needs to be returned.
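Under these assumptions, the loop in @worker@ might then be sketched as
follows (this fragment is a sketch only; it assumes instance @m@
receives on @mypipe[m]@ and is not part of the program above):
.PS
proc worker(m)
  var smallest : int
  receive mypipe[m](smallest)
  fa i := m-1 downto 1 ->
    in mypipe[m](candidate) ->
        # keep the smaller value; forward the other
        if candidate<smallest -> candidate :=: smallest fi
        send mypipe[m-1](candidate)
    ni
  af
  send result(m,smallest)
end
.PE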
.sh 2 "Bounded Buffer"
.lp
The third example presents a bounded buffer resource and
shows how it might be used.
This illustrates how processes in different resources
communicate and synchronize.
Each instance of @bounded_buffer@ provides two operations:
@deposit@ and @fetch@.
A producer process calls @deposit@ to insert an item into
the buffer; a consumer process calls @fetch@ to retrieve
an item from the buffer.
Invocations of @deposit@ and @fetch@ are synchronized to
ensure that messages are fetched in the order in which
they were deposited, are not fetched until deposited,
and are not overwritten.
.PS
resource bounded_buffer
  op deposit(item : int)
  op fetch() returns item : int
body bounded_buffer(size : int)
  var buf[0:size-1] : int
  var count := 0, front := 0, rear := 0
.sp .5
  process worker
    do true ->
      in deposit(item) st count < size ->
            buf[rear] := item
            rear := (rear+1) % size
            count++
      [] fetch() returns item st count > 0 ->
            item := buf[front]
            front := (front+1) % size
            count--
      ni
    od
  end
end bounded_buffer
.PE
.pp
The two operations defined by @bounded_buffer@ are declared in
the spec and hence are visible outside the resource.
When an instance of @bounded_buffer@ is created,
the desired size of the buffer is passed as an argument
and an instance of the background process @worker@ is created.
This process loops around a single input statement,
which implements @deposit@ and @fetch@.
The synchronization expressions in the input statement
ensure that the buffer does not overflow or underflow
as described above.
For example, a producer is delayed if the buffer is full
and a consumer is delayed if the buffer is empty.
Note that the specification and body of @bounded_buffer@
have been combined; this is always possible when a resource
does not contain any import specifications.
.pp
The following resource outlines how instances of @bounded_buffer@
might be created and used.
It also illustrates dynamic process creation and
the use of capability variables.
.PS
resource user
  import bounded_buffer
body user()
  var bb : cap bounded_buffer
  op pc()  {send}
.sp .5
  initial
    # Create a buffer with room for 20 items.
    bb := create bounded_buffer(20)
    # Create several pc processes.
    fa i := 1 to 10 -> send pc() af
  end
.sp .5
  proc pc()
    var it : int
    # Do some deposit's and fetch's.
      ...
    bb.deposit(it)
      ...
    it := bb.fetch()
      ...
  end
.sp .5
  final
    destroy bb
  end
end user
.PE
The @user@ resource imports @bounded_buffer@ so that it can create
instances of it and invoke operations in those instances.
The @create@ statement in the initialization component
creates a bounded buffer resource with 20 elements
and assigns to @bb@
a capability for that instance; @bb@ is a resource variable
and hence is shared by all processes in @user@.
The initialization component then creates
10 instances of the producer/consumer process @pc@.
These processes invoke operations in the instance of
@bounded_buffer@ by using the capability stored in @bb@;
e.g., @bb.deposit@ refers to the @deposit@ operation.
The final code in @user@ ensures that if @user@ is destroyed,
then the @bounded_buffer@ it created will also be destroyed.
.sh 2 "Network Topology"
.lp
The final example is a program consisting of three resources.
It illustrates several advanced features of SR,
including dynamically created resources,
@optype@ declarations, local operations,
asynchronous invocation, and nested input statements.
.pp
The specific problem is:  Given a connected network of @n@ nodes,
where each node can communicate with and knows about
only its neighbors, compute the topology of the entire network.
Each node is represented by a resource.
The specific algorithm the nodes employ
is called a probe/echo algorithm.
In particular, one node starts the computation
by probing its neighbors.
Upon receipt of a probe, a node sends the probe on
to its other neighbors and then waits for them
to send back ``echoes''.
An echo returns the topology known to the echoing node.
Once a node has received echoes to all its probes,
it combines the echo information together with its
local knowledge and sends an echo back to the node that probed it.
.pp
Since the network is connected, each node will eventually
see a probe.
Thus, if all probes get echoed, the entire topology
will eventually be learned by the node that started the algorithm.
The trick is to ensure that probes get echoed.
Since the network may well have cycles, it
is possible for a node to receive probes from more
than one neighbor.
Moreover, it is possible for two nodes to probe
each other at about the same time since neither can
know whether the other has already been probed.
If a node always sends probes to other neighbors whenever
it receives one, no echoes will get sent unless the
topology forms a tree rooted at the starting node.
The solution to this problem is for a node to echo immediately
any probe it receives after the first one.
Since the node will eventually echo the first probe
it receives with all information it has gathered,
it is sufficient to echo no information on probes
other than the first one.
.pp
Our program to solve the network topology problem
has three resources:  @node@, @printer@, and @main@.
@node@ implements the probe/echo algorithm;
@printer@ is a utility that is used by the others
to print out the topology at various times;
@main@ initializes the program.
Throughout the program, nodes are represented
by indices between 1 and @n@ and topologies are stored in
integer matrices, such as
.PS
var top[1:n,1:n] : int
.PE
with @top[i,j]=j@ if node @j@ is a neighbor
of node @i@ and @top[i,j]=0@ otherwise.
(Integer rather than boolean matrices are used
to simplify interpretation of input and output.)
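For instance, in a hypothetical three-node network in which node 1 is a
neighbor of nodes 2 and 3, but nodes 2 and 3 are not neighbors of each
other, @top@ would contain
.PS
0 2 3
1 0 0
1 0 0
.PE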
.pp
The @printer@ resource is declared first since
it does not depend on (i.e., import)
any other resource.
It exports one operation, @print@, that takes a
node identification and topology and writes formatted output.
.PS
resource printer
  op print(node, topology[1:*] : int)
body printer(n : int)
.sp .5
  process mutex
    var node, top[1:n,1:n] : int
    do true ->
      receive print(node,top)
      write("    Topology Computed by Node",node)
      fa i := 1 to n ->
        writes(i,":  ")
        fa j := 1 to n -> writes(top[i,j]," ") af
        write()   # force newline
      af
      write()
    od
  end
.sp .5
end printer
.PE
The @print@ operation is serviced by a @receive@
statement in process @mutex@ rather than by a @proc@ to ensure that the
output from different topologies is not interleaved.
Within @mutex@, the topology to be printed is
recorded in matrix @top@.
Whereas @top@ is a matrix, the @topology@ argument to
@print@ is an arbitrary length vector since at present
the compiler does not support matrix formals with
`*' in both ranges.
When @print@ is received, however,
the vector is implicitly converted to matrix
form when stored into @top@.
.pp
The @node@ resource imports @printer@ and exports three operations.
@neighbors@ is called by @main@ to tell the
node who its neighbors are.
The @initiate@ operation in one of the nodes is called by @main@
to initiate the computation;
it returns the topology as computed by that node.
@probe@ is used by neighboring nodes to communicate with each other.
One argument to @probe@ is a capability for
an operation that is used to send an echo back to the prober.
@node@ exports an operation type, @echo_type@, that
specifies the type of the echo operation.
The actual specification for @node@ is
.PS
resource node
  import printer
  optype echo_type = (topology[1:*] : int)
  op neighbors(links[1:*] : cap node; indices[1:*] : int)
  op initiate(res topology[1:*] : int)
  op probe(from : int; echo : cap echo_type) {send}
body node(n, myid: int; pr : cap printer) separate
.PE
Note that @probe@ has the operation restriction `@{send}@'
because, in our algorithm, a @call@ of @probe@ could result in deadlock.
The spec for @node@ is declared separate from
the body so changes to the body, such as occur during
program development, do not trigger recompilation
of resources that import @node@.
.pp
The body of @node@ implements the three exported operations.
@neighbors@ and @initiate@ are each implemented by
a @proc@ since each is called just once and neither
contains critical sections.
However, @probe@ is implemented by @in@ statements within
the @probe_handler@ process to ensure that probes
are serviced one at a time.\**
.(f
\**If @probe@ were implemented by a @proc@, different instances
of that @proc@ would have to synchronize; for example, they
would have to determine which instance was handling
the first probe.
.)f
.PS
body node
  var links[1:n] : cap node
  var indices[1:n] : int
.sp .5
  # record who the node's neighbors are
  proc neighbors(Links,Indices)
    links := Links; indices := Indices
    writes("neighbors of node ", myid, ":  ")
    fa i := 1 to n st indices[i]~=0 -> writes(indices[i]," ") af
    write()
  end
.PE
.PS
  # initiate probe computation, returning the computed topology
  proc initiate(topology)
    op echo : echo_type
    send probe(myid,echo)
    receive echo(topology)
  end
.PE
.PS
  # service invocations of probe
  process probe_handler
    do true ->
      in probe(from,echo_back) ->
        var mytop[1:n,1:n] : int := ([n*n] 0)
        mytop[myid,1:*] := indices
        op echo : echo_type
        var probed : int := 0

        # send probe to other neighbors
        fa k := 1 to n st k~=from and indices[k]~=0 ->
           send links[k].probe(myid,echo)
           probed++
        af

        # receive echoes and respond to other probes
        do probed>0 ->
          in echo(othertop) ->
              var ot[1:n,1:n] : int := othertop
              # combine other topology with my topology
              fa i := 1 to n, 
                 j := 1 to n st mytop[i,j]=0 and ot[i,j]~=0 ->
                     mytop[i,j] := ot[i,j]
              af
              probed--
          [] probe(from,echo_back) ->
              var empty_top[1:n,1:n] : int := ([n*n] 0)
              send echo_back(empty_top)
          ni
        od

        # send final topology to printer and echo mytop
        send pr.print(myid,mytop)
        send echo_back(mytop)
      ni
   od
  end probe_handler

end node
.PE
Note the nested input statements.
The first services the first invocation of @probe@ the
node receives; the second services subsequent invocations
of @probe@ as well as invocations of @echo@.
.pp
The final resource, @main@, initializes the computation.
It first reads the command line arguments to determine
which file contains the input data defining the topology
of the network.
Then it reads that file.
Third, @main@ creates the @printer@ and @node@ resources
and tells each node who its neighbors are.
Finally, @main@ invokes the @initiate@ operation
of the start node @sn@ and prints the topology
that is returned by that operation.
.PS
#  distributed topology computation -- probe method
#  usage: a.out datafile [startnode]
#  datafile gives number of nodes, then pairs of neighbors,
#  both as integers
.sp .5
resource main
  import node, printer
body main()
.sp .5
  process compute_topology
.sp .25
    # get file name and start node
    var pn : string(40)
    if getarg(1,pn) = EOF ->
      write(stderr,"usage: a.out datafile [startnode]"); stop
    fi
    var sn : int := 1
    getarg(2,sn)   # if unsuccessful, use initialized value of 1
.PE
.PS
    # read number of nodes and initial topology
    var f : file := open(pn,READ)
    if f = null ->
      write("Error:  cannot open file", pn); stop
    fi
    var n : int
    read(f,n)
    var initial_top[1:n,1:n] : int := ([n*n] 0)
.PE
.PS
    var i,j : int
    do true ->
      if read(f,i) = EOF -> exit fi
      if read(f,j) = EOF -> 
        write("Error:  node numbers must come in pairs")
        stop
      fi
      if i<1 or i>n or j<1 or j>n ->
        write("Error:  each node must be between 1 and n")
        stop
      fi
      initial_top[i,j] := j; initial_top[j,i] := i
    od
.PE
.PS
    # output initial topology
    write("    Initial Topology read from", pn)
    fa i := 1 to n ->
      writes(i,":  ")
      fa j := 1 to n -> writes(initial_top[i,j]," ") af
      write()
    af
    write()
.PE
.PS
    # create printer and n nodes
    var pr : cap printer
    var nc[1:n] : cap node
    pr := create printer(n)
    fa i := 1 to n -> nc[i] := create node(n,i,pr) af
    fa i := 1 to n -> nc[i].neighbors(nc,initial_top[i,1:n]) af

    var final_top[1:n,1:n] : int
    nc[sn].initiate(final_top)
    pr.print(0,final_top)      # print final topology
.sp .25
  end compute_topology
.sp .5
end main
.PE
Note how variables are declared close to where they
are used.
Also note how arrays are declared after the number of
nodes @n@ has been read so all arrays are exactly the right size.
Finally, note the use of the vector constructor
`@([n*n] 0)@' here and previously to
initialize matrices that store topologies.
.uh "Acknowledgements"
.pp
Mike Coffin and Gregg Townsend have been indispensable in
implementing the compiler and run-time support.
Irv Elshoff, Bill Mitchell, Kelvin Nilsen, and Titus Purdin
provided valuable assistance.
Stella Atkins, Nick Buchholz, Roger Hayes, Richard Schlichting,
and Fred Schneider have provided useful feedback on
the language and have thus influenced its development.
Coffin, Townsend, the students in CSc 552 and CSc 652 at Arizona,
and the students in ECS 244 and ECS 289D at Davis have helped
debug this report and suggested several improvements.
