Subject: [l/m 6/22/94] TPC Transaction Processing Council   (21/28) c.be FAQ
Date: 21 Mar 1996 13:25:06 GMT

21.TPC......<This panel>
22
23
24
25.Ridiculously short benchmarks
26.Other miscellaneous benchmarks
27
28.References
1.Introduction to FAQ chain and netiquette
2
3.PERFECT
4
5.Performance Metrics
6.Temporary scaffold of New FAQ material
7.Music to benchmark by
8.Benchmark types
9.Linpack
10.Network Performance
11.NIST source and .orgs
12.Benchmark Environments
13.SLALOM
14
15.12 Ways to Fool the Masses with Benchmarks
16.SPEC
17.Benchmark invalidation methods
18
19.WPI Benchmark
20.Equivalence

OLTP (On-Line Transaction Processing): Jim Gray (DEC; formerly
Tandem, formerly IBM) has a new book out on the subject.  O. Serlin
also follows this area.

The TPC-A and TPC-B benchmarks are well documented, audited by
outside firms, and the results are kept by the Transaction
Processing Performance Council.
The TPC administrator is:

   Shanley Public Relations
   777 N. First Street
   Suite 600
   San Jose, CA 95112-6311

TPC-A, TPC-B, TPC-C
Location: FTP from FTP.DG.COM (128.222.1.2), login anonymous, directory tpc.

TPC-C has been out for over a year now, and there are a couple of dozen
results. The FTP site has copies of the TPC-C spec.


Name            Description

Results-text    Results spreadsheet
                (format: tab separated text)
                (size: 15KB)

Results-sylk    Results spreadsheet
                (format: SYLK spreadsheet interchange format)
                (size: 26KB)

TPCARev1.1      TPC Benchmark A Standard Specification Rev 1.1
                (format: Macintosh Microsoft Word)
                (size: 153KB)

TPCARev1.1-PS   TPC Benchmark A Standard Specification Rev 1.1
                (format: PostScript)
                (size: 816KB)

TPCARev1.1-RTF  TPC Benchmark A Standard Specification Rev 1.1
                (format: Rich Text Format - interchange format)
                (size: 243KB)

TPCBRev1.1      TPC Benchmark B Standard Specification Rev 1.1
                (format: Macintosh Microsoft Word)
                (size: 126KB)

TPCBRev1.1-PS   TPC Benchmark B Standard Specification Rev 1.1
                (format: PostScript)
                (size: 993KB)

TPCBRev1.1-RTF  TPC Benchmark B Standard Specification Rev 1.1
                (format: Rich Text Format - interchange format)
                (size: 194KB)

As for the code, the benchmark specs include sample SQL code, but
it may not help you very much: every implementation is going to be
different.

TPC
===

The Transaction Processing Performance Council (TPC) was 
formed by eight computer hardware and software vendors for 
the purpose of developing industry standard benchmarks.  
Since its inception in August, 1988, the TPC has published 
two benchmark standards (TPC-A and TPC-B) and currently has 
approximately 40 members.  

The TPC Benchmark A Standard provides the following 
description of the benchmark:

  TPC Benchmark A is stated in terms of a hypothetical 
  bank.  The bank has one or more branches.  Each branch 
  has multiple tellers.  The bank has many customers, each 
  with an account.  The database represents the cash 
  position of each entity (branch, teller, and account) 
  and a history of recent transactions run by the bank.  
  The transaction represents the work done when a customer 
  makes a deposit or a withdrawal against his account.  
  The transaction is performed by a teller at some branch. 

  TPC Benchmark A exercises the system components 
  necessary to perform tasks associated with that class of 
  on-line transaction processing (OLTP) environments 
  emphasizing update-intensive database services.  Such 
  environments are characterized by:

  * Multiple on-line terminal sessions
  * Significant disk input/output
  * Moderate system and application execution time
  * Transaction integrity

  The metrics used in TPC Benchmark A are throughput as 
  measured in transactions per second (tps), subject to a 
  response time constraint; and the associated price-per-
  tps.

TPC-B is primarily a database stress test.  It uses the same 
transaction as TPC-A, but excludes the terminals, networking, 
and think time.  The database scales differently than TPC-A.  
The metrics used in TPC-B are the same as TPC-A: throughput 
and price-per-tps.  TPC-A and TPC-B results are not 
comparable.

A recent list of member companies is shown below:

Amdahl, AT&T/NCR, Australian Government, Bull S.A., Compaq, 
Computer Associates, Control Data Corp., Data General, 
Digital Equipment Corp., EDS, Encore, Fujitsu/ICL, Hewlett 
Packard, Hitachi Ltd., IBM, Informix, Ingres, Intel Corp., 
ITOM International Corp., KPMG Peat Marwick, MIPS, Mitsubishi 
Electric Corp., NEC Corp., OKI Electric Industry, Olivetti, 
Oracle, Pyramid Technology, Red Brick Systems, Sequent 
Computer, Sequoia Systems, Siemens Nixdorf, Silicon Graphics, 
Software A.G., Solbourne, Stratus Computer, Sun Microsystems, 
Sybase, Tandem Computers, Teradata, Texas Instruments, Unify 
Corp., Unisys

There are currently about 80 TPC-A results and 50 TPC-B 
results.  (These are the current results, others have been 
withdrawn, typically because they are outdated.)  All TPC 
benchmark results are, by definition, public.  Each result is 
required to be documented by a full disclosure report which 
is kept on file by the TPC Administrator and available to the 
general public.  

The TPC is currently working on two new benchmark standards.  
TPC Benchmark C (TPC-C) is currently available for public 
review.  This benchmark models an order-entry application.  
Public review comments should be returned to the TPC 
Administrator by March 6, 1992.

Membership in the TPC is open to any organization or 
individual.  Annual dues are $7500.  The council meets 
several times per year to work on new benchmark standards.  

For more information about the TPC, contact the TPC 
Administrator:

Kim Shanley
Shanley Public Relations
777 N. First St.,  Suite 600
San Jose, CA  95112
Phone: (408) 295-8894
FAX: (408) 295-2613

TPC-A

TPC-A is a formalization by the Transaction Processing Performance
Council of the Debit-Credit benchmark.  The TPC was established in
1988 by a group of major database vendors to standardize
transaction processing benchmarks.  TPC-A differs from Debit-Credit
in the following ways (although Debit-Credit was frequently honored
in the breach):

    exponential arrival times

    ten second think time (implies 10 terminals per TPS)

    90% of transactions complete in 2 seconds

    allows LAN or X.25 connections

    response time is measured at the terminal

    formal requirements for atomicity, consistency, isolation,
     and durability


TPC-A results must include a rather elaborate disclosure report,
including configuration and environment.  Results must include
both transactions per second and total 5-year cost per
transaction per second.

TPC-B

TPC-B is a formalization by the Transaction Processing Performance
Council of the TP1 benchmark.  TPC-B is a batch benchmark and does
not include user think time or communications overhead.
Transactions are produced as rapidly as possible by a "generator".

After all, what is a "transaction" but an arbitrary definition?

                   ^ A  
                s / \ r                
               m /   \ c              
              h /     \ h            
             t /       \ i          
            i /         \ t        
           r /           \ e      
          o /             \ c    
         g /               \ t  
        l /                 \ u
       A /                   \ r
        <_____________________> e   
                Language
 
Subject: FAQ
Date: Fri, 11 Dec 92 20:14:43 GMT
From: schreib@fzi.de

------------------------------------------------------------------------------


Description of 56 Benchmarks
============================

Here is a collection of descriptions of benchmarks I've put
together for an internal report here.  The entries in the
list below are benchmarks I've seen mentioned but do not
have descriptions of.  Some of the information here came from
the excellent FAQ posting in this group and from a similar
compilation posted by Dave Taylor.

  AGE Test Suite       
  AIM Technology Benchmark     
  bonnie   
  BYTE Unix Benchmark    
  Chalmers Workstation User's Benchmark Suite
  DBMS Labs Benchmark    
  Debit-Credit benchmark     
  dhrystone        
--  fsanalyze         
  gbench   
  Gabriel   
  Gibson mix        
  GPCmark  
--  iobench   
--  iocall         
  iostone  
  iozone   
  ipbench  
  Kenbus1   
  Khornerstone        
  lhynestone        
  Linpack  
  Livermore Loops 
  Los Alamos benchmarks     
--  mendez         
  McCalpin Kernels    
  mhawstone        
  MIT Volume Stress Test     
  musbus   
--  NAS Kernels  
--  NCR benchmark        
  Neil Nelson Business Benchmark 
  nettest   
  NFSstone         
  nhfsstone        
  Object Operations Benchmark  
  parcbench         
  Performance Testing Alliance 
  Picture Level Benchmarks     
  plum benchmarks    
  RAMP-C   
  Rhealstone        
  RhosettaStone        
  SEI ADA benchmark       
  SLALOM         
--  smith          
  System Development Throughput    
  SPECmark         
  SPEC SDM 1.0 
  Stanford Small Programs Benchmark Set  
  tbench   
  ttcp          
  University of Wisconsin benchmarks  
  U.S. Steel  
  VGX benchmark        
  Whetstone        
  Workstation Laboratories benchmark  
  WPI benchmark suite   
  x11perf   
--  xbench         
--  Xlib Protocol Test Suite  
--  xstone         
  ZQH benchmark        

AGE Test Suite

A measure of X-Window system performance from AGE Logic Inc.

AIM Technology Benchmark

A commercially controlled family of benchmarks.

Suite II        Thirty-six single-threaded measures of the timing
                of specific system functions.

Suite III       A mix of synthetic and real workloads that
                simulate a multiuser Unix environment.  The
                benchmark measures user throughput and response
                time degradation under load.

Performance Report
                Measures in the AIM performance report are derived
                using the Suite III benchmark:

                AIM         overall performance, normalized with a
                            VAX 11/780 equal to 1.0

                Users       number of active users where response
                            time becomes unacceptable

                throughput  peak throughput with the optimum
                            number of users

                utilities   a measure of throughput for a mix of
                            standard Unix utilities


contact: Amy Yowell (800) 848-8649

bonnie

An I/O throughput benchmark developed by Tim Bray at the
University of Waterloo.  Bonnie measures filesystem performance
under conditions designed to resemble operations on large text
databases, using a 100MB file.

contact: tbray@watsol.waterloo.edu

BYTE Unix Benchmark

The BYTE Magazine Unix benchmarks were last updated in July, 1991
to version 3.  The version 3 benchmark includes measurements of
double precision arithmetic (dhrystone 2 with and without
register variables), 7 arithmetic measures, system call overhead,
process creation (fork and execl), file copy throughput, pipe
throughput and context switching, and a recursive Tower of Hanoi.

Chalmers Workstation User's Benchmark Suite

A suite of benchmarks developed in 1991 by Chalmers University of
Technology in Gothenburg, Sweden.

DBMS Labs Benchmark

A benchmark produced in 1991 by DBMS Magazine to measure database
server performance in a LAN-based client-server environment.
Results are expressed as transactions per second for each of
seven characteristic application mixes: accounting, analyst,
batch reporting, data entry, financial, heavy insert, and sales
support.

Debit-Credit benchmark

Until recently Debit-Credit was the most common transaction
processing benchmark.  Debit-Credit is a stylized abstraction of
the teller support system in a multi-branch bank.  It was first
described in the article "A Measure of Transaction Processing
Power", anonymous, in the April 1985 issue of Datamation.

dhrystone

A synthetic workload developed by R.P. Weicker in 1984.  Dhrystone
is patterned after the Whetstone benchmark but reflects a systems
rather than a scientific workload.

available from netlib@ornl.gov; "send index from benchmark"

fsanalyze

gbench

A measure of X-Window system graphics performance.

available from uunet: comp.sources.unix; volume15

Gabriel

A LISP benchmark.

Gibson mix

Not strictly a benchmark:  J.C. Gibson of IBM in 1960 used
dynamic instruction traces of programs running on the IBM 650 and
704 computers to establish the relative frequency of each machine
instruction.  He then used an appropriately weighted average of
individual instruction timings to compute ... MIPS! (actually
KIPS on those machines).

GPCmark

Graphics Performance Count, a measure of graphics performance
defined by the National Computer Graphics Association (NCGA).  It
measures graphics system performance in terms of metrics like
polygons/second and vectors/second.

iobench

iocall

iostone

An I/O performance benchmark developed by Arvin Park at Princeton
in 1986.  It measures filesystem performance for a specific mix
of I/O sizes and operations.

contact: park@iris.ucdavis.edu

available from: nbslib@cmr.ncsl.nist.gov; "send index"

iozone

A highly portable I/O performance benchmark by Bill Norcott that
measures filesystem performance reading and writing sequential
files with a variety of block sizes.

contact: norcott_bill@tandem.com

ipbench

An image processing benchmark by Mark T. Noga at Lockheed.  This
benchmark measures the performance of 50 common image processing
operations on 512x512 images with 8-bit pixels.  This benchmark
emphasizes integer and Boolean operations and program flow.

available from: nbslib@cmr.ncsl.nist.gov; "send index"

Kenbus1

Kenbus1 is half of the Systems Performance Evaluation
Cooperative's System Development - Multitasking (SDM) benchmark.
Kenbus1 is derived from the Monash University (Melbourne,
Australia) suite of Unix benchmarks (musbus version 5.2)
originally developed by Ken McDonell.  The system under
evaluation is used to execute increasing numbers of copies of a
standard workload ("script").  Throughput is measured in scripts
completed per hour, and both the throughput curve and peak value
are reported.  The script uses some 18 Unix commands including cc,
cat, grep, mkdir, and rm.  The mix is designed to represent a
Unix/C research and development environment.

Khornerstone

A commercially controlled benchmark.  Results are copyrighted and
closely held by Workstation Laboratories.

contact: (214) 570-7100

lhynestone

A measure of performance for graphics systems.

Linpack

A floating point benchmark developed by Jack Dongarra, then at
Argonne National Laboratory, in 1979.  The benchmark consists of
a series of Fortran kernels representing common linear algebra
matrix operations on 100x100, 300x300 and 1000x1000 matrices.
The results are in Millions of Floating Point Operations per
Second (MFLOPS).

contact: dongarra@cs.utk.edu

available from netlib@ornl.gov; "send index from benchmark"

Livermore Loops

A floating point benchmark developed at Lawrence Livermore
Laboratories.  The benchmark consists of some 50 Fortran inner
loops taken from various applications in use at the Labs in the
early '80s.

Los Alamos benchmarks

available from: nbslib@cmr.ncsl.nist.gov; "send index"

mendez

McCalpin Kernels

John McCalpin at the University of Delaware is collecting results
of a series of Fortran kernels.  These measure how well a
machine's memory system can move large uncached contiguous
structures around and the extent to which floating point
operations can be overlapped with fetches from memory.  The
kernels measure time for copy, scale, add, and SAXPY operations
(c=a, c=constant*a, c=a+b, c=a+constant*b).

mhawstone

A synthetic benchmark developed by Jeff Mawhirter at Mead Data
Central.  It is intended to be more representative of a typical
application than whetstone or dhrystone.

contact: jeffm@meaddata.com

MIT Volume Stress Test

A measure of X-Window system performance.

musbus

The musbus benchmark was developed at and is available from
Monash University in Melbourne, Australia.  It simulates typical
and heavy loads on general purpose Unix machines (i.e. servers
rather than workstations).  The benchmark produces accurate
indications of process throughput and I/O bandwidth and can be
used for kernel and configuration tuning.  The current version is
5.2 and has been repackaged as the KENBUS1 benchmark in the SPEC
SDM suite.

contact:  musbus@bruce.cs.monash.edu.au
          kenj@yarra.oz.au (to join newsletter mailing list)

available from uunet: comp.sources.unix volume11/musbus  (5.0)
                      volume12/musbus5.2
                      (upgrade kit from 5.0)

NAS Kernels

NCR benchmark

Neil Nelson Business Benchmark

The benchmark is a suite of eighteen synthetic workloads.  These
are used to measure computer performance by measuring the elapsed
time to execute various numbers of copies of the workstream.
Seven of the tests are summed to produce a CPU score and nine are
summed to produce a Disk score.  This is a commercially
controlled benchmark.  Raw results are copyrighted and closely
held.

nettest

A derivative of ttcp.  This benchmark measures throughput
across several TCP and UDP connections at once.

NFSstone

A measure of Network File System (NFS) performance standardized
by 6 major NFS vendors in October, 1991.  The NFSstone
specification has been submitted to the Systems Performance
Evaluation Cooperative (SPEC) for inclusion in a SPEC benchmark
suite.  NFSstone is based on work by Auspex.

contact: Mike Bennett (408) 492-0090

nhfsstone

A measure of Network File System (NFS) performance developed and
maintained by Legato Systems.  It measures server response time
and server load (calls per second).

available from: nhfsstone-request@legato.com
                "send unsupported nhfsstone"

Object Operations Benchmark

A measure of database performance, emphasizing data retrieval and
manipulation from a CAD perspective (i.e., lots of simple
operations, few elaborate queries).

parcbench

A benchmark written in C to measure performance of Unix System V
shared-memory multiprocessor machines.

Performance Testing Alliance

The Performance Testing Alliance is an industry group established
in mid-1990 to standardize testing of Local Area Network
performance.

Picture Level Benchmarks

A set of benchmarks developed by the Graphics Performance
Characterization group to characterize the performance of
graphics display systems.

contact: Bob Willis (NCGA) (703) 698-9600

plum benchmarks

available from uunet: comp.sources.unix; volume20

RAMP-C

A synthetic OLTP benchmark used by IBM, originally developed in
COBOL under CICS.  RAMP-C measures transactions per second (TPS);
each transaction averages 200,000 instructions and 19 I/Os.

Rhealstone

A benchmark for performance elements critical to real-time
multitasking systems.  Rhealstone measures:

    task switch time

    preemption time

    interrupt latency

    semaphore related delays

    deadlock break time

    intertask message latency


Rhealstone was first described in "Rhealstone: A Real-Time
Benchmarking Proposal", Dr. Dobb's Journal, February, 1989.

RhosettaStone

A benchmark for speech synthesis and speech recognition.

SEI ADA benchmark

A set of Ada (a trademark of the U.S. DoD) language benchmarks
developed by the Software Engineering Institute at Carnegie-Mellon
University.

SLALOM

Developed by Gustafson et al. at Ames Lab, Iowa State University
and described in "SLALOM: The First Scalable Supercomputer
Benchmark", Supercomputing Review, November, 1990.

available from archive at: tantalus.al.iastate.edu in /pub/Slalom

smith

System Development Throughput

Software Development Throughput (SDeT) is half of the Systems
Performance Evaluation Cooperative's System Development -
Multitasking (SDM) benchmark.  SDeT was developed from a
proprietary AT&T benchmark by the University of California at
Berkeley.  The system under evaluation is used to execute
increasing numbers of copies of a standard workload ("script").
Throughput is measured in scripts completed per hour, and both
the throughput curve and peak value are reported.  The script uses
some 150 Unix commands including spell, nroff, diff, make, and
find.  The mix is designed to represent a C-based software
development environment.

SPECmark

The SPEC Benchmark Suite is the first benchmark produced by the
System Performance Evaluation Cooperative, a consortium of 22
computer manufacturers.  It measures integer and floating point
performance but includes little input/output.  SPEC recommends
using SPEC to measure the speed of systems in numeric-intensive C
and Fortran single-user environments.  The current benchmark is
version 1.2, released in October, 1989.

Results are reported relative to a reference time derived on a
DEC VAX 11/780.  The SPECmark result is calculated as the
geometric mean of 10 individual measures.  The individual
benchmarks are:

gcc1.35         This benchmark measures the time it takes the Free
                Software Foundation's GNU C compiler (version
                1.35) to compile 19 of its pre-processed source
                files into Sun-3 assembler files with full
                optimization.  This benchmark is written in C and
                emphasizes integer arithmetic, memory, and
                input/output (about 10% of execution time is spent
                doing disk I/O).

espresso        This benchmark measures the time it takes
                espresso, a program to generate and optimize
                Programmable Logic Arrays distributed by the
                Electrical Engineering and Computer Science
                department of the University of California at
                Berkeley, to process four different sets of
                inputs.  This benchmark is written in C and
                emphasizes integer arithmetic and memory.

spice2g6        This benchmark measures the time it takes spice,
                an analog circuit simulator developed by the
                Electrical Engineering and Computer Science
                department of the University of California at
                Berkeley, to simulate a gray code circuit five
                times.  This benchmark is written in Fortran with
                an interface to Unix written in C and emphasizes
                memory and program flow.

doduc           This benchmark measures the run time of a large
                Fortran kernel extracted from a thermo-hydraulic
                simulation of a nuclear reactor, originally
                written by Nhuan Doduc.  This benchmark emphasizes
                non-vectorizable 64-bit floating point arithmetic
                and program flow.

nasa7           This benchmark measures the run time of a
                synthetic benchmark originally written by David
                Bailey and John Barton for the Cray.  The
                benchmark contains seven Fortran kernels
                representing common scientific computations like
                matrix multiply, complex radix 2 fast Fourier
                transform, and Cholesky decomposition.  This
                benchmark emphasizes vectorizable double precision
                floating point arithmetic and memory.

li              This benchmark measures the run time for a small
                object-oriented Lisp interpreter, XLISP 1.6 by
                David Michael Betz, to solve a recursive
                implementation of the 9-queens problem.  This
                benchmark emphasizes program flow and integer
                arithmetic.

eqntott         This benchmark measures the run time for a program
                developed at the University of California at
                Berkeley that translates a Boolean equation to a
                truth table.  The dominant operation is sorting.
                This benchmark is written in C and emphasizes
                program flow and memory.

matrix300       This benchmark measures the run time for a Fortran
                implementation of the Linpack kernel routine SAXPY
                for 300 x 300 matrices.  This benchmark emphasizes
                double precision floating point arithmetic and
                memory.  Performance is sometimes made worse by
                optimization.

fpppp           This benchmark measures the run time for a quantum
                chemistry computation of the two electron integral
                derivative for eight atoms, developed by H.B.
                Schlegel.  This benchmark is written in Fortran
                and emphasizes partially vectorizable double
                precision floating point arithmetic.

tomcatv         This benchmark measures the run time for a Fortran
                mesh generation program written by Dr. Wolfgang
                Gentzsch.  This benchmark emphasizes fully
                vectorizable double precision floating point
                arithmetic and memory.


contact: SPEC care of NCGA at (703) 698-9600

SPEC SDM 1.0

System Development - Multitasking is the Systems Performance
Evaluation Cooperative's second benchmark.  It consists of two
measures: Software Development Throughput (SDeT), measuring
performance in program development applications, and Kenbus1,
measuring performance in research and development applications.

Stanford Small Programs Benchmark Set

tbench

available from uunet: comp.sources.misc

ttcp

The most commonly used measure of TCP/IP performance.  Measures
throughput on a single TCP or UDP circuit.  Results are not
verified or audited, and like TP1 the benchmark is frequently
"enhanced".

available from archive at: sgi.com in /sgi/src/ttcp.shar

University of Wisconsin benchmarks

A measure of relational database performance, said to be a good
indicator of join performance.

U.S. Steel

A set of COBOL business kernels assembled by U.S. Steel in 1965.
Results are relative to the performance of an IBM 1460; modern
mainframes score well in excess of 5,000.

VGX benchmark

Whetstone

This is a synthetic workload representing a mix of floating
point, procedure call, transcendental functions, etc. that make
up a typical scientific workload.  The original Whetstone
benchmark was written in Algol by Curnow and Wichmann at the
U.K.'s National Physical Laboratory in Whetstone, England.  The
benchmark is described in "A Synthetic Benchmark", Computer
Journal, February, 1976.  Although both Fortran and C versions
exist, the Whetstone code is relatively old and fits entirely in
cache on most current machines.

available from netlib@ornl.gov; "send index from benchmark"

Workstation Laboratories benchmark

Workstation Laboratories measures multiuser performance using a
transaction processing benchmark similar to TP1.  The Workstation
Laboratories benchmark is modeled after TP1 but is written
entirely in C (for portability).  This benchmark is
disk-intensive.

WPI benchmark suite

A set of benchmarks developed by Worcester Polytechnic Institute
Mach Research Group to compare the performance of Unix
implementations.  The suite consists of five synthetic workloads
simulating gcc, a client-server database application, file
backup, FTP transfer, and X-windows client-server activity.  It
also includes one real benchmark which solves a mathematical
model of a jigsaw puzzle.

contact: mach@cs.wpi.edu

x11perf

Actually a performance monitor for MIT's X-Windows system rather
than a benchmark, but frequently used to gather performance
information anyway.

xbench

Xlib Protocol Test Suite

xstone

ZQH benchmark

This benchmark measures the run time for a computational fluid
dynamics code written in Fortran simulating one physical time
step for unsteady blood flow near the valve leaflet in the Penn
State artificial heart.  The program is partially vectorizable
and requires some 30,000 million floating-point operations
(MFLOP) to complete.  Results run from 2 minutes on a Cray Y-MP
to 20 hours on a Sun-3.

