Article 4224 of sci.virtual-worlds:
From: jlateine@pearl.tufts.edu (JOSHUA S. LATEINER)
Subject: SCI: Voxel-VR
Organization: Tufts University - Medford, MA
Date: Wed, 20 May 1992 07:15:00 GMT
-------------
All rights are reserved; this article may not be distributed or
copied without the express permission of the Author. (C) 1992
An Alternative Approach to Computer Modeling of Reality:
Component Structure Systems
By Joshua Sebastian Lateiner
Copyright (C) April 7, 1992
ABSTRACT:
Current computer models of reality are built up from a
database of entities, each consisting of a set of points and a
relationship between them. Such systems are best suited to
engineering analysis, as they lend themselves to analysis through
a set of pre-defined deterministic equations. By describing
reality as being built up from tiny indivisible units, or
"three-dimensional pixels," a model of reality can be constructed
that is more similar to the real world. Such a system would have
many potential applications, especially with regard to
simulations -- such as those used in virtual reality models --
that attempt to model complex stochastic behaviors.
HISTORY:
Most present computer-aided design systems and most present
virtual reality technology are based on a method of representing
reality within a computer that evolved alongside current computer
technology. This method was designed for its efficiency and
utility, as it was meant to run on machines with comparatively
little computing power and memory. It relies upon a vast database
that describes physical systems, rather than modeling them.
Objects are built up from wire-frame entities, and then fleshed
out with artificial surfaces when necessary (this is most often
done in models of "virtual reality").
However, this method of modeling is very different from
physical reality. Imagine if all digital images had to be defined
in the manner now used to define three-dimensional objects within
the traditional system; a picture of a human face would need to
be reduced to a collection of points related by a number of
mathematical relationships defining splines, circles, squares and
other shapes. Physical objects in the real world do not owe their
shape to a database of points and equations, but to the
arrangement of their component structures. Component Structure
(CS) modeling is a method by which reality is modelled by a large
number of extremely simple objects. Thus, this method of modeling
creates 'billion billiard-ball' structures, made up of a great
many of these tiny components. This method is somewhat less
mathematically graceful than the traditional system, but affords
a much higher level of flexibility and complexity. This form of
modeling also changes the nature of the calculations required in
analysis; for example, one would never need to verify whether or
not two objects overlap, as this method of modeling makes it
impossible for that to occur.
Depending upon the size of the individual components, this
model can approximate the accuracy of a database driven model to
any given degree, in much the same way a Riemann sum can
approximate the area under a curve to any given degree of
accuracy.
Multiplication and division were once performed via log
tables, in order to convert these operations into simpler ones,
owing to the nature of the computers being used at the time (most
of which were human). Computers, however, have afforded humanity
brute calculating force that is potentially unlimited. Now that
we have calculating machines, we no longer use logs to multiply
and divide. The same reasoning applies to database-driven reality
modeling systems. Why should one bend over backwards to make the
task "easier" than it needs to be? Why should we adhere to a
"fluid" modeling system that allows calculations to be made with
infinite precision, using advanced calculus-based formula
analysis, when the world itself is discrete? Computers have
surely shown us that an algorithm that produces an answer as
accurate as we might want is actually more valuable than an
algorithm that could give us an infinitely accurate answer if
only it were provided with the necessary insight. Computers are
not presently good at insight, but they are good at discrete
computations, so a good reality modeling system should take
advantage of a computer's strong points, rather than sell itself
short trying to accommodate its weak points.
The strength of traditional modeling systems lies in their
predictive ability, and they are well suited to traditional
engineering analysis. Usually, this analysis is performed using
insight supplied by the user. However, CS systems excel in areas
that must account for actions the designer could not have
envisioned, such as systems involved in virtual reality, where a
human being is allowed to interact intimately with
computer-generated constructs. This method of modeling is not
necessarily superior in terms of analysis, but in terms of its
intrinsic "real" properties.
The structures that are manipulated in a database driven
reality modeling system are very far removed from physical
reality, which is to the advantage of the engineer who needs
things to be described in terms of deterministic equations. If
the structures are to provide a setting for any highly complex
stochastic behavior, however, traditional systems will be at a
loss to adequately model these interactions. Component Structure
modeling, once supplied with basic, fundamental rules, will be
able to model any behavior that might occur within the reality
being modelled. For example, a traditional system might have
difficulty deciding what would happen if one were to break a
ceramic entity into pieces, and then use these pieces to etch a
line on a metal entity. This sort of behavior is irrelevant to
engineering models, but of great pertinence to any sufficiently
detailed virtual reality system.
BASIC PROCEDURE:
The simplest case in which a three-dimensional entity is
stored within linear computer memory is the case in which the
entity being modelled has no special characteristics; it is built
up as if from individual grains of sand placed in a unique
pattern; in any given piece of space, there either is or is not a
grain of sand.
A transformation formula can easily convert a linear address
to a unique three-dimensional co-ordinate:
L(X,Y,Z) = X + (A * Y) + (A^2 * Z) Eq. 1
where A is the upper limit of the cube within which the point
lies.
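As a concrete illustration, here is a minimal sketch of Eq. 1 and
its inverse in Python (the function names and the choice of
language are merely illustrative):

    # Sketch of Eq. 1: map a 3-D coordinate onto a linear bit address,
    # assuming a cube of side A with one bit ("grain of sand") per unit.
    def linear_address(x, y, z, a):
        """Return the linear index of the unit at (x, y, z) in a cube of side a."""
        return x + (a * y) + (a * a * z)

    def coordinate(l, a):
        """Invert Eq. 1: recover (x, y, z) from a linear index l."""
        z, rest = divmod(l, a * a)
        y, x = divmod(rest, a)
        return x, y, z

    # Example: a 100-unit cube occupies 100**3 = one million bits.
    A = 100
    assert coordinate(linear_address(12, 34, 56, A), A) == (12, 34, 56)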
To define a solid cube with one hundred units per edge would
require one million bits. However, data that is not within the
immediate range of the user is less likely to need to be altered
or viewed in any manner requiring much detail. Therefore,
"macropixels" are defined, which consist of a compressed version
of the original linear data, and are represented graphically as a
composite pixel based on its component pieces. For example, if
the region were half-filled, it would be represented as a uniform
solid with half the density of the original.
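For illustration only, a one-function sketch of how such a
composite value might be computed, assuming the region is
supplied as a flat list of its primary units:

    # Sketch: the composite "density" of a macropixel is the fraction of
    # its underlying primary units that are filled; a half-filled region
    # is therefore drawn as a uniform solid at half density.
    def macropixel_density(bits):
        """bits: iterable of 0/1 primary units covered by the macropixel."""
        bits = list(bits)
        return sum(bits) / len(bits) if bits else 0.0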
By compressing various chunks of the three-dimensional
matrix in a manner consistent with the demands of the user at any
given time, this system may end up being more efficient than the
database driven modeling systems. For example, when a user is
manipulating the gross attributes of a structure, database driven
systems need to redraw and maintain every entity as they are
being manipulated. However, CS modeling will allow the
clustering of related pieces together.
The clustering process is a natural extension of the
techniques used to display a digital image on a computer screen.
The information in the image may be extremely detailed, so that
if each pixel in the picture were assigned its own pixel on the
display, the image would be much too large to show. Instead, the
computer uses standard graphics algorithms and assigns each
display pixel to a cluster of pixels in the actual image, so that
each display pixel shows a composite of its cluster. As the
viewer zooms in or out, data is either "clustered" or
"unclustered," with appropriate data compression taking place
both for pieces of the matrix that are being worked on and for
those that are currently "off camera." This process allows
greater efficiency in the management of memory.
An engineer might have methods of producing parts to within
a certain degree of accuracy; thus the size of the fundamental
units or "3-D pixels" would be based upon these needs. Areas of
the model not being manipulated at any given time would become
compressed into uniform macropixels; and areas that interact
would be maintained in a highly decompressed state.
The actual compression algorithms that might be used are not
pertinent to this discussion; there are many highly efficient
algorithms in common use, and any of these would serve the
purposes of this article. The method used to compress the
three dimensional matrix is as follows:
1) An area is selected to be compressed. In compressing an
area, it will be broken into a matrix of uniform cubes,
each of these cubes will become an individual "macropixel."
The only difference between a macropixel and a primary unit
is that the macropixel will reflect, on its faces, a
composite image of the underlying data, while a compressed
version of the actual data is stored within the area of the
matrix occupied by the macropixel. This information will
be used when the simulation needs to evaluate an event
which affects the object on a level below the detail
provided by the macropixels.
2) The pieces of the area to be compressed will always be
uniform cubes, so that the macropixel which replaces each of
these cubes does not have to be deformed to occupy the same
space.
3) Using equation one, an algorithm transforms the three
dimensional matrix back into a linear string.
4) The linear string is then fed to a standard compression
algorithm which compresses the data. The compressed linear
string is stored in the same way any linear data within the
matrix is stored, as a cube; however this cube will be
smaller than the original cube, and will fit well within
the area taken up by the macropixel.
If the cell being compressed is not made up of primary
units, but was already comprised of macropixels, then the
simulation would decompress the declustering strings of all of
the macropixels involved, string them together, and recompress
them as one string, which would reside inside one large
macropixel. Thus the resulting macropixel would be identical
with a macropixel that had been created directly from the primary
units in the area over which the macropixel was defined.
Declustering works the same way, by decompressing the
declustering strings, and then restringing them to the scale
desired, and forming macropixels (or primary units) from them.
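As a concrete illustration of steps 1 through 4, and of
declustering, here is a minimal sketch in Python; zlib merely
stands in for the unspecified "standard compression algorithm,"
and the dictionary representation of a region is an assumption
made for clarity:

    import zlib

    # Sketch of the clustering procedure: flatten a cubic region to a
    # linear string (Eq. 1 ordering), compress it, and keep a composite
    # value for the macropixel's faces.
    def make_macropixel(region, a):
        """region: dict mapping (x, y, z) -> 0/1 for a cube of side a."""
        # Step 3: serialize in Eq. 1 order, x varying fastest.
        linear = bytes(region.get((x, y, z), 0)
                       for z in range(a) for y in range(a) for x in range(a))
        # Step 4: compress the linear string; this "declustering string"
        # is stored inside the space occupied by the macropixel.
        declustering_string = zlib.compress(linear)
        # Composite value shown on the macropixel's faces.
        density = sum(linear) / len(linear)
        return density, declustering_string

    def decluster(declustering_string, a):
        """Recover the primary units of a macropixel (the reverse process)."""
        linear = zlib.decompress(declustering_string)
        return {(x, y, z): linear[x + a * y + a * a * z]
                for z in range(a) for y in range(a) for x in range(a)}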
MATRIX DESCRIPTION:
Macropixels and pixels alike are stored in a matrix-within-matrix
system (Lateiner-Coordinate Matrix -- LCM). The LCM is a matrix system
which allows points to be located within a vast array with a minimal
amount of information. It also allows worlds-within-worlds to come online
with a minimum of data-shuffling. The system is based around a unit cell
(a cube) with sides of length L. Every point within the unit cell is
identified with six co-ordinates: its location within the cell,
and the location of the cell in a matrix of cells that is of
exactly the same dimensions as the unit cell itself; e.g. a
matrix of unit cells is a cube containing L unit cells to a side.
Likewise, the location of each unit cell is identified in an
Origin Stack (OS) located at the (0,0,0) coordinate of the unit
cell. The OS contains the unit cell's location in the matrix of
unit cells, its location in the matrix one step higher, and so
forth. If L were equal to sixty-four units, for example, each cell
would hold over 250,000 units. With a finite number of
LCM coordinates, one can define an extremely large amount of
information. If each cell were described by ten sets of
co-ordinates (one for each of its parent matrices, each parent
matrix being held in a larger parent matrix), then the field
defined contains L^30 units, since each of the ten nested levels
multiplies the volume by a factor of L^3.
However, not all cells need all ten co-ordinates. Every cell
contains a location descriptor as non-manifest information along
an information axis (see below) at its origin. This information
contains only the cell's co-ordinates in a matrix of cells one
step larger than itself. The same is true of parent matrices,
which hold at their origin their co-ordinates in their parent
matrices. A bit can be located anywhere within dataspace by
checking the data at the origins of matrices, starting with the
smallest cell and working outward as far as required.
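As an illustration of the idea (not a prescription of the actual
data layout), the following sketch resolves an LCM-style nested
coordinate, given as a stack of (x, y, z) triples read from the
Origin Stack, into a single flat coordinate:

    # Sketch: resolve an LCM-style nested coordinate into a flat global
    # coordinate.  Each level is a cube holding L cells of the level
    # below it to a side; `stack` lists (x, y, z) triples from the
    # innermost cell outward, as read from the Origin Stack.
    def resolve_lcm(stack, L):
        gx = gy = gz = 0
        scale = 1
        for (x, y, z) in stack:        # innermost level first
            gx += x * scale
            gy += y * scale
            gz += z * scale
            scale *= L                 # each parent level is L times coarser
        return gx, gy, gz

    # Example: the unit at (3, 0, 1) inside cell (2, 5, 0) of a 64-unit matrix.
    print(resolve_lcm([(3, 0, 1), (2, 5, 0)], 64))    # -> (131, 320, 65)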
The LCM method of describing three-dimensional space is
particularly useful when the environment is being shared by
several separate computers or processors which must work
together. For instance, the task of maintaining a certain area
of the matrix could be assigned to a certain processor. The non-
global definitions of the matrix which hold all of CS together
mean that each unit of the matrix does not need to know what all
the other units are doing.
CHARACTER DESCRIPTOR STRING:
Every cell might be defined, using information stored at its
origin, to have a "depth" of 'D'. This depth determines the
number of non-manifest bits stored behind each manifest pixel.
Thus, of every D+1 bits, one is treated in the traditional
fashion, as explained under Basic Procedure; the additional D
bits associated with that manifest pixel are its character
descriptor string (CDS). This
CDS may contain such information as the linkage descriptors,
color, nature of the material, etc.
This depth may be thought of as an additional position
coordinate, so that equation one becomes (where I is a position
along an information axis, along which the CDS is stored "behind"
each pixel):
L(I,X,Y,Z) = I + (D*X) + (D * A * Y) + (D * A^2 * Z) Eq. 2
The information axis contains all "non-manifest"
information. Bits that lie along this axis are not shown as
being part of the three dimensional matrix, as they lie in a
limited "fourth dimension." The co-ordinate information located
at the origin of every cell is stored in this fashion.
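A minimal sketch of Eq. 2 follows, treating D as the total number
of bits stored for each pixel along the information axis (if D
counts only the CDS bits, the stride would instead be D + 1); the
function name is illustrative:

    # Sketch of Eq. 2: address bit I of the pixel at (x, y, z) in a cube
    # of side A, where each pixel stores `depth` bits along the
    # information axis (manifest bit plus its descriptor strings).
    def linear_address_4d(i, x, y, z, a, depth):
        return i + depth * (x + a * y + a * a * z)

    # Example: the third bit behind pixel (1, 2, 3) in a 16-unit cell
    # whose pixels each carry an 8-bit record.
    addr = linear_address_4d(3, 1, 2, 3, 16, 8)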
The linkage descriptors are, in their simplest form, six
bits which describe which of each pixel's six neighbors the pixel
is connected to. Linking allows objects to be defined within the
matrix that can be moved as one unit. In the standard model,
without CDS's, two pixels are assumed to be linked if they are
adjacent and they are both 'on.' This permits data structures
not originally intended to be modelled in a three dimensional
model to manifest as linked, three dimensional objects (see
Dynamic Data Environments, below).
Linkage descriptors may be more complicated, though, and
contain information about the nature of these bonds, and the
forces required to break or create them. The pixel's linkage
information is the CS model's equivalent of intermolecular
forces. In evaluating the connectedness information for a macro-
pixel, a simple analysis of the individual surface pixels is
performed, and a composite connection is defined for each of the
six faces.
The process for determining if two macropixels are linked is
as follows: if the strength of the link is not specified, then a
macropixel is linked to its neighbor in any given direction if
any pixel on that face is linked to one on the neighboring face.
If the link strength is specified, then a macropixel's linkage
strength with its neighbor is a composite of the linkage
strengths of all the component pieces that border the faces of
two bordering macropixels.
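For illustration, here is a sketch of a six-bit linkage
descriptor and of the unspecified-strength rule described above;
the face ordering is an arbitrary convention chosen for the
example:

    # Sketch: a six-bit linkage descriptor, one bit per face, in the
    # (arbitrary) order +x, -x, +y, -y, +z, -z.
    FACES = ("+x", "-x", "+y", "-y", "+z", "-z")

    def is_linked(descriptor, face):
        """descriptor: 6-bit integer; face: one of FACES."""
        return bool(descriptor >> FACES.index(face) & 1)

    def macropixel_linked(surface_descriptors, face):
        """Unspecified-strength rule: a macropixel is linked to its
        neighbor in a given direction if any pixel on that face is
        linked across it."""
        return any(is_linked(d, face) for d in surface_descriptors)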
The CDS may also contain information about the color of the
pixel, or any other attribute one might wish to ascribe to any
given component. Thus, any pixel in a field of depth D can have
as many characteristics as will fit in a CDS of length D. The
CDS's are all arranged uniformly, and are
strung together in a four-dimensional manner when a macro-pixel
is being created. This uniformity is used to great advantage, as
the redundancy inherent in such a setup is eliminated through the
compression algorithms.
DYNAMIC TECHNIQUES:
Just as a cell with a depth will contain CDS's for each
pixel, a cell which is meant to contain dynamic systems would
contain a dynamic descriptor string, DDS, of length 'X' stored
behind the CDS. Thus, the "depth" of that cell becomes D + X.
Each pixel within that cell has associated with it not only
character descriptors but dynamic descriptors such as velocity.
If two velocity vectors are stored, both the present and the one
from the preceding moment, the acceleration of the object may be
calculated. Forces, which may only be defined locally, are added
by allocating processor time; a "gravitational force" could be
brought online such that it would add to the downward velocity of
any pixel. A secured object would be affected not by moving, but
by feeling a force equal to the mass of the pixel (described in
its CDS) times the change in its velocity divided by time:
F = m * a        a = (v2 - v1) / t
F = m * (v2 - v1) / t                                      Eq. 3
Where F is the force, m is the mass, a is the acceleration,
and t is the time elapsed between the old velocity (v1) and the
new velocity (v2).
In this manner, any number of forces, real or imaginary, can
be modelled. This is just the beginning of the full spectrum of
Newtonian behavior and analysis that could be performed using a
CS system for dynamic modeling.
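As a hedged sketch of how such a dynamic descriptor might be
updated each moment (the field names, the one-axis simplification
and the explicit time step are assumptions made for illustration):

    from dataclasses import dataclass

    # Sketch: a pixel's dynamic descriptor string holds its current and
    # previous velocity, so acceleration and force (Eq. 3) can be
    # recovered at any moment.
    @dataclass
    class DynamicDescriptor:
        v_now: float = 0.0    # current velocity (one axis, for simplicity)
        v_prev: float = 0.0   # velocity from the preceding moment

    def apply_gravity(dds, dt, g=-9.8):
        """A local 'gravitational force' adds to the downward velocity."""
        dds.v_prev = dds.v_now
        dds.v_now += g * dt

    def force_felt(dds, mass, dt):
        """Eq. 3: the force a secured (immobile) pixel feels instead of moving."""
        return mass * (dds.v_now - dds.v_prev) / dt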
Non-deterministic events may also be modelled, such as the
shattering of an object. Fractal events are utilized to achieve
this purpose. If one were to drop a ceramic vase onto the floor
within this simulation, the point of impact would be the
originating point for an algorithm to produce fracture-force
lines that propagate fractally throughout the structure (both
the vase and the floor). If the fracture-force is larger than
the linking force at any given point along the fracture line,
then the two pieces become "unlinked."
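One hedged way to realize such an event in code is sketched
below; the decay factor and random jitter merely stand in for a
genuinely fractal branching rule, and the data layout is assumed
for illustration:

    import random
    from collections import deque

    # Sketch: propagate a fracture force outward from the impact point,
    # breaking links wherever the force still exceeds the local link
    # strength.
    def fracture(link_strength, impact, impact_force, decay=0.8):
        """link_strength: dict {(pixel_a, pixel_b): strength}; returns the
        set of broken links."""
        neighbours = {}
        for a, b in link_strength:
            neighbours.setdefault(a, []).append(b)
            neighbours.setdefault(b, []).append(a)
        broken, seen = set(), {impact}
        queue = deque([(impact, impact_force)])
        while queue:
            pixel, force = queue.popleft()
            for other in neighbours.get(pixel, []):
                key = (pixel, other) if (pixel, other) in link_strength else (other, pixel)
                if force > link_strength[key]:
                    broken.add(key)            # the two pieces become "unlinked"
                    if other not in seen:
                        seen.add(other)
                        queue.append((other, force * decay * random.uniform(0.8, 1.2)))
        return broken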
The preceding descriptions give a glimpse of the
possibilities that are unleashed using this approach to modeling
physical systems. The essential idea is to create a system out
of a small number of basic pieces that can operate together
without human insight. The organization of these pieces should
be the part of the model that makes it interesting, not the
complexity of any given piece.
INTERFACING WITH THE MODEL:
The beauty of the CS model is perhaps most apparent when one
is confronted with the flexibility one has in manipulating and
interfacing with it. It is not centered around any particular
way of being interfaced with, and therefore is capable of being
interacted with in an unlimited number of different ways. For
example, sight could be simulated by using the following
technique (a sketch in code follows the list):
1) A point of reference for the viewer is defined.
2) Density and spread variables are defined, such that the
density is equal to the number of "photons" that will be
used, and the spread is a number defining how fast the
distribution spreads through space.
3) A pulse of photons is emitted from the reference point.
These photons proceed through space and spread out, but
remain finite in number. When they strike non-empty
space, they return the information to the source to be
interpreted.
4) If a photon has yet to strike anything after covering a
specified distance, it gives up.
5) The information returned by the photons is compiled at
the source into an image to be displayed. This can be
done in a great many different ways, too numerous to
discuss here.
6) After an initial full burst is sent out, if the initial
reference point has not changed orientation or position,
then a maintenance program is initiated which sends out a
few photons distributed randomly over the visual field.
If the information they send back is identical with that
already being stored, then the program is continued. If
it is different in any way, another full burst is sent out.
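The sketch below illustrates steps 1 through 5 under simplifying
assumptions: the scene is a set of filled integer coordinates,
and a fixed-step march stands in for a proper grid traversal; the
names, step size and give-up distance are all illustrative:

    import math

    # Sketch of the "photon" viewing technique: cast a finite number of
    # rays from a reference point, step each one through space until it
    # strikes a filled unit or covers the give-up distance, and report
    # what it found.
    def cast_photons(filled, origin, directions, max_distance, step=0.5):
        """filled: set of (x, y, z) integer units that are occupied."""
        hits = []
        for dx, dy, dz in directions:
            norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
            dx, dy, dz = dx / norm, dy / norm, dz / norm
            x, y, z = origin
            travelled = 0.0
            while travelled < max_distance:            # step 4: give up eventually
                x, y, z = x + dx * step, y + dy * step, z + dz * step
                travelled += step
                cell = (int(round(x)), int(round(y)), int(round(z)))
                if cell in filled:                     # step 3: struck non-empty space
                    hits.append((cell, travelled))
                    break
            else:
                hits.append((None, max_distance))      # nothing seen along this photon
        return hits                                    # step 5: compile into an image

    # Example: one photon looking down +x at a single filled unit.
    print(cast_photons({(5, 0, 0)}, (0.0, 0.0, 0.0), [(1, 0, 0)], 20.0))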
This is only one of the many different possible ways of
interfacing. A sense of touch could be incorporated, or any
number of other senses -- different ways of interacting with the
CS model -- as the model is "real" within the computer whether or
not it is being interfaced with.
DYNAMIC DATA ENVIRONMENT:
The CS model lends itself to many different applications.
It allows a different array of analysis options from those
offered by traditional systems. It allows highly detailed,
flexible modeling of "real" environments that are as large or
small as one might wish. In addition, the CS model may be applied
to the modeling of data structures. Any digital entity can be
represented as a structure within the CS model, as the basic
procedure is geared toward representing linear binary strings as
three dimensional entities.
This capability has enormous potential in affording computer
users a more conceptual way of handling the flow of information.
Network analysts could "see" their network in an entirely new
light. Such an interface with data constructs would be a
breakthrough on the order of the difference felt by users moving
from a traditional text prompt to a graphical, icon-based
operating system, as it affords users a platform- and
environment-independent way of viewing dynamic data structures.
In other words, the CS model could be used to view data
structures within any computer currently in existence, provided
it is accessible through a network or is capable of running a CS
model simulator so that it may view itself.
This capability could be very important in security matters,
as a CS view of a network allows network managers to see how the
network could be accessed, and to tighten security accordingly.
Such security programs could be better focused, as they could
target the areas of the network that require them most, while
easing the "annoyance factor" involved when people must
work with an overly secured system on a day to day basis. Levels
and security clearance zones could be defined visually and
conceptually, without dealing with intermediary icon based
interfaces which use labels to represent data, like an icon, as
opposed to actually representing the data. Program objects could
be manipulated in front of or behind protective curtains of
Internal Security Electronics, or ISE (pronounced 'ice'). ISE
programs would prevent the unauthorized flow of data through
their boundaries. The development of ISE would be facilitated by a CS
based network model.
An ISE program could be aligned in the matrix like a three
dimensional surface that would surround protected data.
Information flowing across the boundary of the
ISE would need to be authorized, otherwise it would not be
permitted across. Program and data objects, which would manifest
in the matrix as linked objects, could be manipulated and
positioned by a network administrator in front of or behind the
surface of the ISE.
CONCLUSION:
The computer revolution has taught people many lessons; one
of the most important is the following: a complex arrangement of
simple components (such as in digital circuitry) yields a more
flexible system than the simple arrangement of complex components
(such as in analog circuitry). The Component Structure method is
the "digital" equivalent of the "analog" database driven CAD
systems presently employed. However, its importance is not necessarily
in the field of engineering, which may be better suited to
deterministic, superficially oriented "wire-frame" realities.
Component Structure models shine where the initial designer is
only creating an environment, in which a conceivably infinite
number of different events might occur.
[{(<:=======--------------------- -----------------------=======:>)}]
[{( JLATEINE@PEARL.TUFTS.EDU )}]
[{ Freelance Cyberspatial Researcher at Lateiner Dataspace Group }]
[ (My opinions are mine, as they have no one else to belong to) ]
[ < 114 Edgemont Road > < Dedicated computational devices > ]
[{ < Up Montclair, NJ 07043-1328> < are asking for obselescence... > }]
[{( < 1-201-783-3488 > < Remember the dedicated word proc?> )}]
[{(<:=======--------------------- -----------------------=======:>)}]