Section 7.4.4.5.2
Handedness

The right vector also describes the direction to the right of the camera. It tells POV-Ray where the right side of your screen is. The sign of the right vector can be used to determine the handedness of the coordinate system in use. The default right statement is:

right <1.33, 0, 0>

This means that the +x-direction is to the right. It is called a left-handed system because you can use your left hand to keep track of the axes. Hold out your left hand with your palm facing to your right. Stick your thumb up. Point straight ahead with your index finger. Point your other fingers to the right. Your bent fingers point in the +x-direction, your thumb points in the +y-direction, and your index finger points in the +z-direction.

To use a right-handed coordinate system, as is popular in some CAD programs and other ray-tracers, make the same shape using your right hand. Your thumb still points up in the +y-direction and your index finger still points forward in the +z-direction, but your other fingers now say the +x-direction is to the left. That means that the right side of your screen is now in the -x-direction. To tell POV-Ray to act like this you can use a negative x value in the right vector like this:

right <-1.33, 0, 0>

Since x increasing to the left doesn't make much sense on a 2D screen, you now rotate the whole thing 180 degrees around by using a positive z value in your camera's location. You end up with something like this:

camera {
  location <0, 0, 10>
  up <0, 1, 0>
  right <-1.33, 0, 0>
  look_at <0, 0, 0>
}

Now when you do your ray-tracer's aerobics, as explained in the section "Understanding POV-Ray's Coordinate System", you use your right hand to determine the direction of rotations.

In a two dimensional grid, x is always to the right and y is up. The two versions of handedness arise from the question of whether z points into the screen or out of it and which axis in your computer model relates to up in the real world.

Architectural CAD systems, like AutoCAD, tend to use the God's Eye orientation in which the z-axis is the elevation, i. e. the model's up direction. This approach makes sense if you're an architect looking at a building blueprint on a computer screen: z means up and increases towards you, while x and y still run across and up the screen. This is the basic right-handed system.

Stand-alone rendering systems, like POV-Ray, tend to consider you as a participant. You're looking at the screen as if you were a photographer standing in the scene. Up in the model is now y, the same as up in the real world, and x is still to the right, so z must be depth, which increases away from you into the screen. This is the basic left-handed system.
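If you prefer to work the CAD way, with z as the up direction in a right-handed system, a camera along the following lines is one possibility. This is only a sketch; the location and the 1.33 aspect ratio are illustrative:

camera {
  location <10, -15, 6>   // stand back along -y, raised along +z
  sky <0, 0, 1>           // roll the camera so that z stays "up"
  up <0, 0, 1>
  right <-1.33, 0, 0>     // negative x makes the system right-handed
  look_at <0, 0, 0>
}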


Section 7.4.4.6
Transforming the Camera

The translate and rotate commands can re-position the camera once you've defined it. For example:

camera {
  location <0, 0, 0>
  direction <0, 0, 1>
  up <0, 1, 0>
  right <1, 0, 0>
  rotate <30, 60, 30>
  translate <5, 3, 4>
}

In this example, the camera is created, then rotated by 30 degrees about the x-axis, 60 degrees about the y-axis and 30 degrees about the z-axis, then translated to another point in space.
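Note that the transformations are applied in the order in which they appear. As a sketch (the values are illustrative, not taken from the example above), reversing the order translates the camera first and then swings the already-moved camera around the world origin:

camera {
  location <0, 0, 0>
  direction <0, 0, 1>
  up <0, 1, 0>
  right <1, 0, 0>
  translate <5, 3, 4>   // move the camera away from the origin first...
  rotate <0, 60, 0>     // ...then rotate it 60 degrees about the y-axis,
                        // which also swings its position around the origin
}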


Section 7.4.5
Camera Identifiers

You may declare several camera identifiers if you wish. This makes it easy to quickly change cameras. For example:

#declare Long_Lens =
  camera {
    location -z*100
    angle 3
  }

#declare Short_Lens =
  camera {
    location -z*50
    angle 15
  }

camera {
  Long_Lens    // edit this line to change lenses
  look_at Here
}
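If you prefer not to edit the scene file each time, you could also select the lens with a conditional directive instead of the plain camera statement above. This is only a sketch; the Use_Long_Lens flag is made up for this example:

#declare Use_Long_Lens = 1   // set to 0 to render with the short lens

camera {
  #if (Use_Long_Lens)
    Long_Lens
  #else
    Short_Lens
  #end
  look_at Here   // Here as declared elsewhere, as in the example above
}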

Section 7.5
Objects

Objects are the building blocks of your scene. There are a lot of different types of objects supported by POV-Ray: finite solid primitives, finite patch primitives, infinite solid polynomial primitives and light sources. Constructive Solid Geometry (CSG) is also supported.

The basic syntax of an object is a keyword describing its type, some floats, vectors or other parameters which further define its location and/or shape and some optional object modifiers such as texture, pigment, normal, finish, bounding, clipping or transformations.
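For example, a simple sphere with a few common modifiers might look like this (all values are illustrative):

sphere {
  <0, 1, 2>, 2                       // center and radius define the shape
  pigment { color rgb <1, 0, 0> }    // inherent color
  normal { bumps 0.4 scale 0.2 }     // simulated surface bumpiness
  finish { phong 0.8 }               // highlight properties
  rotate <0, 30, 0>                  // transformation of shape and texture
}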

The texture describes what the object looks like, i. e. its material. Textures are combinations of pigments, normals, finishes and halos. Pigment is the color or pattern of colors inherent in the material. Normal is a method of simulating various patterns of bumps, dents, ripples or waves by modifying the surface normal vector. Finish describes the reflective and refractive properties of a material. The halo is used to describe the interior of the object.

Bounding shapes are finite, invisible shapes which wrap around complex, slow rendering shapes in order to speed up rendering time. Clipping shapes are used to cut away parts of shapes to expose a hollow interior. Transformations tell the ray-tracer how to move, size or rotate the shape and/or the texture in the scene.
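As a rough sketch of the last two ideas (shapes and values are made up for illustration), the following sphere is clipped by a plane to expose its interior and wrapped in an invisible bounding box; in a real scene you would only bound a complex, slow-rendering shape:

sphere {
  <0, 0, 0>, 2
  pigment { color rgb <0.8, 0.2, 0.2> }
  clipped_by { plane { y, 0 } }                    // keep only the lower half
  bounded_by { box { <-2, -2, -2>, <2, 0, 2> } }   // invisible bounding shape
}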


Section 7.5.1
Empty and Solid Objects

It is very important that you know the basic concept behind empty and solid objects in POV-Ray to fully understand how features like halos and translucency are used.

Objects in POV-Ray can either be solid, empty or filled with (small) particles.

A solid object is made from the material specified by its pigment and finish statements (and to some degree its normal statement). By default all objects are assumed to be solid. If you assign a stone texture to a sphere you'll get a ball made completely of stone. It's as if you had cut the ball from a block of stone. A glass ball is a massive sphere made of glass.
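A sketch of such a massive glass ball (the values are made up) could look like this; the whole interior is treated as glass when rays are refracted through it:

sphere {
  <0, 0, 0>, 1
  pigment { color rgbf <1, 1, 1, 0.9> }   // mostly transparent surface color
  finish { refraction 1 ior 1.5 }         // the solid interior bends light like glass
}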

You should be aware that solid objects are conceptual things. If you clip away parts of the sphere, for example, you'll clearly see that the interior is empty and that the object just has a very thin surface.

This is not contrary to the concept of a solid object used in POV-Ray. It is assumed that all space inside the sphere is covered by the sphere's material. Thus there is no room for any other particles like those used by fog or halos.

Empty objects are created by adding the hollow keyword (see "Hollow") to the object statement. An empty (or hollow) object is assumed to be made of a very thin surface which is of the material specified by the pigment, finish and normal statements. The object's interior is empty, i. e. it normally contains air molecules.

An empty object can be filled with particles by adding fog or atmosphere to the scene or by adding a halo to the object. It is very important to understand that in order to fill an object with any kind of particles it first has to be made hollow.
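As an illustrative sketch (values are made up), the following sphere is made hollow so that the scene's fog can fill its interior; without the hollow keyword the fog would not be calculated inside it:

fog { distance 20 color rgb <0.6, 0.6, 0.6> }   // particles filling the scene

sphere {
  <0, 0, 0>, 1
  pigment { color rgbf <1, 1, 1, 1> }   // completely transparent surface
  hollow                                // allow the fog particles inside
}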


Section 7.5.1.1
Halo Pitfall

There is a pitfall in the current empty/solid object implementation that you have to be aware of.

In order to be able to put solid objects inside a halo (this also holds for fog and atmosphere), a test has to be made for every ray that passes through the halo. If the ray travels through a solid object, the halo will not be calculated. This is what anyone would expect.

The problem arises when the camera ray is inside a non-hollow object. In this case the ray is already travelling through a solid object, and even if the halo's container object is hit and is hollow, the halo will not be calculated. There is no way to distinguish between these two cases.

POV-Ray has to determine whether the camera is inside any object prior to tracing a camera ray in order to correctly render halos when the camera is inside the container object. There is no way around doing this.

The solution to this problem (which will often occur with infinite objects like planes) is to make those objects hollow too. The ray will then travel through a hollow object, hit the container object, and the halo will be calculated.
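A sketch of this fix (values are made up): an infinite ground plane that would otherwise suppress the halo is simply declared hollow:

plane {
  y, 0
  pigment { color rgb <0.5, 0.7, 0.5> }
  hollow   // rays inside this plane's half-space no longer suppress
           // halos, fog or atmosphere
}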

Note that the above is also true for atmosphere and fog.


Section 7.5.1.2
Refraction Pitfall

There is a pitfall in the way refractive (and non-refractive translucent) objects are handled.

Imagine you want to create an object that's partially made of glass and stone. You'd use something like the following merge because you don't want to see any inside surfaces.

merge {
  sphere { <-1, 0, 0>, 2 texture { Stone } }
  sphere { <+1, 0, 0>, 2 texture { Glass } }
}

What's wrong with this, you may ask? The problem is that there is no way of telling what the interior of the actual object will look like. This is not a problem specific to POV-Ray; it is a general one: you cannot define the interior of any object in a surface-based model. You would have to create some (complex) rules to decide what the interior will look like. Is it made of stone? Is it made of glass? Is it made of some bizarre mixture of glass and stone? Is it half stone and half glass? Where is the boundary between the two materials and what does it look like?

You will not be able to answer any of the above questions just by looking at the above object. You need more information.

If you wanted to create an object made half of stone and half of glass, you would use the following statements instead.

union {
  intersection {
    sphere { <-1, 0, 0>, 2 }
    plane { x, 0 }
    texture { Stone }
  }
  intersection {
    sphere { <+1, 0, 0>, 2 }
    plane { x, 0 inverse }
    texture { Glass }
  }
}

This example is correct because there is one object made only of stone and one made only of glass.

You should never use objects whose interior is not well defined, i. e. there must not be different textures with different refractive (and translucent) properties on the same object. Be aware that if you use layered textures this holds only for the lowest layer.
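A sketch with made-up values: in a layered texture only the lowest (first) layer should carry the refractive properties, since it is the layer that defines the interior:

sphere {
  <0, 0, 0>, 1
  texture {                                 // lowest layer: defines the interior
    pigment { color rgbf <1, 1, 1, 0.9> }
    finish { refraction 1 ior 1.5 }
  }
  texture {                                 // upper layer: partially transparent pattern
    pigment { checker color rgbf <1, 1, 1, 1>, color rgb <0.2, 0.2, 0.2> }
  }
}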

See also "Halo Pitfall" for a similar problem with halos.
