From: daniel@alv.umd.edu (Daniel DeMenthon)
Subject: Drawing with light
Date: 31 Mar 92 22:26:11 GMT
Organization: Center for Automation Research, Univ. of Md., College Park, MD 20742

A couple of weeks ago, I asked for a source for the pictures of Picasso drawing in space with a flashlight. I wanted to make a slide illustrating the principle of drawing in space with light, for a talk in which I describe our work with a VideoMouse, which uses small light sources and a video camera. Many thanks to all, and especially to Russell Kirsch, who gave me the name of the photographer, Gjon Mili, and a Picasso retrospective reference. I found that this photographer published a book with a dozen drawings with light, some in color. The book is *Picasso's Third Dimension*, published in 1970. Well, Picasso would probably have gone full VR 3D art if he were still around.

Several of you asked for details about our project. We applied last year for a patent on the computer vision system, including the pose algorithms. You can look at it when it comes out. Meanwhile, here are some details.

Our 3D mouses have a few 3 mm bulbs mounted on a transparent frame, around 5 cm apart in a noncoplanar arrangement. We have mouses with 4 light sources, and a mouse with 6 light sources. The frame is mounted at the end of a tube, and the user holds the tube as he would hold a pen. A camera looks at the user and his mouse. Cheap black and white cameras can be used (Sanyo-Fisher-Price OK). The video signal is sent to a box which detects the bright spot locations in the images. The box started with an ImageWise digitizer (an old Ciarcia design still sold by MicroMint) but has quite a few changes, including an 8030 microcontroller from Signetics running at 33 MHz and different code in ROM. The credit for the electronic boost-up goes to Yukio Fujii of Hitachi, who is working with me. The positions of the spots are sent to a Mac through a serial line.
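To give a feel for what the box does, here is a minimal software sketch of bright-spot detection: threshold the frame, group bright pixels into connected regions, and report each region's centroid. This is only my illustration of the idea, not the firmware; the toy frame format and the threshold of 200 are made-up choices.

```python
# Sketch of bright-spot detection: threshold, flood-fill connected
# regions, report centroids. Frame is a toy 2D list of 8-bit pixel
# values; THRESHOLD = 200 is an arbitrary illustrative choice.

THRESHOLD = 200

def find_spots(frame):
    """Return the (row, col) centroid of each connected bright region."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    spots = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= THRESHOLD and not seen[r][c]:
                # Flood-fill this bright region, collecting its pixels.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= THRESHOLD
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Centroid of the region, at sub-pixel resolution.
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                spots.append((cy, cx))
    return spots

# Toy frame: two 2x2 bright blobs on a dark background.
frame = [[0] * 8 for _ in range(6)]
for y, x in ((1, 1), (1, 2), (2, 1), (2, 2), (3, 5), (3, 6), (4, 5), (4, 6)):
    frame[y][x] = 255

spots = find_spots(frame)   # two centroids, one per blob
```

Averaging the pixels of each blob gives spot locations at sub-pixel resolution, which is why a blob a few pixels across is not a problem.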
From these locations, the Mac computes the pose of the mouse in space using new fast algorithms. No trigonometry, just the multiplication of a *precomputed* matrix with vectors of image coordinates, plus a couple of square roots and divisions. You can find the description of the algorithms in the Proceedings of the Image Understanding Workshop, January 1992, San Diego, published by Morgan Kaufmann. The paper is called *Model-Based Pose Calculation in 25 Lines of Code*, by Daniel DeMenthon and Larry Davis. I will also present this technique at the European Conference on Computer Vision in Italy on May 18th.

We get new mouse poses at a rate of around 1/20 sec., and will reach 1/30 sec. soon. We will probably not try to go to 1/60 sec., because it would require a new electronic design, and we are more interested in developing software that proves the value of the concept than in developing hardware. The translations and rotations of the cursor on the screen are already pretty smooth, and the response feels almost instantaneous. The environment in which we demonstrate the mouse is still minimal; we grab a cube with faces of different colors on the screen, then move it and rotate it around.

We do not try to limit image processing to predicted rectangles, because it seems more robust to process whole frames without preconceptions. The user's motions are hard to model and can be fast and jerky. We do use distances between spots in successive frames, combined with other geometric information, for the purpose of labelling the spots (i.e. finding which spot corresponds to which mouse LED). When an LED is hidden by the user's hand, overlaps another LED, or gets out of the field of view, the screen cursor sits still. We wait until all LEDs are visible again to compute 3D poses. Here too, predictive techniques could keep the cursor running, but so far we have preferred stopping the cursor to risking wrong predictions.
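To make the "precomputed matrix times image coordinates" idea concrete, here is a sketch of a scaled-orthographic (weak-perspective) pose step of that flavor. This is my own illustrative reconstruction from the description above, not the patented code or the paper's algorithm; the LED layout, the 30-degree test rotation, and the scale value are all invented for the example.

```python
import math

# Illustrative weak-perspective pose step: one precomputed
# matrix-vector multiplication, two square roots, and a cross product.
# LED geometry and scale below are made up for the demonstration.

# Model vectors from LED 0 to LEDs 1..3, in cm (noncoplanar).
A = [[5.0, 0.0, 0.0],
     [0.0, 5.0, 0.0],
     [0.0, 0.0, 5.0]]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def pseudoinverse(A):
    """B = (At A)^-1 At -- computed once per mouse geometry."""
    At = [list(col) for col in zip(*A)]
    return matmul(inv3(matmul(At, A)), At)

B = pseudoinverse(A)   # the *precomputed* matrix; nothing inverted per frame

def pose(B, xs, ys):
    """Rotation rows i, j, k and weak-perspective scale from spot positions.

    xs, ys: image coordinates of LEDs 1..3 relative to the LED-0 spot.
    """
    I = [sum(B[r][k] * xs[k] for k in range(3)) for r in range(3)]
    J = [sum(B[r][k] * ys[k] for k in range(3)) for r in range(3)]
    nI = math.sqrt(sum(v * v for v in I))   # one square root...
    nJ = math.sqrt(sum(v * v for v in J))   # ...and the other
    i = [v / nI for v in I]
    j = [v / nJ for v in J]
    k = [i[1]*j[2] - i[2]*j[1],             # k = i x j
         i[2]*j[0] - i[0]*j[2],
         i[0]*j[1] - i[1]*j[0]]
    return i, j, k, (nI + nJ) / 2.0         # scale ~ focal length / depth

# Synthetic check: mouse rotated 30 degrees about the camera axis, at a
# depth giving scale 0.01; project the model vectors, then recover pose.
theta, s = math.radians(30.0), 0.01
xs = [s * (math.cos(theta)*v[0] - math.sin(theta)*v[1]) for v in A]
ys = [s * (math.sin(theta)*v[0] + math.cos(theta)*v[1]) for v in A]
i_vec, j_vec, k_vec, scale = pose(B, xs, ys)
```

Since the pseudoinverse depends only on the LED geometry, the per-frame work really is just a handful of multiply-adds, two square roots, and a few divisions.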
With more LEDs it would be possible to keep computing mouse poses reliably with one or two LEDs obstructed. Clearly, the problem of missing or overlapping light spots is the major drawback of our technique. 3D mouses using ultrasound triangulation probably have similar problems. I wonder if Polhemus-type devices have shadowing problems too. I probably need to try all these mouses to get a feel for what our strengths and weaknesses are in comparison.

I forgot to mention, our 3 mm bulbs are nice because they emit a lot of infrared light, and B&W CCD chips are very sensitive to infrared. With the camera iris closed to the max, a video mouse works fine with all neon lights on. We cannot run with sunshine on walls behind the user, though. Infrared LEDs would be better than micro bulbs, but we could not find very small IR LEDs. LEDs of 1 or 2 mm diameter would be great, because then the chances of overlapping spots in the images are smaller. Any idea anyone where to find these?

Daniel DeMenthon
Computer Vision Laboratory