Today was the first day of the Laval Virtual show, the biggest French meeting for virtual and augmented reality companies. After a full day going from one stand to the next, here is what we can say about the current state of AR.
1. Markerless augmented reality everywhere
And when I say markerless, I really mean it. I’m not talking about natural feature tracking, which still needs some sort of image or shape to give the tracking algorithm a reference point (yes, Augment is one of those). The technologies I saw there can detect any known object and precisely overlay information or 3D models on top of it.
2. 3D scanning
When you think about augmented reality, you think about 3D models, and at some point those models need to be created. At Laval Virtual there were a lot of exciting 3D scanning companies. For instance, Digiteyezer announced and showcased their new face-scanning tool that lets you be inserted into a game, or become a virtual avatar in a chat app, in one go. Meanwhile, Bony3D and Solidexpress had a sub-millimeter 3D scanner that lets you see the smallest detail, in 3D.
3. 3D printing
Once you have your 3D model, you can display it, but what’s even more interesting is that you can print it out again at a different size, in a different material, or with a slightly modified shape. 3D printers are getting cheaper year after year, and at Laval some of them were actually affordable. I mean, as affordable as a CD recorder was 15 years ago. You can guess where this is going.
4. Kinect, thousands of them
The Kinect is really the one device every stand needs to have. It’s everywhere. Most of the time it’s used for its original purpose: detecting body movements to control something. That something can be a character, a robotic arm, a 3D object or a car. The non-standard usages were harder to guess, like this Korean metal bump controlled by the movement of a candy wrapped in yellow tape, used to move little cubes around (see the bonus section).
5. Mind-controlled computers
There was a guy showcasing a compact device that reads brainwaves and translates them into actions, in this case to move around in a first-person game. How is this related to augmented reality? Simple: when you get those fancy AR glasses, you’ll need a way to control them. If you want to know what object you are looking at, you could simply look at it and think “define”. The software would translate your thought into action and open the Wikipedia page related to the object. To do that you need a compact device that can be mounted on a pair of glasses, and it seems we are going in that direction. In no time we will be able to look at someone and think “friend”, “follow” or “block”.
Bonus: The crazy Asian stuff
As at any other tech conference, you need some Japanese and Korean teams to show things that are still too far out for the rest of us. This time my preference goes to the MoleBot, but the “model and draw” display was quite awesome too. It’s a surface that feels like sand: you can harden or soften it to create shapes, then select a color and draw on the 3D surface you’ve just created. Another cool thing was a haptic gauntlet that lets you feel the texture of virtual objects. With a little scanner they can even record a surface and assign it to a virtual 3D entity; then when you touch it, you get the feeling that it’s the real surface. Awesome.