Mouse-Picking Collada Models with THREE.js

Filed under Experiments in Web, How to.

UPDATE: This post was written for three.js revision 49. An updated post for newer revisions of three.js is here: Mouse-Picking Collada Models with three.js, Part II.

Finding a Collada model that has been “clicked on” in a scene seems to be a common issue, and I’m getting quite a few emails asking me about the details. So here’s a how-to with annotated code.

The whole “finding an object” thing requires ray casting. When the user clicks anywhere on the screen, we project the event coordinates into 3D space so that we have a virtual “view-path” from the center of our view in the direction of the click – just like our own eyes work. We then follow that line until it intersects an object in the scene. That line is the ray we are “casting”. OK, let’s go!

There is a demo showing this here: http://jensarps.github.com/webgl_experiments/collada_picking_ray.html

The annotated source code is available here: https://github.com/jensarps/webgl_experiments/blob/master/collada_picking_ray.html. I’ll be following along this code in this post.

For this to work, you need a ray caster that is able to detect Collada models; I wrote about this extensively before, so I recommend you just use the ReusableRay class.

Initially, we set up some vars we will need later:
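In the demo this boils down to something like the following sketch; the plain object below stands in for the THREE.Vector3 the real page uses (and the ReusableRay instance would be created here too), so the variable shapes are assumptions:

```javascript
// Shared picking state: `clickInfo` is written by the mouse handler
// and read by the render loop.
var clickInfo = {
  x: 0,
  y: 0,
  userHasClicked: false
};

// Will hold the ray direction. In the real page this is a
// THREE.Vector3; a plain object keeps this sketch self-contained.
var directionVector = { x: 0, y: 0, z: 0 };
```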

The first thing we do is record the mouse event. We don’t react to it right away, because we don’t want to do anything outside of the render loop – this way we keep control over what happens when. So let’s just store the coordinates and set a flag that we can look up later:
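A minimal handler could look like this (the listener registration is guarded so the snippet also runs outside a browser; the function name is my own, not necessarily the demo’s):

```javascript
var clickInfo = { x: 0, y: 0, userHasClicked: false };

// Store the click coordinates and raise the flag; the render loop
// will evaluate it on the next frame.
function onDocumentMouseUp(evt) {
  evt.preventDefault();
  clickInfo.userHasClicked = true;
  clickInfo.x = evt.clientX;
  clickInfo.y = evt.clientY;
}

// Guarded so the snippet can be run in a non-browser environment.
if (typeof document !== 'undefined') {
  document.addEventListener('mouseup', onDocumentMouseUp, false);
}
```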

Next, in the render loop, we check whether a click has happened. If so, we start the whole ray casting thingy. To define the ray, we need two vectors: one representing the start point of the ray, and one representing its direction. The first is easy: it’s the camera position. The second is more interesting. We start by translating the mouse coordinates into something that’s independent of screen size and assigning them to the direction vector:
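The conversion itself is plain arithmetic; written as a standalone function (my naming, not the demo’s):

```javascript
// Map pixel coordinates to normalized device coordinates in [-1, 1].
// The y axis is flipped because screen coordinates grow downwards
// while NDC y grows upwards. z = 0.5 places the point between the
// near and far clipping planes before unprojecting.
function toNormalizedDeviceCoords(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1,
    z: 0.5
  };
}
```

In the render loop, these values would be assigned to the direction vector (a THREE.Vector3 in the demo).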

Now a little bit of math magic happens. The vector currently points in the right direction only as long as the camera’s position and view direction never change. So we need to modify it to take these into account; I can’t really explain why this works, you’ll have to trust me on this one:
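In three.js r49 this is a single call to THREE.Projector (the fragment assumes `projector`, `directionVector` and `camera` were created during setup):

```javascript
// Transform the point from normalized device coordinates back into
// world space, taking the camera's current position, orientation and
// projection into account. This is the "magic" step.
projector.unprojectVector(directionVector, camera);
```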

OK, now we’ve got a vector that describes the direction correctly, but if you inspect it, you’ll find that it contains some crazy numbers. Before passing it to the ray caster class and firing off the ray, we need to normalize it to unit length, so that each component ranges from -1 to 1:
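In r49 this is `directionVector.subSelf(camera.position).normalize()` – subtract the camera position, then scale to length 1. The same math as a library-free sketch (my own helper, not the demo’s code):

```javascript
// Turn a world-space point into a unit direction as seen from the
// camera: subtract the camera position, then divide by the length.
// After this, every component lies between -1 and 1.
function toUnitDirection(point, cameraPosition) {
  var dx = point.x - cameraPosition.x;
  var dy = point.y - cameraPosition.y;
  var dz = point.z - cameraPosition.z;
  var length = Math.sqrt(dx * dx + dy * dy + dz * dz);
  return { x: dx / length, y: dy / length, z: dz / length };
}
```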

That’s it! We can now ask the ray class for intersections, and it will report back all meshes, particles and objects that have meshes as first-level children. Intersections are ordered by distance, so in this case we only need the first one. Each intersection has three properties: point, face and object. point contains a vector describing the exact point in space where the ray intersected the object, face contains the face that was hit, and object is the original object that was hit – the very object we added to the scene earlier with scene.add(/* ... */).
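Put together, the lookup at the end of the render loop reads roughly like the fragment below. It is written against three.js r49’s built-in Ray for illustration; the demo uses the ReusableRay class in the same way, but treat the exact method names as assumptions:

```javascript
// Cast the ray from the camera along the computed unit direction.
// intersectObjects walks the given objects and returns hits sorted
// by distance, nearest first.
var ray = new THREE.Ray(camera.position, directionVector);
var intersects = ray.intersectObjects(scene.children);

if (intersects.length > 0) {
  var nearest = intersects[0];
  // nearest.point  -- world-space position where the ray hit
  // nearest.face   -- the face that was hit
  // nearest.object -- the object originally passed to scene.add()
}

clickInfo.userHasClicked = false; // click handled, wait for the next one
```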

Done! If there’s anything unclear, don’t hesitate and let me know. Thanks!
