Posts Categorized: Experiments in Web
- Or: Navigate a Website on a Computer Screen in a Website on a Computer Screen -
Jerome Etienne's post on HTML elements in a WebGL environment really caught my interest. What I wanted to do was slightly different, though: I wanted a fully accessible website displayed on the screen of a 3D computer model (creating a virtual 3D office in a browser, with an interactive computer, phone, stereo and such, is an old dream of mine).
Jerome's post was a good start, and he created tQuery plugins to do it. After reading the source and understanding the concept, the rest was a pretty easy and straightforward process, except for two issues I ran into. And here's the result: browse this website in a 3D environment (requires Chrome and features audio, so grab some headphones and turn up the volume).
Just recently I learned about the Web Speech API, which is already available in Chrome 25. It takes input from the computer's microphone, performs speech recognition and returns the results – without you having to do anything yourself. You just start the service, say "Hello" and get back a result containing the string "hello". I immediately got nerd-sniped and decided I needed to add speech recognition to decoupled-input to be able to issue voice commands in a game, like "Arm cannon", "Fire missile" or "Activate autopilot". There's an example page over here where you can see it in action. Just press "V" to activate recognition and say one of "Full speed", "Slow" or "Stop" to control the car's speed; you get a green confirmation text when the command has been recognized. While this is seriously awesome, it also has some downsides. Let's go into some details.
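To make the idea concrete, here is a minimal sketch of how such voice commands could be wired up. The command list, speed values and `matchCommand` helper are hypothetical (not the actual decoupled-input API); `webkitSpeechRecognition` is the prefixed constructor Chrome 25 ships.

```javascript
// Hypothetical command table: transcript -> speed value.
var COMMANDS = { 'full speed': 1.0, 'slow': 0.3, 'stop': 0.0 };

// Pure helper: match a recognized transcript against the known commands.
function matchCommand(transcript) {
  var text = transcript.toLowerCase().trim();
  return COMMANDS.hasOwnProperty(text) ? COMMANDS[text] : null;
}

// Browser-only wiring; guarded so the sketch is harmless elsewhere.
if (typeof webkitSpeechRecognition !== 'undefined') {
  var recognition = new webkitSpeechRecognition();
  recognition.continuous = true;
  recognition.onresult = function (event) {
    // look only at the newest result in the session
    var result = event.results[event.results.length - 1];
    var speed = matchCommand(result[0].transcript);
    if (speed !== null) {
      console.log('setting speed to', speed);
    }
  };
  recognition.start();
}
```

In a real game you would debounce repeated results and probably check the result's confidence value before acting on it.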
The Ascent project had a bunch of upgrades these days, including better loading logic, a radar, pointer lock support and others. But the most important one certainly was decoupled input. I found it important enough to extract and polish it and create a separate GitHub repo for it. So, let me introduce you to decoupled-input!
UPDATE: This post was written for three.js revision 49. An updated post for newer revisions of three.js is here: Mouse-Picking Collada Models with three.js, Part II.
Finding a Collada model that has been “clicked on” in a scene seems to be a common issue, and I’m getting quite some emails asking me about details. So here’s a how-to with annotated code.
The whole “finding an object” thing requires ray casting. When the user clicks anywhere on the screen, we’ll project the event coordinates into the 3D space so that we have a virtual “view-path” from the center of our view into the direction where the click took place – like our own eye does. We then follow that line until we find an intersection with an object in the scene. That line is the ray we are “casting”. OK, let’s go!
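The projection step above can be sketched like this. Note this post targets revision 49, where the projection went through `THREE.Projector` and `THREE.Ray`; the sketch below uses the `THREE.Raycaster` API of newer revisions, and `camera` and `scene` are assumed globals.

```javascript
// Convert mouse event coordinates to normalized device coordinates (NDC),
// the [-1, 1] range used when unprojecting a screen point into 3D space.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1  // screen y grows downwards, NDC y upwards
  };
}

// Browser/three.js wiring (guarded so the pure helper above stands alone).
if (typeof THREE !== 'undefined') {
  document.addEventListener('click', function (event) {
    var ndc = toNDC(event.clientX, event.clientY,
                    window.innerWidth, window.innerHeight);
    var raycaster = new THREE.Raycaster();
    // build the "view-path" from the camera through the click point
    raycaster.setFromCamera(new THREE.Vector2(ndc.x, ndc.y), camera);
    // follow that ray and collect intersections, nearest first
    var hits = raycaster.intersectObjects(scene.children, true);
    if (hits.length) {
      console.log('clicked', hits[0].object.name);
    }
  });
}
```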
As I mentioned before, the THREE.js Ray class is not very well optimized regarding memory usage. Of course, you should avoid expensive operations like ray casting where possible, but it seems to be the cheapest way to detect whether a given point is inside a mesh. So you will eventually do a lot of ray casting in your render loop.
This is the case for Ascent, as I'm implementing shooting at things there. The whole "shooting at things" implementation can be optimized a lot, but that's something for another post; for now, we just want a less memory-hungry ray caster class.
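The core idea behind such a class can be sketched in plain JS: instead of allocating fresh vectors on every cast, reuse preallocated scratch objects. `Vec3` and the `makeRay*` names here are illustrative stand-ins, not the actual three.js API.

```javascript
// Minimal stand-in for THREE.Vector3.
function Vec3() { this.x = 0; this.y = 0; this.z = 0; }
Vec3.prototype.set = function (x, y, z) {
  this.x = x; this.y = y; this.z = z;
  return this;
};

// Naive: two fresh vectors per cast. Called many times per frame in a
// render loop, this keeps the garbage collector busy.
function makeRayNaive(ox, oy, oz, dx, dy, dz) {
  return {
    origin: new Vec3().set(ox, oy, oz),
    direction: new Vec3().set(dx, dy, dz)
  };
}

// Reusing: scratch objects are allocated once and overwritten in place,
// so repeated casts allocate nothing.
var _origin = new Vec3();
var _direction = new Vec3();
var _ray = { origin: _origin, direction: _direction };

function makeRayReusing(ox, oy, oz, dx, dy, dz) {
  _origin.set(ox, oy, oz);
  _direction.set(dx, dy, dz);
  return _ray;
}
```

The trade-off: the reused ray is only valid until the next cast, so callers must copy anything they want to keep.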
One of the main things missing in Ascent is being able to shoot at things. To solve this, the first question is: what to shoot? OK, there are rockets. But what about the almost-unlimited basic weaponry? Many space games feature laser guns. But lasers, if implemented the way they'd actually look and work, are boring: just long lines, going on forever; not the fancy thing you know from Star Wars. You could also implement them to work like railguns – still boring. But I remember that one of the coolest things about WWI flight simulators was firing bullets at enemies with the on-board cannon. And I'm sure one can find a satisfactory explanation for why firing bullets in space is a reasonable thing to do. The only thing that feels odd to me is that those bullets will keep on traveling forever until they hit something…
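A common way out of the "bullets travel forever" problem is to give each bullet a maximum lifetime and remove it once that expires. A hypothetical sketch (not from Ascent; the lifetime value and bullet fields are assumptions):

```javascript
var MAX_LIFETIME = 3; // seconds; an assumed value

// Advance all bullets by dt seconds, dropping expired ones.
function updateBullets(bullets, dt) {
  // iterate backwards so splicing doesn't skip elements
  for (var i = bullets.length - 1; i >= 0; i--) {
    var b = bullets[i];
    b.age += dt;
    if (b.age >= MAX_LIFETIME) {
      bullets.splice(i, 1); // expired: remove from the scene's bullet list
      continue;
    }
    // simple straight-line motion; in space there's nothing to slow it down
    b.x += b.vx * dt;
    b.y += b.vy * dt;
    b.z += b.vz * dt;
  }
}
```

For many bullets per frame, an object pool would beat `splice`, but the idea is the same.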
During the last month, I didn’t have much time to work on Ascent, mainly because of an awesome three week vacation in Italy. But there’s one tiny update I made that I find pretty important: moving from particles as scene “background” to a skybox.
Quite a while ago, I started RavenJS. It was awesome fun. Seeing it grow and, in the end, being able to walk the landscape was an amazing experience. It triggered sweet memories from the past, and also showed what is possible today. But the whole project sadly has many downsides, the biggest being that I will never be able to put it online anywhere. I still believe it would have been an awesome opportunity; if I were the original publishers, I would have jumped at it, created a freemium model around it, and spent the rest of my days wondering where to put all the money. Seriously: an MMORPG without any plugin or executable to download – just open your browser, enter your (OpenID) credentials and play? With decent 3D graphics? Plus, all that in a famous setting? I can hardly imagine how successful such a thing would be. (Think about it a little more – virtually no barrier to play, payment and ID providers already in place, pushing updates without downloads, and so on…)
Anyway, I don't want to get carried away – I guess you get what I mean. So I decided to start something else. Something that would be easier to realize for a non-3D guy like me. And something using three.js, as I'd always been using GLGE for my WebGL experiments (RavenJS was built with GLGE, too). And, most importantly, something that could be open source and live on GitHub. Something that everybody could play, download and fork. Something people could contribute to, modify, extend and make better.
Unfortunately, three.js' Ray class currently doesn't detect intersections with imported Collada objects – unfortunate because I heavily rely on imported models and I'm too lazy to do the detection manually.
But the good news is that the Collada objects carry all the information needed for the ray caster to work properly; you just need to do some manual tweaking.
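The usual shape of that tweaking: a loaded Collada document is a node tree, and the ray caster needs the actual mesh nodes, not the wrapper groups, so you walk the tree and collect them. A generic sketch of the traversal, with plain objects standing in for three.js nodes (in three.js the nodes would be `THREE.Mesh` instances, and you'd also make sure each geometry has its bounds computed, e.g. via `geometry.computeBoundingSphere()`):

```javascript
// Depth-first walk of a node tree, collecting every node flagged as a mesh.
function collectMeshes(node, out) {
  out = out || [];
  if (node.isMesh) {
    out.push(node);
  }
  (node.children || []).forEach(function (child) {
    collectMeshes(child, out);
  });
  return out;
}
```

The collected list is what you'd then hand to the intersection test instead of the top-level Collada scene object.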