That means, I want a build to be dynamic during build time, not during runtime (one could also call this a configurable build). If the resulting build should also be an AMD module, or you want to keep the single modules as modules, this can easily be achieved with the AMD feature plugin, which allows configuring a build at runtime as well as at build time. But what I wanted this time was a single UMD-wrapped module with no define/require calls inside it, so that a user can also just reference the build from a script tag (i.e. without depending on an AMD loader such as Require.js).
- Or: Navigate a Website on a Computer Screen in a Website on a Computer Screen -
Jerome Etienne’s post on HTML elements in a WebGL environment really caught my interest. What I wanted to do was slightly different, though: I wanted a fully accessible website displayed on the screen of a computer 3D model (creating a virtual 3D office in a browser with an interactive computer, phone, stereo and such is an old dream of mine).
Jerome’s post was a good start, and he created tQuery plugins to do it. After reading the source and understanding the concept, it was a pretty easy and straightforward process, except for two issues I ran into. And here’s the result: browse this website in a 3D environment (requires Chrome and features audio, so grab some headphones and turn up the volume).
It’s been some time since dojox/storage, and a lot of things changed since then. Most notably: Browsers now have IndexedDB, and Dojo now has the dojo/store API that widgets can directly work with.
The dojo/store API was built with IndexedDB and offline in mind, has a nice API and allows asynchronism, meaning that it can return Promises instead of values. The only thing missing was a persistence layer that would take data via that API and store it in the user’s browser, e.g. for offline availability.
And that’s what Storehouse is: A persistent object store implementing the dojo/store API.
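To give an idea of the interface: the dojo/store API is a small contract of methods like `get`, `put`, `remove` and `query`, where asynchronous implementations return promises instead of plain values. Here's a minimal in-memory sketch of that shape – not Storehouse's actual code, and using native promises instead of Dojo's Deferreds:

```javascript
// Minimal in-memory sketch of a dojo/store-style object store.
// A real implementation like Storehouse persists the data
// (e.g. to IndexedDB) instead of keeping it in a plain object.
function MemoryStore() {
  this.data = {};
  this.idProperty = 'id';
}

MemoryStore.prototype.get = function (id) {
  // Async stores return a promise instead of the value itself
  return Promise.resolve(this.data[id]);
};

MemoryStore.prototype.put = function (object) {
  var id = object[this.idProperty];
  this.data[id] = object;
  return Promise.resolve(id);
};

MemoryStore.prototype.remove = function (id) {
  delete this.data[id];
  return Promise.resolve(true);
};
```

A widget that talks to the dojo/store API doesn't care whether values come back synchronously or as promises, which is exactly what makes a persistence layer like Storehouse a drop-in data source.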
Just recently I learned about the Web Speech API, which is already available in Chrome 25. It takes input from the computer’s microphone, performs speech recognition and returns the results to you – without you having to do anything yourself. You just start the service, say “Hello” and get back a result containing the string “hello”. I immediately got nerd sniped and decided I needed to add speech recognition to decoupled-input to be able to issue voice commands in a game, like “Arm cannon”, “Fire missile” or “Activate autopilot”. There’s an example page over here where you can see it in action. Just press “V” to activate recognition and say one of “Full speed”, “Slow” or “Stop” to control the car’s speed; you get a green confirmation text when the command has been recognized. While this is seriously awesome, it also has some cons. Let’s go into the details.
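The basic wiring looks roughly like this (a sketch: the command matcher is my own simplified illustration, not decoupled-input's code, and `webkitSpeechRecognition` is Chrome's prefixed constructor):

```javascript
// Map recognized transcripts to known commands. This matcher is a
// simplified illustration; a real one would be more forgiving.
function matchCommand(transcript) {
  var commands = ['full speed', 'slow', 'stop'];
  var text = transcript.toLowerCase().trim();
  return commands.indexOf(text) !== -1 ? text : null;
}

// Browser-only wiring, guarded so the matcher above stays testable
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  var recognition = new webkitSpeechRecognition();
  recognition.onresult = function (event) {
    // Each result holds one or more alternatives; take the best one
    var transcript = event.results[event.results.length - 1][0].transcript;
    var command = matchCommand(transcript);
    if (command) {
      console.log('Recognized command: ' + command);
    }
  };
  recognition.start();
}
```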
Note: This is a tutorial about working with IDBWrapper, a wrapper for the IndexedDB client-side storage API. Creating indexes and running queries only works with the new version of IDBWrapper, so if you already have IDBWrapper, make sure you fetch a fresh copy from GitHub.
All examples in this tutorial follow the “Basic Index Example”, which uses IDBWrapper to store customer data such as name and age. You can find it in the examples folder, and you can (and should) also open it in your browser: it has a “query” section that lets you set query options via inputs and dropdowns and immediately see the results of your settings.
While the last part of the tutorial covered the basic CRUD methods get/getAll/put/delete, this part is about the real thing: running queries against the store.
To do so, we need to get familiar with two things: Creating indexes, and creating keyRanges.
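In code, the two pieces look roughly like this (a sketch based on IDBWrapper's documented options; the store and field names mirror the customer example, and it only actually runs in a browser with IndexedDB):

```javascript
// A store with an index on the "age" property. Indexes must be
// declared up front, when the store is created or upgraded.
function setupCustomerStore(onReady) {
  var store = new IDBStore({
    storeName: 'customers',
    keyPath: 'id',
    autoIncrement: true,
    indexes: [
      { name: 'age', keyPath: 'age', unique: false, multiEntry: false }
    ],
    onStoreReady: function () {
      onReady(store);
    }
  });
  return store;
}

// Querying via the index plus a keyRange,
// e.g. all customers aged 30 and up:
function queryAdults(store, onSuccess) {
  store.query(onSuccess, {
    index: 'age',
    keyRange: store.makeKeyRange({ lower: 30 })
  });
}

// Guarded: IDBStore and IndexedDB only exist in the browser
if (typeof window !== 'undefined' && 'indexedDB' in window) {
  setupCustomerStore(function (store) {
    queryAdults(store, function (results) {
      console.log(results);
    });
  });
}
```

The keyRange is what turns an index from a plain sort order into a query tool: it restricts which index entries the cursor visits.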
The Ascent project had a bunch of upgrades these days, including better loading logic, a radar, pointer lock support and others. But the most important one certainly was decoupled input. I found it important enough to extract and polish it and create a separate GitHub repo for it. So, let me introduce you to decoupled-input!
UPDATE: This post was written for three.js revision 49. An updated post for newer revisions of three.js is here: Mouse-Picking Collada Models with three.js, Part II.
Finding a Collada model that has been “clicked on” in a scene seems to be a common issue, and I’m getting quite some emails asking me about details. So here’s a how-to with annotated code.
The whole “finding an object” thing requires ray casting. When the user clicks anywhere on the screen, we project the event coordinates into 3D space so that we get a virtual “view path” from the center of our view into the direction where the click took place – much like our own line of sight. We then follow that line until it intersects an object in the scene. That line is the ray we are “casting”. OK, let’s go!
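The coordinate projection is the part that trips people up, and it is plain math: the click position has to be converted to normalized device coordinates first. A sketch (the commented three.js part uses the newer Raycaster API rather than the r49 code this post is about):

```javascript
// Convert mouse event coordinates to normalized device coordinates
// (NDC), where both axes run from -1 to +1 and y points up.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1
  };
}

// With newer three.js versions, the rest looks like this
// (assumes `camera` and `scene` are already set up):
//
//   var ndc = toNDC(event.clientX, event.clientY,
//                   window.innerWidth, window.innerHeight);
//   var raycaster = new THREE.Raycaster();
//   raycaster.setFromCamera(new THREE.Vector2(ndc.x, ndc.y), camera);
//   var intersects = raycaster.intersectObjects(scene.children, true);
//   if (intersects.length) { /* intersects[0].object was clicked */ }
```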
As I mentioned before, the THREE.js Ray class is not very well optimized regarding memory usage. Of course, you should avoid expensive operations like ray casting where you can, but it still seems to be the cheapest way to detect whether a given point is inside a mesh. So you will eventually end up doing a lot of ray casting in your render loop.
This is the case for Ascent, as I’m implementing shooting at things there. The whole “shooting at things” implementation can be optimized a lot, but that’s a topic for another post; for now, we just want a less memory-hungry ray caster class.
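The core idea of such an optimization is to preallocate the working objects once and reuse them on every call, instead of allocating fresh vectors per cast. A generic sketch of the pattern (my own illustration, not three.js internals):

```javascript
// Scratch object allocated once, at module level, and reused on
// every call – so a render loop calling this 60 times a second
// creates no garbage for the GC to collect.
var scratch = { x: 0, y: 0, z: 0 };

function directionTo(origin, target) {
  scratch.x = target.x - origin.x;
  scratch.y = target.y - origin.y;
  scratch.z = target.z - origin.z;
  // Caller must copy the result if it needs to keep it around,
  // since the next call will overwrite the same object.
  return scratch;
}
```

The trade-off is the usual one with scratch objects: the return value is only valid until the next call, which is fine inside a render loop but easy to get wrong elsewhere.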