I’ve been working on a new concept for an image viewing / searching application. For lack of a better name, it’s called Finder for now. The idea is the following: say you want to find an image on your system; you know it’s mainly blue (a shot of some sky), but you don’t remember its name, size, location, or when you got it. How can you go about looking for it?

You fire up Finder, let it loose on your system, and ask it to cluster images by color. The end result is a big map of all your images, zoomed out, such that every corner of the map represents a color, and as you move from one corner toward another, you see the colors converging into a gradient. So, naturally, you’d go to the blue area, use your mouse wheel to zoom in and out, and “throw” the images around (using a kinetic energy panning approach) until you start finding something that resembles the image you’re looking for. You can then keep zooming and panning until you find your target image, at which point you can pick it up and use it.

Right now, I’ve implemented the kinetic panning area, an LRU multi-layered cache system for the images, and the (huge) image grid widget that will hold those thousands and thousands of images (rough sketches of a few of these pieces are at the end of this post). Some of the major challenges at this point are handling the vast amount of data thrown at the application: scrolling it around, and loading / unloading images. The next major hurdle to jump over is going to be zooming in and out fast (a problem that might be solved with mipmaps, but that might require OpenGL, something I’m trying to avoid).

UPDATE: Video here.
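To make the “kinetic energy” panning a little more concrete, here’s a minimal sketch of the technique in C++. This isn’t Finder’s actual code; the `KineticPanner` name, the friction constant, and the stop threshold are assumptions for illustration. The idea: when the user releases a drag, the view keeps the pointer’s last velocity and decays it exponentially each frame.

```cpp
#include <cmath>

// Minimal kinetic-panning state: after the user releases a drag, the view
// keeps moving with the pointer's last velocity, slowing down each frame.
struct KineticPanner {
    double vx = 0.0, vy = 0.0;  // current velocity, in pixels/second
    double friction = 4.0;      // decay rate, 1/second (a tuning assumption)

    // Called once per frame with the elapsed time dt (seconds); writes the
    // offset to apply to the viewport this frame into dx/dy.
    void step(double dt, double& dx, double& dy) {
        dx = vx * dt;
        dy = vy * dt;
        double decay = std::exp(-friction * dt);  // exponential slowdown
        vx *= decay;
        vy *= decay;
    }

    // The fling is over once the velocity drops below ~1 pixel/second.
    bool stopped() const { return std::abs(vx) < 1.0 && std::abs(vy) < 1.0; }
};
```

Scaling the decay by dt keeps the “throw” feeling consistent regardless of frame rate; mouse-move events would set vx/vy from pointer deltas, and step() would run every frame after release until stopped().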
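The multi-layered cache is, conceptually, a stack of layers like the one sketched below, each a plain LRU keyed by file path: full-size images in one layer, thumbnails in another, and so on. The `LruLayer` class and the demotion note in the comment are my illustration, not Finder’s code:

```cpp
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// One layer of an LRU image cache: the most-recently-used entries sit at
// the front of a list; when the layer is full, the back (least recently
// used) is evicted. In a multi-layered setup, an eviction here could demote
// the image to a cheaper layer (e.g. full-size RAM -> thumbnail RAM -> disk).
template <typename Image>
class LruLayer {
    size_t capacity_;
    std::list<std::pair<std::string, Image>> items_;  // MRU at the front
    std::unordered_map<std::string,
        typename std::list<std::pair<std::string, Image>>::iterator> index_;

public:
    explicit LruLayer(size_t capacity) : capacity_(capacity) {}

    // Returns the cached image, marking it most-recently-used, or nullptr.
    Image* get(const std::string& path) {
        auto it = index_.find(path);
        if (it == index_.end()) return nullptr;
        items_.splice(items_.begin(), items_, it->second);  // move to front
        return &it->second->second;
    }

    void put(const std::string& path, Image image) {
        if (auto it = index_.find(path); it != index_.end()) {
            it->second->second = std::move(image);          // refresh entry
            items_.splice(items_.begin(), items_, it->second);
            return;
        }
        if (items_.size() >= capacity_) {                   // evict the LRU
            index_.erase(items_.back().first);
            items_.pop_back();
        }
        items_.emplace_front(path, std::move(image));
        index_[path] = items_.begin();
    }
};
```

The list keeps entries in recency order and the map stores iterators into it, so both get() and put() are O(1); when a full-size image falls out of the top layer, only its cheaper representations stick around.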
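Finally, the mipmap idea for fast zooming, kept on the CPU to stay away from OpenGL: precompute progressively halved copies of every image, and when zoomed out, draw from the level closest to the on-screen size instead of filtering the full image each frame. A grayscale-only sketch (the `Level` struct and `buildMipChain` are hypothetical names):

```cpp
#include <cstdint>
#include <vector>

// A grayscale image level; a mipmap chain is a vector of these, each half
// the width and height of the previous one.
struct Level {
    int w, h;
    std::vector<uint8_t> pixels;  // w * h samples, row-major
};

std::vector<Level> buildMipChain(Level base) {
    std::vector<Level> chain;
    chain.push_back(std::move(base));
    while (chain.back().w > 1 && chain.back().h > 1) {
        const Level& src = chain.back();
        Level dst{src.w / 2, src.h / 2, {}};
        dst.pixels.resize(static_cast<size_t>(dst.w) * dst.h);
        for (int y = 0; y < dst.h; ++y)
            for (int x = 0; x < dst.w; ++x) {
                // Average the 2x2 block in the source level (box filter).
                int sum = src.pixels[(2 * y) * src.w + 2 * x]
                        + src.pixels[(2 * y) * src.w + 2 * x + 1]
                        + src.pixels[(2 * y + 1) * src.w + 2 * x]
                        + src.pixels[(2 * y + 1) * src.w + 2 * x + 1];
                dst.pixels[y * dst.w + x] = static_cast<uint8_t>(sum / 4);
            }
        chain.push_back(std::move(dst));
    }
    return chain;
}
```

Since each level is a quarter the size of the one before, the whole chain costs only about a third more memory than the base image, which seems like a fair trade for smooth zooming.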