Ever heard of the Microsoft Surface? The huge touch-screen device that's basically an overgrown iPad or tablet. If you still haven't figured out what it is, maybe the pictures and video below will refresh your memory.
Wikipedia defines the Microsoft Surface as "a commercial computing platform that enables people to use touch and real world objects to share digital content at the same time. The Surface platform consists of software and hardware products that combine vision based multitouch PC hardware, 360-degree multiuser application design, and Windows software to create a natural user interface (NUI) for users."
Now, the Microsoft Surface works on multitouch functionality, which uses an LED-based vision system beneath the surface to sense motion across the screen. Sounds very complicated, undecipherable and costly, right?
As of Dec 2011, the Surface cost around $7,600 plus taxes, so roughly $8,000, i.e., 8000 × 55 = Rs. 4,40,000 (INR). That's almost 11 times costlier than an Apple iPad, and 20–30 times costlier than an average Android tablet.
For us geeks, this device (assuming we have that kind of ready cash) is an awesome way to flaunt our geeky skills. But from a normal consumer's perspective, how would it help?
1) It can be used in schools and colleges to make study material more vibrant and interactive. 2) Engineering and marketing firms can use it to draw and present ideas and innovations,
and the list can go on.
But is it affordable for an individual or an organisation to buy such a device, which, to be honest, depreciates in value very quickly?
But what if I told you that you can make a similar surface at home with just the following set of materials:
1) a webcam, 2) a glass panel, 3) white paper, 4) a cardboard box, 5) a small projector.
There’s also a community dedicated to providing open-source multitouch software for such DIY devices. You can follow the links below to make the device at home!
The links above might get too technical, so let me explain in layman's terms.
This device uses a concept called optical-based touch surfaces, wherein a camera is used to capture images and process them. The camera does this by capturing the disruption in light created by our fingers when they move over the glass panel, and it transmits these frames to the computer. The open-source NUI software then processes and interprets these disruptions and uses them to control the computer. Once the gestures have been deciphered, the software projects the response back onto the glass screen via the projector, giving the impression of a touch screen. Neat, right!?
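To make the processing step less abstract, here is a toy sketch of what the NUI software does with each camera frame: bright spots where fingers scatter light are thresholded and grouped into blobs, and each blob's centroid becomes a touch point. This is purely illustrative (the function name, threshold value and the tiny hand-made "frame" are my own inventions); a real build would read webcam frames through a vision library rather than hand-written lists.

```python
# Toy version of the optical-touch pipeline: threshold a brightness frame,
# flood-fill connected bright regions ("finger blobs"), report centroids.
# A frame here is just a 2D list of 0-255 brightness values.

def find_touches(frame, threshold=200):
    """Return (row, col) centroids of bright finger blobs in a frame."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected blob of bright pixels.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and frame[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # The blob's centroid is reported as one touch point.
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                touches.append((cy, cx))
    return touches

# A 5x8 fake frame with two bright "fingers" on a dark background.
frame = [
    [10,  10,  10, 10, 10, 10,  10, 10],
    [10, 255, 255, 10, 10, 10,  10, 10],
    [10, 255, 255, 10, 10, 10, 240, 10],
    [10,  10,  10, 10, 10, 10, 240, 10],
    [10,  10,  10, 10, 10, 10,  10, 10],
]
print(find_touches(frame))  # -> [(1.5, 1.5), (2.5, 6.0)]
```

The real open-source packages do essentially this, plus camera calibration and gesture tracking across frames, and then hand the touch coordinates to whatever application is being projected.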
I have personally tried to make a multitouch pad, which basically functions on the same fundamentals stated above but does not have a projector to project the gestures; instead, the output appears on your laptop screen.