Hacking the Kinect (Technology in Action)
Jeff Kramer, Florian Echtler
Format: PDF / Kindle (mobi) / ePub
Hacking the Kinect is the technogeek’s guide to developing software and creating projects involving the groundbreaking volumetric sensor known as the Microsoft Kinect. Microsoft’s release of the Kinect in the fall of 2010 startled the technology world by providing a low-cost sensor that can detect and track body movement in three-dimensional space. The Kinect set new records for the fastest-selling gadget of all time. It has been adopted worldwide by hobbyists, robotics enthusiasts, artists, and even some entrepreneurs hoping to build businesses around the technology.
Hacking the Kinect introduces you to programming for the Kinect. You’ll learn to set up a software environment, stream data from the Kinect, and write code to interpret that data. The progression of hands-on projects in the book leads you even deeper into an understanding of how the device functions and how you can apply it to create fun and educational projects. Who knows? You might even come up with a business idea.
- Provides an excellent source of fun and educational projects for a tech-savvy parent to pursue with a son or daughter
- Leads you progressively from making your very first connection to the Kinect through mastery of its full feature set
- Shows how to interpret the Kinect data stream in order to drive your own software and hardware applications, including robotics applications
Combining Frame Differencing with Background Subtraction

One issue you may notice with frame differencing is that it finds differences in a moving object’s position from one frame to the next. See Figure 4-13 for an example.

Figure 4-13. Image of a double image with hand

Technically, the double image in Figure 4-13 is correct, as motion occurred where the hand was in the previous frame (it left that area) and where it is in the current frame (it entered that area).
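The fix this chapter works toward can be sketched in a few lines. The snippet below is a minimal illustration using OpenCV rather than the book’s own listing; the function name motionMask and the threshold value of 30 are assumptions for the example. It keeps only pixels that both changed between consecutive frames and differ from a precaptured background, which removes the trailing “ghost” at the hand’s previous position.

// Minimal sketch (not the book's listing): combining frame differencing
// with background subtraction using OpenCV. Assumes 8-bit grayscale frames.
#include <opencv2/opencv.hpp>

cv::Mat motionMask(const cv::Mat &background,  // precaptured empty scene
                   const cv::Mat &previous,    // previous frame
                   const cv::Mat &current)     // current frame
{
    cv::Mat diffFrames, diffBackground, combined;

    // Frame differencing: pixels that changed between consecutive frames.
    cv::absdiff(previous, current, diffFrames);

    // Background subtraction: pixels that differ from the empty scene.
    cv::absdiff(background, current, diffBackground);

    // Keep only pixels flagged by both tests, then threshold to a binary mask.
    cv::bitwise_and(diffFrames, diffBackground, combined);
    cv::threshold(combined, combined, 30, 255, cv::THRESH_BINARY);
    return combined;
}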
in the first (bgcloud). Next, we push back the points that are referred to by those indices into our new cloud (fgcloud) and display it. This results in the three output clouds shown in Figure 6-5 (background), Figure 6-6 (full scene), and Figure 6-7 (foreground).

Figure 6-5. The background of the scene, precaptured

Figure 6-6. The full scene as a point cloud

Figure 6-7. Foreground, after subtraction
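A condensed sketch of that step is shown below. It is not the book’s listing; the octree-based change detector, the 1 cm voxel resolution, and the helper name extractForeground are assumptions, but it follows the flow described above: obtain the indices of scene points that are not in the background, then push the referenced points back into fgcloud.

// Sketch of foreground extraction with PCL, under the assumptions noted above.
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/octree/octree.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
extractForeground(const pcl::PointCloud<pcl::PointXYZ>::Ptr &bgcloud,
                  const pcl::PointCloud<pcl::PointXYZ>::Ptr &scene)
{
    // Compare the live scene against the precaptured background, voxel by voxel.
    pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> octree(0.01f); // 1 cm voxels (assumed)
    octree.setInputCloud(bgcloud);
    octree.addPointsFromInputCloud();
    octree.switchBuffers();                 // background becomes the reference buffer
    octree.setInputCloud(scene);
    octree.addPointsFromInputCloud();

    // Indices of scene points that occupy voxels absent from the background.
    std::vector<int> fg_indices;
    octree.getPointIndicesFromNewVoxels(fg_indices);

    // Push back the points referred to by those indices into the new cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr fgcloud(new pcl::PointCloud<pcl::PointXYZ>);
    for (size_t i = 0; i < fg_indices.size(); ++i)
        fgcloud->points.push_back(scene->points[fg_indices[i]]);
    fgcloud->width  = static_cast<uint32_t>(fgcloud->points.size());
    fgcloud->height = 1;
    return fgcloud;
}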
dots is definitely a big advancement because it allows for the use of a higher-powered laser.

Figure 2-2. Structured light pattern from the IR emitter

The depth sensing works on the principle of structured light: a known pseudorandom pattern of dots is projected by the IR emitter. These dots are recorded by the IR camera and then compared to the known pattern. Any disturbances from that pattern correspond to variations in the surface and can be detected as being closer to or farther from the sensor. This approach creates
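The conversion from a dot’s measured shift to a depth value is straightforward triangulation. The toy calculation below is only an illustration under assumed numbers (focal length, baseline, and reference-plane distance); it is not the Kinect’s internal, proprietary algorithm.

// Illustrative sketch only: how a projected dot's shift translates to depth by
// triangulation. All constants below are assumptions chosen for the example.
#include <cstdio>

int main()
{
    const double f  = 580.0;   // IR camera focal length in pixels (assumed)
    const double b  = 0.075;   // emitter-to-camera baseline in meters (assumed)
    const double z0 = 2.0;     // distance of the reference plane in meters (assumed)

    // A dot observed d pixels from its reference position implies a depth z
    // satisfying d = f * b * (1/z - 1/z0); closer surfaces give larger shifts.
    double d = 10.0;                             // measured shift in pixels
    double z = 1.0 / (d / (f * b) + 1.0 / z0);
    std::printf("dot shifted %.1f px -> depth %.3f m\n", d, z);
    return 0;
}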
glutKeyboardFunc(do_glutKeyboard);
glutMotionFunc(do_glutMotion);
glutMouseFunc(do_glutMouse);

// RGB color map uploaded as a 1D OpenGL texture
glGenTextures(1, &gl_colormap_tex);
glBindTexture(GL_TEXTURE_1D, gl_colormap_tex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
create_color_map();
glTexImage1D(GL_TEXTURE_1D, 0, 3, color_map_size, 0, GL_RGB, GL_UNSIGNED_BYTE, color_map);

// The Kinect RGB stream is 640x480
rgb_tex_width = 640;
rgb_tex_height = 480;
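The listing calls a create_color_map() helper to fill the color_map buffer before it is uploaded. A hypothetical version is sketched below; the 256-entry size and the blue-to-red gradient are assumptions for illustration, not the book’s actual implementation.

// Hypothetical sketch of create_color_map(): fill the global color_map buffer
// with a simple blue-to-red gradient for the 1D texture. Size and colors are assumed.
#define COLOR_MAP_SIZE 256
static unsigned char color_map[COLOR_MAP_SIZE * 3];
static int color_map_size = COLOR_MAP_SIZE;

static void create_color_map(void)
{
    for (int i = 0; i < COLOR_MAP_SIZE; i++) {
        color_map[3 * i + 0] = (unsigned char)i;          // red rises across the map
        color_map[3 * i + 1] = 0;                         // no green
        color_map[3 * i + 2] = (unsigned char)(255 - i);  // blue falls across the map
    }
}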
A common use of 3-D information is robot navigation. The structure of the room can be used to plan a route for a robot that avoids obstacles. To plan the route, we need to know both the structure of the environment and the robot’s location within it. If we know the environment, we can easily locate the robot. Likewise, if we know the position and trajectory of the robot, we can map the environment’s structure. However, estimating both simultaneously is a complicated problem. In robotics, this is known as the simultaneous localization and mapping (SLAM) problem.
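The chicken-and-egg nature of the problem can be seen even in one dimension. The toy example below is an assumption made for this description, not the book’s code: with a known map we can localize, and with a known pose we can map, but with neither known the single range measurement leaves two unknowns.

// Toy 1D illustration of why localization and mapping are easy separately
// but hard simultaneously. Values are invented for the example.
#include <cstdio>

int main()
{
    const double landmark = 3.0;      // true landmark position on a 1D line (the "map")
    const double measurement = 1.8;   // range measured from robot to landmark

    // Localization with a known map: the pose follows directly from the measurement.
    double pose = landmark - measurement;
    std::printf("known map  -> robot at %.2f m\n", pose);

    // Mapping with a known pose: the landmark position follows directly as well.
    double mapped = pose + measurement;
    std::printf("known pose -> landmark at %.2f m\n", mapped);

    // With neither known, the single relation pose + range = landmark has two
    // unknowns, which is why SLAM must estimate both jointly.
    return 0;
}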