Kinect SDK Changelog

New in Kinect SDK 2.0.1408.18002 Public Preview (Aug 22, 2014)

  • Windows Store App Support (WinRT)
  • With this release of Kinect for Windows, you can begin developing Kinect-enabled applications that target the Windows Store, and we are incredibly excited to see what people create there. Publishing applications to the Store will be enabled when we launch the 2.0 release of Kinect for Windows later this year. All of the Kinect SDK and sensor functionality is available in this API surface, except for Speech. To view the new WindowsPreview managed API, see Windows Store App (WinRT) Managed Reference. For more information on adding Kinect support to your Visual Studio project, see Creating a Windows Store App that Uses Kinect for Windows SDK.
  • Unity Support:
  • For the first time, the Kinect API set is available in Unity Pro, via a Unity Package. We are excited to be able to offer the platform to our developers. Core APIs for Kinect for Windows are available in this API surface.
  • .NET APIs:
  • The managed API set should feel familiar to developers who have worked with our managed APIs in the past. We know this is one of the fastest development environments available, and many development shops have an existing investment in this space. All of the Kinect SDK and sensor functionality is available in this API surface.
  • Native APIs:
  • Many Kinect applications require the full power and speed of native C++ code. We are excited to share this iteration of the native APIs for Kinect. The form and structure of these APIs are identical to the managed API set, but they give developers access to the full speed of C++. They are a significant departure from the v1.x native APIs and should be significantly easier to use. All of the Kinect SDK and sensor functionality is available in this API surface.
  • Face APIs:
  • Massively extended from v1, the Face APIs provide a wide variety of functionality to enable rich face experiences and scenarios. Developers can detect faces in view of the sensor, align them to 5 unique facial identifiers, and track orientation in real time. With HD Face, the technology provides 94 unique “shape units” per face to create meshes with a very rich likeness to the user. The meshes can be tracked in real time to show rich expressiveness and live tracking of the user’s facial movements and expressions.
  • Kinect Studio:
  • Kinect Studio has had a major rewrite since the v1 days, both to handle the new sensor and to provide users with more customization and control. The new user interface offers flexibility in the layout of the various workspaces and customization of the different views. It is now possible, for example, to compare two 2D or 3D views side by side or to create a custom layout to meet your needs. The separation of the monitoring, recording, and playback streams exposes additional functionality such as file- and stream-level metadata. The timeline features in- and out-points to control what portion of the playback to play; pause points that let you set multiple points at which to suspend a playback; and markers that let you attach metadata to various points in time. This preview also exposes playback looping and additional 2D/3D visualization settings. There is still some placeholder artwork here and there, but the tooltips should guide you along.
  • Release Notes:
  • Recording the color stream is currently unsupported.
  • You may find the 'record' button disabled when connected to the Kinect service. Restarting the Kinect Monitor service usually addresses this issue: from an Administrator Command Prompt, run net stop "Kinect Monitor" followed by net start "Kinect Monitor".
  • Gesture Builder:
  • Introducing Gesture Builder, a gesture detector builder that uses machine-learning and body-frame data to 'define' a gesture. Multiple body-data clips are marked (aka 'tagged') with metadata about the gesture which is then used by a machine-learning trainer during the build step to extract a gesture definition from the body-data clips. The gesture definition can subsequently be used by the gesture detection runtime - called by your application - to detect one or more gestures. While using machine-learning for gesture detection is not for the faint of heart, it offers a path to rapid prototyping. Using vgbview, you can benchmark your gesture definitions without requiring that you write any code.
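To make the tag-then-train workflow above concrete, here is a deliberately naive sketch of the pipeline's shape in C++. Everything in it (the `BodyFrame` fields, the `HandRaiseDetector` threshold "trainer") is hypothetical stand-in code for illustration; the actual Gesture Builder runtime uses real machine-learning models over full body-frame data, not a single averaged threshold.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for one frame of body data: two joint heights.
struct BodyFrame {
    float headY;      // head joint height (meters)
    float handRightY; // right-hand joint height (meters)
};

// A tagged clip pairs each frame with a label: was the gesture happening?
// This is the metadata a clip author marks ('tags') in Gesture Builder.
struct TaggedFrame {
    BodyFrame frame;
    bool gestureActive;
};

// Naive "trained" detector standing in for a machine-learned definition:
// it learns a single threshold (hand height relative to head) by averaging
// over the positively tagged frames.
class HandRaiseDetector {
public:
    void train(const std::vector<TaggedFrame>& clip) {
        float sum = 0.0f;
        std::size_t n = 0;
        for (const TaggedFrame& t : clip) {
            if (t.gestureActive) {
                sum += t.frame.handRightY - t.frame.headY;
                ++n;
            }
        }
        threshold_ = (n > 0) ? sum / static_cast<float>(n) : 0.0f;
    }

    // The "detection runtime": called per frame by the application.
    bool detect(const BodyFrame& f) const {
        return (f.handRightY - f.headY) >= threshold_;
    }

private:
    float threshold_ = 0.0f;
};
```

The point of the real tool is that the training step is done offline from tagged clips, so the application only ships the learned definition and the detection call.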
  • Samples:
  • There is an extensive set of samples available via the SDKBrowser, across a range of languages and frameworks:
  • Audio Basics (D2D, WPF)
  • Audio Basics (IStream) (D2D)
  • Audio Capture-Console (Raw)
  • Body Basics (D2D, HTML, WPF, XAML)
  • Color Basics (D2D, HTML, WPF, XAML)
  • Controls Basics (DX, WPF, XAML)
  • Coordinate Mapping Basics (D2D, HTML, WPF, XAML)
  • Depth Basics (D2D, HTML, WPF, XAML)
  • Infrared Basics (D2D, HTML, WPF, XAML)
  • Speech Basics (D2D, WPF)
  • Note: In order to run the speech samples, you need to install the Speech Runtime. In order to build the speech samples, you need to install the Speech SDK.
  • Kinect for Windows v2 Hand Pointer Gestures Support:
  • If you’d like to enable your applications to be controlled via hand pointer gestures, Kinect for Windows v2 has improved support. See ControlsBasics-XAML, ControlsBasics-WPF, and ControlsBasics-DX for examples of how to enable hand pointer gestures in your applications. This is an evolution of the KinectRegion/KinectUserViewer support that we provided in Kinect for Windows v1.7 and later. KinectRegion and KinectUserViewer are available for XAML and WPF applications. The DirectX support is built on top of a lower-level Toolkit Input component.
  • Audio:
  • The Kinect sensor and SDK provide a best-in-class array microphone and advanced signal processing to create a virtual, software-based microphone that is highly directional and can determine the direction sounds are coming from. This also provides very high-quality input for speech recognition.

New in Kinect SDK 1.8.0.595 (Sep 19, 2013)

  • Kinect Background Removal:
  • The new Background Removal API provides "green screen" capabilities for a single person. The user can be specified using skeleton ID. The BackgroundRemovedColorStream API uses various image processing techniques to improve the stability and accuracy of the player mask originally contained in each depth frame. The stream can be configured to select any single player as the foreground and remove the remaining color pixels from the scene.
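The core idea behind the green-screen effect can be sketched without the SDK: each depth pixel carries a player index, and color pixels that do not belong to the chosen player are made transparent. This is a minimal illustration only; the names below are hypothetical, and the real BackgroundRemovedColorStream additionally performs depth-to-color registration and the edge-refinement image processing described above.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One depth pixel paired with the player index reported for it (0 = no player).
struct DepthPixel {
    std::uint16_t depthMm;
    std::uint8_t playerIndex;
};

// Keep color pixels belonging to the chosen player; make everything else
// transparent. Assumes the depth and color buffers are already registered
// pixel-for-pixel (the real API performs that mapping and smooths the mask).
std::vector<std::uint32_t> removeBackground(
    const std::vector<DepthPixel>& depth,
    const std::vector<std::uint32_t>& colorBgra,
    std::uint8_t chosenPlayer)
{
    std::vector<std::uint32_t> out(colorBgra.size(), 0u); // 0 = fully transparent
    for (std::size_t i = 0; i < depth.size() && i < colorBgra.size(); ++i) {
        if (depth[i].playerIndex == chosenPlayer) {
            out[i] = colorBgra[i]; // foreground pixel survives
        }
    }
    return out;
}
```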
  • Webserver for Kinect Data Streams:
  • New web components and samples give HTML5 applications access to Kinect data for interactions and visualization. This is intended to allow HTML5 applications running in a browser to connect to the sensor through a server running on the local machine. Use this to create kiosk applications on dedicated machines. The webserver component is a template that can be used as-is or modified as needed.
  • Color Capture and Camera Pose Finder for Kinect Fusion:
  • Kinect Fusion provides 3D object scanning and model creation using a Kinect for Windows sensor. With this release, users can scan a scene with the Kinect camera (now optionally also capturing low-resolution color) and simultaneously see, and interact with, a detailed 3D model of the scene. There is also support for recovering the camera pose once tracking has been lost: instead of having to find the very last tracked position or reset the whole reconstruction, you can move the camera close to one of the camera positions used during the initial reconstruction. We also added an API, on both the original depth-reconstruction interface and the new color-reconstruction interface, that complements "ExportVolumeBlock" by importing a volume block. Importing is restricted to the resolution of the created volume.
  • Introducing New Samples:
  • Adaptive UI-WPF: This new sample demonstrates the basics of adaptive UI that is displayed on screen in the appropriate location and size based on the user's height, distance from the screen, and field of view. The sample provides settings for interaction zone boundaries, tracks users and transitions as they move between far range and tactile range.
  • Webserver Basics-WPF: This sample demonstrates how to use the Microsoft.Samples.Kinect.Webserver component to serve Kinect data on a localhost port. The component and sample require .NET 4.5 (Visual Studio 2012) and Windows 8 (or later).
  • Background Removal Basics-D2D, Background Removal Basics-WPF: Demonstrates how to use the KinectBackgroundRemoval API. This is an improved take on the green-screen scenario from the 1.7 Green Screen sample (now renamed Coordinate Mapping Basics).
  • Kinect Fusion Explorer Multi Static Cameras-WPF: Demonstrates integrating multiple static Kinect cameras into the same reconstruction volume, given user-defined transformations for each camera. A new third-person view and basic WPF graphics are also enabled to provide a way for users to visually explore a reconstruction scene during setup and capture.
  • Kinect Fusion Color Basics-D2D, Kinect Fusion Color Basics-WPF: Demonstrates the basics of Kinect Fusion for 3D reconstruction, now including low-resolution color capture.
  • Kinect Fusion Head Scanning-WPF: Demonstrates how to leverage a combination of Kinect Fusion and Face Tracking to scan high resolution models of faces and heads.
  • Updated Samples
  • Kinect Fusion Explorer-D2D, Kinect Fusion Explorer-WPF: Demonstrates additional features of Kinect Fusion for 3D reconstruction, now including low-resolution color capture. Please review the documentation for the hardware and software requirements. (Note: Suitable DirectX11 graphics card required for real-time reconstruction). The Explorer samples have been updated to always create a volume with the option of capturing color, hence GPU memory requirements have doubled compared to the v1.7 Explorer samples for the same voxel resolutions. Kinect Fusion Explorer-D2D also now integrates the Camera Pose Finder for increased robustness to failed tracking.
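A back-of-the-envelope sketch of why always creating a color-capable volume doubles GPU memory for the same voxel resolution. The per-voxel sizes here (4 bytes for the packed depth TSDF, 4 more bytes for color) are an assumption for illustration, not documented SDK internals:

```cpp
#include <cstdint>

// Rough reconstruction-volume memory estimate. Assumes (hypothetically)
// 4 bytes per voxel for the depth TSDF (packed distance + weight) plus
// 4 bytes per voxel for color when color capture is enabled — which is
// why enabling color doubles the footprint at a given voxel resolution.
std::uint64_t volumeBytes(std::uint32_t voxelsPerAxis, bool withColor) {
    std::uint64_t voxels = static_cast<std::uint64_t>(voxelsPerAxis)
                         * voxelsPerAxis * voxelsPerAxis;
    std::uint64_t bytesPerVoxel = withColor ? 8u : 4u;
    return voxels * bytesPerVoxel;
}
```

Under these assumptions a 512³ volume costs about 512 MB without color and about 1 GB with it, which matches the doubling described above.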
  • Coordinate Mapping Basics-WPF, Coordinate Mapping Basics-D2D: These samples were previously named Green Screen in 1.7, renamed to Coordinate Mapping Basics in 1.8.

New in Kinect SDK 1.7.0.529 (Mar 19, 2013)

  • Introducing new Kinect Interactions:
  • Press for Selection. This provides, along with the new KinectInteraction Controls, improved selection capability and faster interactions. If you’re familiar with previous Kinect for Windows interaction capabilities, this replaces the hover select concept.
  • Grip and Move for Scrolling. This provides, along with the new KinectInteraction Controls, 1-to-1 manipulation for more precise scrolling, as well as large fast scrolls with a fling motion. If you’re familiar with previous Kinect for Windows interaction capabilities, this replaces the hover scroll model.
  • New interactions work best with the following setup:
  • User stands 1.5 - 2.0 meters away from the sensor
  • Sensor mounted directly above or below the screen showing the application, and centered
  • Screen size < 46 inches
  • Avoid extreme tilt angles
  • As always, avoid lots of natural light and reflective materials for more reliable tracking
  • Engagement Model Enhancements:
  • The Engagement model determines which user is currently interacting with the Kinect-enabled application. This has been greatly enhanced to provide more natural interaction when a user starts interacting with the application, and particularly when the sensor detects multiple people. Developers can also now override the supplied engagement model as desired.
  • APIs, Samples, and DLL Details:
  • A set of WPF interactive controls are provided to make it easy to incorporate these interactions into your applications.
  • Two samples use these controls: ControlsBasics-WPF and InteractionGallery-WPF. The controls can also be installed in source form via Toolkit Browser -> “Components” -> Microsoft.Kinect.Toolkit.Controls.
  • InteractionGallery - WPF uses the new KinectInteraction Controls in a customized app experience that demonstrates examples of navigation, engagement, article reading, picture viewing, video playback, and panning with grip. It was designed for 1920x1080 resolution screens in landscape layout. For those building applications with UI technologies other than WPF, the lower level InteractionStream APIs (native or managed) are available to build on top of. Native DLLs for InteractionStream are Kinect_Interaction170_32.dll and Kinect_Interaction170_64.dll under %KINECT_TOOLKIT_DIR%\Redist. Managed DLL for InteractionStream is Microsoft.Kinect.Toolkit.Interaction.dll found in %KINECT_TOOLKIT_DIR%\Assemblies.
  • There is no sample of InteractionStream API usage, however, Microsoft.Kinect.Toolkit.Controls source code (see info about controls samples above) is available and is a great example of using InteractionStream.
  • Kinect Fusion:
  • KinectFusion provides 3D object scanning and model creation using a Kinect for Windows sensor. The user can paint a scene with the Kinect camera and simultaneously see, and interact with, a detailed 3D model of the scene. Kinect Fusion can be run at interactive rates on supported GPUs, and can run at non-interactive rates on a variety of hardware. Running at non-interactive rates may allow larger volume reconstructions.
  • Kinect Fusion Samples:
  • Kinect Fusion Basics - WPF, Kinect Fusion Basics - D2D: Demonstrates basic use of the Kinect Fusion APIs for 3D reconstruction.
  • Kinect Fusion Explorer - WPF, Kinect Fusion Explorer - D2D: Demonstrates advanced 3D reconstruction features of Kinect Fusion, allowing adjustment of many reconstruction parameters, and export of reconstructed meshes.
  • Kinect Fusion Tech Specs:
  • Kinect Fusion can process data either on a DirectX 11 compatible GPU with C++ AMP, or on the CPU, by setting the reconstruction processor type during reconstruction volume creation. The CPU processor is best suited to offline processing as only modern DirectX 11 GPUs will enable real-time and interactive frame rates during reconstruction.
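At its core, the reconstruction volume described above stores a truncated signed distance function (TSDF) per voxel and folds each new depth frame in as a weighted running average, which is what makes the fused surface smoother than any single noisy frame. This is a minimal conceptual sketch of that one-voxel update from the general KinectFusion technique, not the SDK's actual (GPU-parallel) implementation:

```cpp
#include <algorithm>

// Minimal truncated-signed-distance-function (TSDF) update for one voxel.
// 'measured' is the signed distance from the voxel to the observed surface
// along the camera ray; it is truncated to +/-truncation and folded into a
// weighted running average, capped at maxWeight so the model can still
// adapt to change.
struct Voxel {
    float tsdf = 0.0f;   // truncated signed distance to the surface
    float weight = 0.0f; // how many observations contributed
};

void integrate(Voxel& v, float measured, float truncation, float maxWeight) {
    float d = std::max(-truncation, std::min(truncation, measured));
    v.tsdf = (v.tsdf * v.weight + d) / (v.weight + 1.0f);
    v.weight = std::min(v.weight + 1.0f, maxWeight);
}
```

The fused surface is then extracted where the TSDF crosses zero, which is how Kinect Fusion produces its meshes.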
  • Kinect Sensor Chooser - native:
  • The Kinect Sensor Chooser is a native component that allows simplified management of the Kinect Sensor lifetime, and an enhanced user experience when dealing with missing sensors, unpowered sensors, or sensors that get unplugged while an application is running. It provides similar capabilities to the KinectSensorChooser in the Microsoft.Kinect.Toolkit component. A NuiSensorChooserUI control is also provided for use in native applications. It provides a user experience similar to the managed KinectSensorChooserUI.
  • Introducing New Samples:
  • Controls Basics - WPF: Demonstrates the new KinectInteraction Controls, including hands-free button pressing and scrolling through large lists. This replaces the Basic Interactions sample from previous releases.
  • Interaction Gallery - WPF: Demonstrates basic interactions using the new KinectInteraction Controls.
  • KinectBridge with MATLAB Basics - D2D: Demonstrates how to do image processing with the Kinect sensor using MATLAB API.
  • KinectBridge with OpenCV Basics - D2D: Demonstrates how to do image processing with the Kinect sensor using OpenCV API.
  • Kinect Explorer - D2D: Demonstrates how to use the Kinect's ColorImageStream, DepthImageStream, SkeletonStream, and AudioSource with C++ and Direct2D. This replaces the SkeletalViewer C++ sample.
  • Kinect Fusion Basics - WPF, Kinect Fusion Basics - D2D: Demonstrates basic use of the Kinect Fusion APIs for 3D reconstruction.
  • Kinect Fusion Explorer - WPF, Kinect Fusion Explorer - D2D: Demonstrates advanced 3D reconstruction features of Kinect Fusion, allowing adjustment of many reconstruction parameters, and export of reconstructed meshes.

New in Kinect SDK 1.6.0.505 (Oct 10, 2012)

  • The Kinect sensor’s infrared stream is now exposed as a new color image format. You can use the infrared stream in many scenarios, such as:
  • Calibrating other color cameras to the Kinect’s depth sensor
  • Capturing grayscale images in low-light situations
  • Two infrared samples have been added to the toolkit, and you can also try out infrared in KinectExplorer. This provides developers with a wider spectrum of testing scenarios.
  • Note that the sensor is not capable of capturing infrared streams and color streams simultaneously. You can, however, capture infrared and depth streams simultaneously.
  • Extended depth data:
  • CopyDepthImagePixelData() now provides depth data beyond 4 meters when needed; please note that data quality degrades with distance. In addition, usability of the Depth Data API has been improved: bit masking is no longer required.
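For context, the bit masking that is no longer required looked like this: before this release, each 16-bit depth pixel packed the player index into its low 3 bits and the millimeter depth into the upper 13 bits, so applications had to shift and mask by hand.

```cpp
#include <cstdint>

// Pre-1.6 packed depth pixel layout: low 3 bits = player index,
// upper 13 bits = depth in millimeters.
std::uint16_t depthMillimeters(std::uint16_t packed) {
    return packed >> 3; // drop the 3-bit player index
}

std::uint8_t playerIndex(std::uint16_t packed) {
    return packed & 0x7; // 0 = no player, 1-6 = tracked player
}
```

CopyDepthImagePixelData() now returns depth and player index as separate fields instead, so this unpacking is purely historical.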
  • Color camera setting APIs:
  • The color camera settings can now be optimized to your environment.
  • You can now fine-tune the white balance, contrast, hue, saturation, and other settings—yielding a better color image for each individual environment.
  • To see the full list of settings that can be optimized, launch Kinect Explorer from the developer toolkit browser and review the exposure and color controls.
  • Raw Bayer:
  • The new raw Bayer color image format enables you to do your own Bayer to RGB conversions on central processing units (CPUs) or graphics processing units (GPUs). This enables developers to choose a higher quality conversion from Bayer to RGB than our SDK provides by default.
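To illustrate what "your own Bayer to RGB conversion" means, here is the simplest possible demosaic for an RGGB mosaic: collapse each 2x2 block (R G / G B) into one half-resolution RGB pixel, averaging the two greens. Real converters interpolate per pixel (bilinear, edge-aware, etc.) at full resolution; that quality/performance trade-off is exactly what the raw format hands over to the application. The function below is an illustrative sketch, not SDK code.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Rgb { std::uint8_t r, g, b; };

// Half-resolution demosaic of an RGGB Bayer mosaic: each 2x2 block
// contributes one output pixel, with the two green samples averaged.
std::vector<Rgb> demosaicRggbHalfRes(const std::vector<std::uint8_t>& bayer,
                                     std::size_t width, std::size_t height) {
    std::vector<Rgb> out;
    out.reserve((width / 2) * (height / 2));
    for (std::size_t y = 0; y + 1 < height; y += 2) {
        for (std::size_t x = 0; x + 1 < width; x += 2) {
            std::uint8_t r  = bayer[y * width + x];
            std::uint8_t g1 = bayer[y * width + x + 1];
            std::uint8_t g2 = bayer[(y + 1) * width + x];
            std::uint8_t b  = bayer[(y + 1) * width + x + 1];
            out.push_back({r, static_cast<std::uint8_t>((g1 + g2) / 2), b});
        }
    }
    return out;
}
```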
  • Accelerometer data APIs:
  • Data from the sensor's accelerometer is now exposed in the API. This enables detection of the sensor's orientation.
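Because a stationary accelerometer reports the gravity vector in the sensor's own frame, the sensor's tilt can be recovered with a little trigonometry. The axis convention below is illustrative only; consult the accelerometer API documentation for the actual coordinate frame.

```cpp
#include <cmath>

// Pitch (tilt up/down) of a stationary sensor from its gravity reading,
// assuming (hypothetically) x = right, y = down, z = forward, so that a
// level sensor reads roughly (0, -1, 0) in g units.
double pitchDegrees(double ax, double ay, double az) {
    // Angle between the sensor's forward axis and the horizontal plane.
    return std::atan2(az, std::sqrt(ax * ax + ay * ay))
         * 180.0 / 3.14159265358979323846;
}
```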
  • German language pack for speech recognition:
  • The SDK ships with a German speech recognition language pack that has been optimized for the sensor's microphone array, allowing developers to provide voice-enabled applications for German-speaking users.
  • New coordinate space conversion APIs:
  • There are several new APIs to convert data between coordinate spaces: color, depth, and skeleton. There are two sets of APIs: one for converting individual pixels and the other for converting an entire image frame. Beyond improving ease of use, this supports additional coordinate mapping functionality not previously available to developers.
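Conceptually, the depth-to-skeleton conversion these APIs perform is a pinhole-camera unprojection. The sketch below shows the math only; the focal length and principal point are illustrative values, and the real mapping APIs also account for the sensor's calibration.

```cpp
#include <cstdint>

struct CameraPoint { float x, y, z; }; // meters, skeleton/camera space

// Back-project a depth pixel (px, py, depth in mm) into a 3D camera-space
// point using hypothetical pinhole intrinsics (fx, fy = focal lengths in
// pixels; cx, cy = principal point).
CameraPoint depthPixelToCameraPoint(int px, int py, std::uint16_t depthMm,
                                    float fx, float fy, float cx, float cy) {
    CameraPoint p;
    p.z = depthMm / 1000.0f;    // millimeters -> meters
    p.x = (px - cx) * p.z / fx; // back-project through the optics
    p.y = (cy - py) * p.z / fy; // image y grows down, camera y grows up
    return p;
}
```

The frame-level APIs simply apply this per-pixel conversion (plus the depth-to-color registration) across an entire image in one call.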

New in Kinect SDK 1.5.2.331 (May 21, 2012)

  • The Kinect for Windows v1.5 SDK, driver, and runtime are 100% compatible with Kinect for Windows v1.0 application binaries.
  • Kinect for Windows Developer Toolkit:
  • As of this release, the SDK has been divided into a core SDK and a developer toolkit. These are two separate installs.
  • Samples
  • All Samples have been moved into the toolkit
  • We’ve continued significant sample investments in v1.5. There are many new samples in both C++ and C#. In addition, we’ve included a “Basics” series of samples with language coverage in C++, C#, and Visual Basic. To explore the list of new and updated samples, launch the Developer Toolkit Browser.
  • Components
  • We’ve taken KinectSensorChooser, formerly part of the WpfViewers and split the logic and UI into two different classes: KinectSensorChooser and KinectSensorChooserUI in Microsoft.Kinect.Toolkit.dll.
  • KinectSensorChooser can be used in non-WPF scenarios, as it is logic only, with no UI.
  • KinectSensorChooserUI (used with a KinectSensorChooser instance) is a WPF experience that has undergone significant improvements in user experience. ShapeGame has been migrated to use it.
  • Kinect Studio is a new tool which allows you to record and playback Kinect data to aid in development. For example: A developer writing a Kinect for Windows application for use in a shopping center, can record clips of users in the target environment and then replay those clips at a later time for development and test purposes.
  • Notes on Kinect Studio:
  • Kinect Studio must be used in conjunction with a Kinect for Windows application. Start the application, then start Kinect Studio, and you can then playback and record clips. When you play a clip back, it will be fed into the application as if it was live Kinect data.
  • Using Kinect Studio puts additional load on your machine which may result in a drop in frame rate. Using a faster CPU and memory will improve performance.
  • Your temporary file location (set via Tool/Options) should not be a network location.
  • Skeletal tracking in Near Range available:
  • It is now possible to receive Skeletal Tracking data when the Kinect camera is in the Near Range Setting.
  • This setting is disabled by default to ensure backward compatibility with 1.0 applications.
  • Enable it through the SkeletonStream.EnableTrackingInNearRange property (managed code), or by including the NUI_SKELETON_TRACKING_FLAG_ENABLE_IN_NEAR_RANGE flag when calling NuiSkeletonTrackingEnable (native code).
  • This works with both SkeletonTrackingMode set to Default or Seated.
  • We suggest using Seated mode when using Near Range, since in most scenarios the player's body will not be entirely visible. Ensure the torso and head of the players are visible for locking.
  • Seated skeletal tracking is now available:
  • This mode tracks a 10-joint head/shoulders/arms skeleton, ignoring the leg and hip joints.
  • It is not restricted to seated positions; it also tracks head/shoulders/arms when you are standing.
  • Seated mode has been added to Skeletal Viewer (C++) and Kinect Explorer (C#). To try out the mode to understand its tracking ability, launch one of those applications and change the tracking setting from Default to Seated.
  • For information on enabling Seated Mode, see the docs under Programming Guide -> Natural User Interface
  • Seated mode skeletal tracking has higher performance requirements than Default mode, especially when tracking two players. You may notice reduced frame rate depending on the configuration of your system.
  • Runtime Improvements:
  • KinectSensor.MapDepthFrameToColorFrame performance improvements: the performance of this operation has been significantly improved, with an average speed increase of 5x.
  • Depth and color frames will now be kept in sync with each other. The Kinect for Windows runtime will continuously monitor the depth and color streams and ensure that there is minimal drift between them.
  • In managed code you will see that the frames returned from the KinectSensor.AllFramesReady event will have been captured at nearly the same time and will have timestamps that verify this.
  • RGB Image quality:
  • The RGB camera stream quality has been improved for the RGB 640x480 @30fps and YUV 640x480 @15fps video modes.
  • The image quality is now sharper and more color-accurate in high and low lighting conditions.
  • Joint Orientation:
  • Kinect for Windows runtime provides Joint Orientation information for the skeletons tracked by the ST pipeline.
  • The Joint Orientation is provided in two forms:
  • A Hierarchical Rotation based on a bone relationship defined on the ST joint structure
  • An Absolute Orientation in Kinect camera coordinates.
  • The orientation information is provided in form of Quaternions and Rotation Matrices; however, the Quaternion values have a known bug in M2 which will be fixed for v1.5 final. Please use the matrices with M2.
  • This information can be used in Avatar animation scenarios as well as simple pose detection.
  • Note: please ignore the Joint Orientation confidence level, it will be removed in the final release.
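Since the two forms above encode the same orientation, the standard quaternion-to-rotation-matrix conversion shows how they relate. This is generic math for a normalized quaternion (w, x, y, z), not SDK code, and it is one way to work around the M2 quaternion bug by validating values against the matrices:

```cpp
#include <cmath>

struct Mat3 { float m[3][3]; };

// Standard conversion of a unit quaternion (w, x, y, z) to a 3x3 rotation
// matrix. Both representations describe the same rotation.
Mat3 quaternionToMatrix(float w, float x, float y, float z) {
    Mat3 r;
    r.m[0][0] = 1 - 2 * (y * y + z * z);
    r.m[0][1] = 2 * (x * y - w * z);
    r.m[0][2] = 2 * (x * z + w * y);
    r.m[1][0] = 2 * (x * y + w * z);
    r.m[1][1] = 1 - 2 * (x * x + z * z);
    r.m[1][2] = 2 * (y * z - w * x);
    r.m[2][0] = 2 * (x * z - w * y);
    r.m[2][1] = 2 * (y * z + w * x);
    r.m[2][2] = 1 - 2 * (x * x + y * y);
    return r;
}
```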
  • New supported languages for speech recognition:
  • Acoustic models have been created to allow speech recognition in several additional locales.
  • These are runtime components that are packaged individually and available for download.
  • Additional locales now supported:
  • en-AU
  • en-CA
  • en-GB
  • en-IE
  • en-NZ
  • es-ES
  • es-MX
  • fr-CA
  • fr-FR
  • it-IT
  • ja-JP
  • Face Tracking SDK:
  • The Face Tracking component tracks face position, orientation and facial features in real time.
  • A 3D mesh of the tracked face along with eye-brow position and mouth shape is animated in real time.
  • Multiple faces can be tracked simultaneously.
  • Face Tracking components can be used in native C++ code, and a managed wrapper is provided for C# and VB projects.
  • Documentation improvements:
  • Improved documentation content.
  • The documentation is now online in the MSDN Library. F1 help in Visual Studio will take you to the API documentation online.
  • The SDK Documentation CHM file is no longer distributed by setup.
  • Other:
  • Bug fixes, stability, etc.

New in Kinect SDK 1.0.3.191 (May 3, 2012)

  • The earlier version had the potential to produce certificate error messages during setup in limited scenarios. The issue was resolved in version 1.0.3.191. The error messages included: “The integrity of this certificate cannot be guaranteed,” “The signature of the certification cannot be verified,” and “This certification has an invalid digital signature.”

New in Kinect SDK 1.0.3.190 (Feb 1, 2012)

  • Support for up to 4 Kinect sensors plugged into the same computer, assuming the computer is powerful enough and the sensors are plugged into different USB controllers so that there is enough bandwidth available. (As before, skeletal tracking can only be used on one Kinect per process; the developer can choose which Kinect sensor.)
  • Skeletal Tracking:
  • The Kinect for Windows Skeletal Tracking system is now tracking subjects with results equivalent to the Skeletal Tracking library available in the November 2011 Xbox 360 Development Kit.
  • The Near Mode feature is now available. It is only functional on Kinect for Windows Hardware; see the Kinect for Windows Blog post for more information.
  • Robustness improvement including driver stability, runtime and audio fixes.
  • API Updates and Enhancements:
  • Many renaming changes to both the managed and native APIs for consistency and ease of development. Changes include:
  • Consolidation of managed and native runtime components into a minimal set of DLLs
  • Renaming of managed and native APIs to align with product team design guidelines
  • Renaming of headers, libs, and references assemblies
  • Significant managed API improvements:
  • Consolidation of namespaces into Microsoft.Kinect
  • Improvements to DepthData object
  • Skeleton data is now serializable
  • Audio API improvements, including the ability to connect to a specific Kinect on a computer with multiple Kinects
  • Improved error handling
  • Improved initialization APIs, including the addition of the Initializing state to the Status property and StatusChanged events
  • Set Tracked Skeleton API support is now available in native and managed code. Developers can use this API to lock on to 1 or 2 skeletons, among the possible 6 proposed.
  • Mapping APIs: The mapping APIs on KinectSensor that allow you to map depth pixels to color pixels have been updated for simplicity of usage, and are no longer restricted to 320x240 depth format.
  • The high-res RGB color mode of 1280x1024 has been replaced by the similar 1280x960 mode, because that is the mode supported by the official Kinect for Windows hardware.
  • Frame event improvements. Developers now receive frame events in the same order as Xbox 360, i.e. color then depth then skeleton, followed by an AllFramesReady event when all data frames are available.
  • Managed API Updates:
  • Correct FPS for High Res Mode:
  • ColorImageFormat.RgbResolution1280x960Fps15 has been renamed to ColorImageFormat.RgbResolution1280x960Fps12
  • Enum Polish:
  • Added Undefined enum value to a few Enums: ColorImageFormat, DepthImageFormat, and KinectStatus
  • Depth Values:
  • DepthImageStream now behaves as though IsTooFarRangeEnabled defaults to true (the property has been removed).
  • Beyond the depth values that are returnable (800-4000 for DepthRange.Default and 400-3000 for DepthRange.Near), we also will return the following values:
  • DepthImageStream.TooNearDepth (for things that we know are less than the DepthImageStream.MinDepth)
  • DepthImageStream.TooFarDepth (for things that we know are more than the DepthImageStream.MaxDepth)
  • DepthImageStream.UnknownDepth (for things that we don’t know.)
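An application might interpret readings against these ranges roughly as follows. The enum, the function, and the treatment of a zero reading as unknown are all illustrative assumptions; the real sentinels are exposed as the DepthImageStream.TooNearDepth / TooFarDepth / UnknownDepth properties.

```cpp
#include <cstdint>

// Illustrative classification against the documented ranges
// (800-4000 mm for DepthRange.Default, 400-3000 mm for DepthRange.Near).
enum class DepthClass { Valid, TooNear, TooFar, Unknown };

DepthClass classifyDepth(std::uint16_t mm,
                         std::uint16_t minMm, std::uint16_t maxMm) {
    if (mm == 0)    return DepthClass::Unknown; // assumed "no usable reading"
    if (mm < minMm) return DepthClass::TooNear;
    if (mm > maxMm) return DepthClass::TooFar;
    return DepthClass::Valid;
}
```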
  • Serializable Fixes for Skeleton Data:
  • We’ve added the SerializableAttribute on Skeleton, JointCollection, Joint and SkeletonPoint
  • Mapping APIs:
  • Performance improvements to the existing per pixel API.
  • Added a new API for doing full-frame conversions:
  • public void MapDepthFrameToColorFrame(DepthImageFormat depthImageFormat, short[] depthPixelData, ColorImageFormat colorImageFormat, ColorImagePoint[] colorCoordinates);
  • Added KinectSensor.MapSkeletonPointToColor()
  • public ColorImagePoint MapSkeletonPointToColor(SkeletonPoint skeletonPoint, ColorImageFormat colorImageFormat);
  • Misc:
  • Renamed Skeleton.Quality to Skeleton.ClippedEdges
  • Changed the return type of SkeletonFrame.FloorClipPlane to Tuple<float, float, float, float>.
  • Removed SkeletonFrame.NormalToGravity property.
  • Audio & Speech:
  • The Kinect SDK now includes the latest Microsoft Speech components (V11 QFE). Our runtime installer chain-installs the appropriate runtime components (32-bit speech runtime for 32-bit Windows, and both 32-bit and 64-bit speech runtimes for 64-bit Windows), plus an updated English Language pack (en-us locale) with improved recognition accuracy.
  • Updated acoustic model that improves the accuracy in the confidence numbers returned by the speech APIs
  • The Kinect Speech Acoustic Model now has the same icon and a similar description as the rest of the Kinect components
  • Echo cancellation will now recognize the system default speaker and attempt to cancel the noise coming from it automatically, if enabled.
  • Kinect Audio with AEC enabled now works even when no sound is coming from the speakers. Previously, this case caused problems.
  • Audio initialization has changed:
  • C++ code must call NuiInitialize before using the audio stream
  • Managed code must call KinectSensor.Start() before KinectAudioSource.Start()
  • It takes about 4 seconds after initialize is called before audio data begins to be delivered
  • Audio/Speech samples now wait for 4 seconds for Kinect device to be ready before recording audio or recognizing speech.
  • Samples:
  • A sample browser has been added, making it easier to find and view samples. A link to it is installed in the Start menu.
  • ShapeGame and KinectAudioDemo (via a new KinectSensorChooser component) demonstrate how to handle Kinect Status as well as inform users about erroneously trying to use a Kinect for Xbox 360 sensor.
  • The Managed Skeletal Viewer sample has been replaced by Kinect Explorer, which adds displays for audio beam angle and sound source angle/confidence, and provides additional control options for the color modes, depth modes, skeletal tracking options, and motor control. Click on “(click for settings)” at the bottom of the screen for all the bells and whistles.
  • Kinect Explorer (via an improved SkeletonViewer component) displays bones and joints differently, to better illustrate which joints are tracked with high confidence and which are not.
  • KinectAudioDemo no longer saves unrecognized utterances files in temp folder.
  • An example of AEC and Beam Forming usage has been added to the KinectAudioDemo application.
  • Redistributable Kinect for Windows Runtime package:
  • There is a redist package, located in the redist subdirectory of the SDK install location. This redist is an installer exe that an application can include in its setup program, which installs the Kinect for Windows runtime and driver components.

New in Kinect SDK 1.0.0.45 Beta 2 (Nov 4, 2011)

  • Significant improvements to skeletal tracking:
  • Accuracy has been improved overall, resulting in more precise tracking.
  • Skeletal Frame delivery is faster, resulting in reduced latency.
  • Skeletal Tracking is now multi-threaded and can take advantage of multiple CPU cores.
  • When using 2 Kinects, developers can now specify which one is used for skeletal tracking.
  • API support for detecting and managing device status changes, such as device unplugged, device plugged in, power unplugged, etc. Apps can reconnect to the Kinect device after it is plugged in, after the computer returns from suspend, etc. See the Shape Game sample code for the best example.
  • Developers using the audio within WPF no longer need to access the DMO from a separate thread. You can create the KinectAudioSource on the UI thread and simplify your code.
  • The driver, runtime, and SDK work correctly on the Windows 8 Developer Preview for desktop applications.
  • The SDK can be used to build 64-bit applications. Previously, only 32-bit applications could be built.
  • NuiImageBuffer has been replaced by INuiFrameTexture, defined in MSR_NuiImageCamera.h. It is no longer necessary to include the file NuiImageBuffer.h in your project.
  • The SDK install structure and default location have changed. The install location is in the environment variable %KINECTSDK_DIR% which defaults to C:\Program Files\Microsoft SDKs\Kinect\v1.0 Beta2
  • Sample code changes:
  • There is a new C# sample: KinectAudioDemo.
  • The samples have been updated. The C# samples also use a set of helpers, KinectWpfViewers, that may be useful in your apps.
  • The samples are now installed to Samples folder of the SDK directory, which defaults to C:\Program Files\Microsoft SDKs\Kinect\v1.0 Beta2\Samples. Unzip the samples file to view the source code. We recommend that you unzip them to somewhere outside of the Program Files directory.
  • Driver and runtime stability and performance improvements, especially for the managed API layer.