Simple 3D Reconstruction with Kinect


By now everyone is surely familiar with the Kinect, and creative applications built on it have been appearing at an explosive pace. Many experts abroad have used the Kinect for 3D reconstruction, the most famous example being Kinect Fusion from Microsoft Research. You can watch this video:

http://v.ku6.com/show/7q2Sa__pa4-rWcAVtB3Xuw…html

or

http://v.youku.com/v_show/id_XNDcxOTg3MzUy.html

Unfortunately, Kinect Fusion is not open source, but PCL provides a comparable open-source implementation:

http://www.pointclouds.org/

If you are interested and have a sufficiently powerful machine, it is worth studying.

Having some free time recently, and itching to write code, I decided to build a 3D reconstruction of my own. It certainly won't be as powerful as Kinect Fusion; it is just for practice and fun. The code is available for download at the end.



1. Acquiring the Kinect depth image:

First, I use the official Microsoft Kinect SDK to control the Kinect, and OpenFrameworks for 3D rendering. OpenFrameworks (OF for short below) is an open-source toolkit that bundles many commonly used libraries, such as OpenGL, OpenCV, and Boost, and it also has a large collection of third-party addons, which makes it very convenient to use. For details, see

http://www.openframeworks.cc/

Before anything else, we need to configure OpenGL and the 3D scene:

void testApp::setup(){
	//Do some environment settings.
	ofSetVerticalSync(true);
	ofSetWindowShape(640,480);
	ofBackground(0,0,0);

	//Turn on depth test for OpenGL.
	glEnable(GL_DEPTH_TEST);
	glDepthFunc(GL_LEQUAL);
	glShadeModel(GL_SMOOTH);
	
	//Put a camera in the scene.
	m_camera.setDistance(3);
	m_camera.setNearClip(0.1f);

	//Turn on the light.
	m_light.enable();

	//Allocate memory to store point cloud and normals.
	m_cloud_map.Resize(DEPTH_IMAGE_WIDTH,DEPTH_IMAGE_HEIGHT);
	m_normal_map.Resize(DEPTH_IMAGE_WIDTH,DEPTH_IMAGE_HEIGHT);
	//Initialize Kinect.
	InitNui();
}

OF draws with OpenGL, so OpenGL functions (prefixed with gl) can be called directly; for convenience, OF also wraps a number of common operations in its own functions (prefixed with of). At the end of the code above there is an InitNui() function, which is where we initialize the Kinect:

void testApp::InitNui()
{
	m_init_succeeded = false;
	m_nui = NULL;
	
	int count = 0;
	HRESULT hr;

	hr = NuiGetSensorCount(&count);
	if (FAILED(hr) || count <= 0)
	{
		cout<<"No kinect sensor was found!!"<<endl;
		goto Final;
	}

	hr = NuiCreateSensorByIndex(0,&m_nui);
	if (FAILED(hr))
	{
		cout<<"Create Kinect Device Failed!!"<<endl;
		goto Final;
	}

	//We only just need depth data.
	hr = m_nui->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH);

	if (FAILED(hr))
	{
		cout<<"Initialize Kinect Failed!!"<<endl;
		goto Final;
	}

	//Resolution of 320x240 is good enough to reconstruct a 3D model.
	hr = m_nui->NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH,NUI_IMAGE_RESOLUTION_320x240,0,2,NULL,&m_depth_stream);
	if (FAILED(hr))
	{
		cout<<"Open Streams Failed!!"<<endl;
		goto Final;
	}

	m_init_succeeded = true;

	Final:
	if (FAILED(hr))
	{
		if (m_nui != NULL)
		{
			m_nui->NuiShutdown();
			m_nui->Release();
			m_nui = NULL;
		}
	}
}

Next, we need to copy each frame's depth data into our own buffer, so we write a dedicated function for that:

bool testApp::UpdateDepthFrame()
{
	if (!m_init_succeeded)return false;

	HRESULT hr;
	NUI_IMAGE_FRAME image_frame = {0};
	NUI_LOCKED_RECT locked_rect = {0};

	hr = m_nui->NuiImageStreamGetNextFrame(m_depth_stream,0,&image_frame);

	//If there's no new frame, we will return immediately.
	if (FAILED(hr))return false;

	hr = image_frame.pFrameTexture->LockRect(0,&locked_rect,NULL,0);
	if (SUCCEEDED(hr))
	{
		//Copy depth data to our own buffer.
		memcpy(m_depth_buffer,locked_rect.pBits,locked_rect.size);

		image_frame.pFrameTexture->UnlockRect(0);
	}

	//Give the frame back to the runtime so it can be reused.
	m_nui->NuiImageStreamReleaseFrame(m_depth_stream,&image_frame);

	return SUCCEEDED(hr);
}



Copyright notice: This is an original article by AIchipmunk, released under the CC 4.0 BY-SA license. When reposting, please include a link to the original source along with this notice.