1. Projecting 3D points to 2D in OpenCV from the camera parameters; straight to the code:
Input: 3D coordinates + rotation and translation matrices (the so-called camera pose) + camera intrinsics (including the distortion coefficients)
Output: 2D coordinates
(1. The projection function: projects 3D points to 2D points from the camera parameters (including the distortion coefficients).
2. Be clear about what R and t actually mean:
row i of R is the unit vector along the i-th axis of the camera coordinate system, expressed in world coordinates;
column i of R is the unit vector along the i-th axis of the world coordinate system, expressed in camera coordinates;
t is the position of the world origin in camera coordinates;
-R^T * t is the position of the camera origin in world coordinates.
)
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <string>
#include <vector>
std::vector<cv::Point3d> Generate3DPoints();
int main(int argc, char* argv[])
{
// Read 3D points
std::vector<cv::Point3d> objectPoints = Generate3DPoints();
std::vector<cv::Point2d> imagePoints;
cv::Mat intrisicMat(3, 3, cv::DataType<double>::type); // Intrinsic matrix
intrisicMat.at<double>(0, 0) = 1.6415318549788924e+003;
intrisicMat.at<double>(1, 0) = 0;
intrisicMat.at<double>(2, 0) = 0;
intrisicMat.at<double>(0, 1) = 0;
intrisicMat.at<double>(1, 1) = 1.7067753507885654e+003;
intrisicMat.at<double>(2, 1) = 0;
intrisicMat.at<double>(0, 2) = 5.3262822453148601e+002;
intrisicMat.at<double>(1, 2) = 3.8095355839052968e+002;
intrisicMat.at<double>(2, 2) = 1;
cv::Mat rVec(3, 1, cv::DataType<double>::type); // Rotation vector
rVec.at<double>(0) = -3.9277902400761393e-002;
rVec.at<double>(1) = 3.7803824407602084e-002;
rVec.at<double>(2) = 2.6445674487856268e-002;
cv::Mat tVec(3, 1, cv::DataType<double>::type); // Translation vector
tVec.at<double>(0) = 2.1158489381208221e+000;
tVec.at<double>(1) = -7.6847683212704716e+000;
tVec.at<double>(2) = 2.6169795190294256e+001;
cv::Mat distCoeffs(5, 1, cv::DataType<double>::type); // Distortion vector
distCoeffs.at<double>(0) = -7.9134632415085826e-001;
distCoeffs.at<double>(1) = 1.5623584435644169e+000;
distCoeffs.at<double>(2) = -3.3916502741726508e-002;
distCoeffs.at<double>(3) = -1.3921577146136694e-002;
distCoeffs.at<double>(4) = 1.1430734623697941e+002;
std::cout << "Intrinsic matrix: " << intrisicMat << std::endl << std::endl;
std::cout << "Rotation vector: " << rVec << std::endl << std::endl;
std::cout << "Translation vector: " << tVec << std::endl << std::endl;
std::cout << "Distortion coef: " << distCoeffs << std::endl << std::endl;
cv::projectPoints(objectPoints, rVec, tVec, intrisicMat, distCoeffs, imagePoints);
std::cout << "Projected image points: " << imagePoints << std::endl << std::endl;
std::cout << "Press any key to exit.";
std::cin.ignore();
std::cin.get();
return 0;
}
std::vector<cv::Point3d> Generate3DPoints()
{
std::vector<cv::Point3d> points;
double x, y, z;
x = .5; y = .5; z = -.5;
points.push_back(cv::Point3d(x, y, z));
x = .5; y = .5; z = .5;
points.push_back(cv::Point3d(x, y, z));
x = -.5; y = .5; z = .5;
points.push_back(cv::Point3d(x, y, z));
x = -.5; y = .5; z = -.5;
points.push_back(cv::Point3d(x, y, z));
x = .5; y = -.5; z = -.5;
points.push_back(cv::Point3d(x, y, z));
x = -.5; y = -.5; z = -.5;
points.push_back(cv::Point3d(x, y, z));
x = -.5; y = -.5; z = .5;
points.push_back(cv::Point3d(x, y, z));
for (unsigned int i = 0; i < points.size(); ++i)
{
std::cout << points[i] << std::endl << std::endl;
}
return points;
}
2. Computing the camera pose with OpenCV's solvePnP function:
http://blog.csdn.net/aptx704610875/article/details/48915149
or http://www.cnblogs.com/gaoxiang12/p/4659805.html
Input: 3D coordinates, 2D coordinates + camera intrinsics (including the distortion coefficients)
Output: camera pose (rotation and translation matrices)
3. An excellent blog post!: http://www.cnblogs.com/gaoxiang12/p/4652478.html (it also covers point-cloud stitching / 3D reconstruction from RGB-D images, in great detail!)
(it deals with 3D point clouds, so besides OpenCV it also uses PCL)
Theory + code for the 2D -> 3D direction.
Input: 2D coordinates, camera intrinsics (including the distortion coefficients), camera pose (plus, for RGB-D data, the per-pixel depth)
Output: 3D coordinates
(In fact, for 1, 2 and 3: among the 3D coordinates, the 2D coordinates and the camera pose (with the camera intrinsics known), any two of them determine the third!)
4. To capture 3D models with a Kinect v1, an essential first step is calibrating the cameras (the depth camera and the color camera are at different positions, so there is an offset between them):
http://blog.csdn.net/aichipmunk/article/details/9264703
(Additional notes:
① To save the captured depth and color images you can use OpenNI code, or modify the PCL KinFu code; for OpenNI see my post on setting up PCL.
② After capturing the depth and color images, you can also calibrate with the MATLAB toolbox: http://blog.csdn.net/wangxiaokun671903/article/details/38925553 (this yields the intrinsics and extrinsics of both the depth and color cameras).
③ Once you have the intrinsics and extrinsics, use items 1-3 above to register the depth and color images.)