I. Triangulation
Method 1: Derivation of the SVD approach
Assume the following reprojection relations hold between the world point and camera frames 1 and 2:
$$\begin{bmatrix} u_1\\ v_1\\ 1\end{bmatrix}=\frac{1}{s_1}KT_{1w}P_w,\qquad \begin{bmatrix} u_2\\ v_2\\ 1\end{bmatrix}=\frac{1}{s_2}KT_{2w}P_w$$
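For reference (standard pinhole notation, spelled out here as an assumption rather than taken from the original): $K$ is the $3\times 3$ intrinsic matrix, $T_{iw}=[\,R_{iw}\mid t_{iw}\,]$ is the $3\times 4$ world-to-camera-$i$ transform, $P_w=[X\ Y\ Z\ 1]^T$ is the homogeneous world point, and $s_i$ is the point's depth in camera $i$:
$$K=\begin{bmatrix} f_x & 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{bmatrix},\qquad T_{iw}=\begin{bmatrix} R_{iw} & t_{iw}\end{bmatrix}_{3\times 4}$$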
If we consider only the relative pose between cameras 1 and 2, i.e. take camera 1's frame as the reference (world) frame, this can be rewritten as:
$$\left\{\begin{aligned} \begin{bmatrix} u_1\\ v_1\\ 1\end{bmatrix}&=\frac{1}{s_1}K\cdot[\,I\mid 0\,]\cdot P_w\,,\quad P_1 = P_w =\begin{bmatrix} X\\ Y\\ Z\\ 1\end{bmatrix}\\ \begin{bmatrix} u_2\\ v_2\\ 1\end{bmatrix}&=\frac{1}{s_2}K\cdot[\,R_{21}\mid t\,]\cdot P_1 \end{aligned}\right.\tag{2}$$
The camera intrinsic matrix $K$ is generally known, so to absorb it we define the $3\times 4$ projection matrices:
$$pr_1 = K\cdot[\,I\mid 0\,],\qquad pr_2 = K\cdot[\,R_{21}\mid t\,]$$
Writing each of them in terms of its row vectors:
$$pr_1 = \begin{bmatrix} pr_{1,1}\\ pr_{1,2}\\ pr_{1,3}\end{bmatrix},\qquad pr_2 = \begin{bmatrix} pr_{2,1}\\ pr_{2,2}\\ pr_{2,3}\end{bmatrix}$$
the projection equations become:
$$\begin{bmatrix} s_1u_1\\ s_1v_1\\ s_1\end{bmatrix}=\begin{bmatrix} pr_{1,1}\\ pr_{1,2}\\ pr_{1,3}\end{bmatrix}P_1,\qquad \begin{bmatrix} s_2u_2\\ s_2v_2\\ s_2\end{bmatrix}=\begin{bmatrix} pr_{2,1}\\ pr_{2,2}\\ pr_{2,3}\end{bmatrix}P_1$$
Looking at the two equations, the third row of each gives the depth directly ($s_1 = pr_{1,3}\cdot P_1$, $s_2 = pr_{2,3}\cdot P_1$), so substituting it into the first two rows eliminates the depth parameters:
$$\begin{bmatrix} pr_{1,3}\cdot P_1\cdot u_1\\ pr_{1,3}\cdot P_1\cdot v_1\end{bmatrix}=\begin{bmatrix} pr_{1,1}\cdot P_1\\ pr_{1,2}\cdot P_1\end{bmatrix},\qquad \begin{bmatrix} pr_{2,3}\cdot P_1\cdot u_2\\ pr_{2,3}\cdot P_1\cdot v_2\end{bmatrix}=\begin{bmatrix} pr_{2,1}\cdot P_1\\ pr_{2,2}\cdot P_1\end{bmatrix}$$
Factoring out the unknown $P_1$ yields the linear equations:
$$\begin{bmatrix} pr_{1,3}\cdot u_1-pr_{1,1}\\ pr_{1,3}\cdot v_1-pr_{1,2}\end{bmatrix}P_1=0,\qquad \begin{bmatrix} pr_{2,3}\cdot u_2-pr_{2,1}\\ pr_{2,3}\cdot v_2-pr_{2,2}\end{bmatrix}P_1=0$$
That is:
$$\begin{bmatrix} pr_{1,3}\cdot u_1-pr_{1,1}\\ pr_{1,3}\cdot v_1-pr_{1,2}\\ pr_{2,3}\cdot u_2-pr_{2,1}\\ pr_{2,3}\cdot v_2-pr_{2,2}\end{bmatrix} P_1=0$$
We solve this equation for $P_1$ with an SVD. Note that the system is in fact overdetermined, so the SVD yields a least-squares solution (see matrix theory for how the least-squares solution of an overdetermined system is obtained).
Let:
$$A = \begin{bmatrix} pr_{1,3}\cdot u_1-pr_{1,1}\\ pr_{1,3}\cdot v_1-pr_{1,2}\\ pr_{2,3}\cdot u_2-pr_{2,1}\\ pr_{2,3}\cdot v_2-pr_{2,2}\end{bmatrix}$$
and apply the singular value decomposition to $A$:
$$A_{4\times 4} = UWV^T$$
That is,
$$A = U\cdot\begin{bmatrix}\sigma_1 & & & \\ & \sigma_2 & & \\ & & \sigma_3 & \\ & & & \sigma_4\end{bmatrix}\begin{bmatrix}V_1 & V_2 & V_3 & V_4\end{bmatrix}^T,\qquad \sigma_1\geq\sigma_2\geq\sigma_3\geq\sigma_4\geq 0$$
By the least-squares treatment of $Ax=0$, the least-squares solution is the eigenvector of $A^TA$ associated with its smallest eigenvalue.
Reference: a proof that the least-squares solution of $AX=0$ is the eigenvector of $A^TA$ corresponding to its smallest eigenvalue.
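A brief sketch of the argument (added here for clarity): minimizing $\|Ax\|$ over unit vectors $x$ gives
$$\min_{\|x\|=1}\|Ax\|^2=\min_{\|x\|=1}x^TA^TAx=\lambda_{\min}(A^TA),$$
attained when $x$ is the corresponding unit eigenvector. Since $A^TA=VW^2V^T$, the columns of $V$ are exactly these eigenvectors, and the minimizer is the last column $V_4$.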
So we take the last column of the $V$ matrix from the SVD as the solution for $P_1$; since it is a homogeneous 4-vector, it must still be normalized (divided by its fourth component) before use.
Method 2: Least-squares solution
The known projection relations ($P_1$ and $P_2$ are the point's coordinates in the camera-1 and camera-2 frames, respectively):
$$\begin{bmatrix} u_1\\ v_1\\ 1\end{bmatrix}=\frac{1}{s_1}KP_1,\qquad \begin{bmatrix} u_2\\ v_2\\ 1\end{bmatrix}=\frac{1}{s_2}KP_2$$
from which:
$$K^{-1}\begin{bmatrix} u_1\\ v_1\\ 1\end{bmatrix}=\frac{1}{s_1}P_1 = e_1,\qquad K^{-1}\begin{bmatrix} u_2\\ v_2\\ 1\end{bmatrix}=\frac{1}{s_2}P_2 = e_2$$
The camera-frame coordinates $P_1$ and $P_2$ are related by:
$$P_2 = R_{21}\cdot P_1+t_{21}$$
Combining this with the equations above gives:
$$K^{-1}\begin{bmatrix} u_2\\ v_2\\ 1\end{bmatrix}=\frac{1}{s_2}\left(s_1\cdot R_{21}\cdot K^{-1}\begin{bmatrix} u_1\\ v_1\\ 1\end{bmatrix}+t_{21}\right)$$
or, more compactly:
$$e_2=\frac{1}{s_2}(s_1\cdot R_{21}\cdot e_1+t_{21})\;\Longrightarrow\; s_2\cdot e_2=s_1\cdot R_{21}\cdot e_1+t_{21}\;\Longrightarrow\; \begin{bmatrix} -R_{21}\cdot e_1 & e_2\end{bmatrix}\begin{bmatrix} s_1\\ s_2\end{bmatrix}=t_{21}$$
This gives a linear system of the form:
$$Ax = b$$
where:
$$A=\begin{bmatrix} -R_{21}\cdot e_1 & e_2\end{bmatrix},\qquad x=\begin{bmatrix} s_1\\ s_2\end{bmatrix},\qquad b=t_{21}$$
From matrix theory, an inconsistent (unsolvable) system has a least-squares solution, unique when $A$ has full column rank; it is usually obtained through the full-rank (BC) factorization of $A$:
$$\text{If }\operatorname{rank}(A)=r:\quad A_{m\times n}=B_{m\times r}C_{r\times n}$$
$$\text{If } r=m\ (\text{full row rank}):\ A=E\cdot A;\qquad \text{if } r=n\ (\text{full column rank}):\ A=A\cdot E$$
where $E$ denotes the identity matrix.
The generalized (Moore-Penrose) inverse of $A$ is then:
$$A^{+} = C^T(CC^T)^{-1}(B^TB)^{-1}B^T$$
Special cases:
$$\text{full row rank: } A^+ = A^T(AA^T)^{-1},\qquad \text{full column rank: } A^+=(A^TA)^{-1}A^T$$
The least-squares solution is then:
$$x = A^{+}b$$
(The derivation is omitted here; consult a matrix theory reference.)
In our case $A$ is a $3\times 2$ matrix with full column rank, so the least-squares solution of this (generally inconsistent) system is:
$$\begin{bmatrix} s_1\\ s_2\end{bmatrix}= (A^TA)^{-1}A^T\cdot t_{21}$$
where $s_1$ and $s_2$ are the point's depths in the two camera coordinate frames.
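As a quick sanity check of the full-column-rank formula above, here is a small self-contained sketch (the matrix and vector values are made up for illustration) comparing $(A^TA)^{-1}A^Tb$ with the least-squares solution returned by Eigen's QR solver:

#include <Eigen/Dense>
#include <iostream>

int main()
{
    // A 3x2 matrix with full column rank, like the A used above
    Eigen::Matrix<double,3,2> A;
    A << 1, 0,
         0, 1,
         1, 1;
    Eigen::Vector3d b(1.0, 2.0, 2.5);

    // Least-squares solution via the pseudo-inverse formula from the text
    Eigen::Vector2d x_pinv = (A.transpose()*A).inverse()*A.transpose()*b;
    // The same least-squares problem solved with Eigen's QR decomposition
    Eigen::Vector2d x_qr = A.colPivHouseholderQr().solve(b);

    std::cout << x_pinv.transpose() << std::endl;
    std::cout << x_qr.transpose()   << std::endl; // should match x_pinv
    return 0;
}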
II. ORB_SLAM2 triangulation source code:
void Initializer::Triangulate(const cv::KeyPoint &kp1, const cv::KeyPoint &kp2, const cv::Mat &P1, const cv::Mat &P2, cv::Mat &x3D)
{
    cv::Mat A(4,4,CV_32F);

    A.row(0) = kp1.pt.x*P1.row(2)-P1.row(0);
    A.row(1) = kp1.pt.y*P1.row(2)-P1.row(1);
    A.row(2) = kp2.pt.x*P2.row(2)-P2.row(0);
    A.row(3) = kp2.pt.y*P2.row(2)-P2.row(1);

    cv::Mat u,w,vt;
    cv::SVD::compute(A,w,u,vt,cv::SVD::MODIFY_A| cv::SVD::FULL_UV);
    x3D = vt.row(3).t();                      // last column of V (last row of V^T) is the solution for P_1
    x3D = x3D.rowRange(0,3)/x3D.at<float>(3); // de-homogenize: divide by the 4th coordinate
}
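A minimal usage sketch (not taken from ORB_SLAM2 itself): it shows how the two 3x4 projection matrices passed in as P1 and P2 can be built, assuming K, R21 and t21 are CV_32F cv::Mat objects and kp1/kp2 are a matched keypoint pair; in ORB_SLAM2 Triangulate is a member of Initializer, so it would be called on an instance:

// Camera 1 is the reference frame: P1 = K [I | 0]
cv::Mat P1 = cv::Mat::zeros(3,4,CV_32F);
K.copyTo(P1.rowRange(0,3).colRange(0,3));

// Camera 2: P2 = K [R21 | t21]
cv::Mat P2(3,4,CV_32F);
R21.copyTo(P2.rowRange(0,3).colRange(0,3));
t21.copyTo(P2.rowRange(0,3).col(3));
P2 = K*P2;

cv::Mat x3D;
Triangulate(kp1, kp2, P1, P2, x3D);   // x3D: 3x1 point in the camera-1 frame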
III. Triangulation implemented with Eigen:
Method 1: SVD decomposition
To make the process easier to follow, here is a triangulation implementation using Eigen:
Eigen::Vector3d triangulatedByEigenSVD(Point2f pixel_1, Point2f pixel_2, Mat R, Mat t, Mat &K)
{
    Eigen::Matrix3d K_eigen;
    cv::cv2eigen(K, K_eigen); // K as an Eigen matrix; requires #include <opencv2/core/eigen.hpp>

    Eigen::Matrix<double,3,4> T_1, T_21;
    T_1 << 1, 0, 0, 0,
           0, 1, 0, 0,
           0, 0, 1, 0;
    T_21 << R.at<double>(0,0), R.at<double>(0,1), R.at<double>(0,2), t.at<double>(0,0),
            R.at<double>(1,0), R.at<double>(1,1), R.at<double>(1,2), t.at<double>(1,0),
            R.at<double>(2,0), R.at<double>(2,1), R.at<double>(2,2), t.at<double>(2,0);

    Eigen::Matrix<double,3,4> ProjectMatrix_1, ProjectMatrix_2;
    ProjectMatrix_1 = K_eigen * T_1;   // pr_1 = K [I | 0]
    ProjectMatrix_2 = K_eigen * T_21;  // pr_2 = K [R21 | t21]

    // Build the 4x4 homogeneous system A * P = 0
    Eigen::Matrix4d A;
    A.row(0) = ProjectMatrix_1.row(2)*pixel_1.x - ProjectMatrix_1.row(0);
    A.row(1) = ProjectMatrix_1.row(2)*pixel_1.y - ProjectMatrix_1.row(1);
    A.row(2) = ProjectMatrix_2.row(2)*pixel_2.x - ProjectMatrix_2.row(0);
    A.row(3) = ProjectMatrix_2.row(2)*pixel_2.y - ProjectMatrix_2.row(1);

    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::Matrix4d U = svd.matrixU();        // not needed below; kept for inspection
    Eigen::Matrix4d V = svd.matrixV();
    Eigen::Vector4d S = svd.singularValues(); // not needed below; kept for inspection

    Eigen::Vector4d p_w = V.col(3);   // last column of V: least-squares solution of A*P = 0
    p_w /= p_w(3,0);                  // de-homogenize
    return Eigen::Vector3d(p_w(0), p_w(1), p_w(2));
}
Method 2: Least-squares solution (fastest)
Eigen::Vector3d triangulatedByEigenLeastSquare(Point2f pixel_1, Point2f pixel_2, Mat R, Mat t, Mat &K)
{
    Eigen::Vector3d point_1, point_2;
    point_1 << pixel_1.x, pixel_1.y, 1;   // homogeneous pixel coordinates
    point_2 << pixel_2.x, pixel_2.y, 1;

    Eigen::Matrix3d K_eigen, R_eigen;
    Eigen::Vector3d t_eigen;
    cv::cv2eigen(K, K_eigen);   // K as an Eigen matrix
    cv::cv2eigen(R, R_eigen);   // R as an Eigen matrix
    cv::cv2eigen(t, t_eigen);   // t as an Eigen vector

    // A = [ -R21*e1  e2 ] with e_i = K^-1 * pixel_i (normalized camera coordinates)
    Eigen::Matrix<double,3,2> A;
    A.block(0,0,3,1) = -R_eigen*K_eigen.inverse()*point_1;
    A.block(0,1,3,1) = K_eigen.inverse()*point_2;

    // full row rank:    x = A^T (A A^T)^-1 b
    // full column rank: x = (A^T A)^-1 A^T b   (our case: A is 3x2)
    Eigen::Vector2d d = (A.transpose()*A).inverse()*A.transpose()*t_eigen; // d[0], d[1]: depths in camera frames 1 and 2
    Eigen::Vector3d p1 = d[0]*K_eigen.inverse()*point_1;     // P_1 = s_1 * e_1
    //Eigen::Vector3d p2 = d[1]*K_eigen.inverse()*point_2;   // P_2 can be recovered the same way
    return p1;
}
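A hypothetical call, assuming R and t (CV_64F cv::Mat) come from essential-matrix decomposition and K holds the intrinsics; the pixel values below are made up for illustration:

Point2f px1(320.5f, 240.1f), px2(315.2f, 238.7f);   // one matched pixel pair
Eigen::Vector3d P1_cam = triangulatedByEigenLeastSquare(px1, px2, R, t, K);
cout << "Point in camera-1 frame: " << P1_cam.transpose() << endl;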
Method 3: Using OpenCV's built-in function
Reference: code adapted from Visual SLAM: Fourteen Lectures (视觉SLAM十四讲) by Gao Xiang
// Pixel coordinates to (normalized) camera coordinates
Point3d pixel2camera(const Mat &K, const Point2d &p, const double depth = 1.0)
{
    return Point3d
    (
        depth*(p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
        depth*(p.y - K.at<double>(1, 2)) / K.at<double>(1, 1),
        depth
    );
}

void triangulation(vector<Point2f> pixel_1, vector<Point2f> pixel_2, Mat R, Mat t, vector<Point3d> &points)
{
    Mat projMatr1 = (Mat_<float>(3,4) <<
        1,0,0,0,
        0,1,0,0,
        0,0,1,0);
    Mat projMatr2 = (Mat_<float>(3,4) <<
        R.at<double>(0,0), R.at<double>(0,1), R.at<double>(0,2), t.at<double>(0,0),
        R.at<double>(1,0), R.at<double>(1,1), R.at<double>(1,2), t.at<double>(1,0),
        R.at<double>(2,0), R.at<double>(2,1), R.at<double>(2,2), t.at<double>(2,0)
        );
    // The projection matrices above contain no K, so cv::triangulatePoints must be
    // fed normalized camera coordinates; convert the pixels first (K is the global intrinsic matrix)
    vector<Point2f> projPoints1, projPoints2;
    for(size_t i = 0; i < pixel_1.size(); i++)
    {
        Point3d camera_1 = pixel2camera(K, pixel_1[i], 1.0);
        Point3d camera_2 = pixel2camera(K, pixel_2[i], 1.0);
        projPoints1.push_back(Point2f(camera_1.x, camera_1.y));
        projPoints2.push_back(Point2f(camera_2.x, camera_2.y));
    }
    Mat points4D;
    cv::triangulatePoints(projMatr1, projMatr2, projPoints1, projPoints2, points4D);
    // Convert from homogeneous to non-homogeneous coordinates
    for (int i = 0; i < points4D.cols; i++)
    {
        Mat x = points4D.col(i);
        x /= x.at<float>(3,0); // normalize by the 4th coordinate
        Point3d p(
            x.at<float>(0,0),
            x.at<float>(1,0),
            x.at<float>(2,0)
        );
        points.push_back(p);
    }
}
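A hypothetical call, assuming K (global), R, t and two vectors of matched pixel coordinates are already available (e.g. from feature matching and pose recovery):

vector<Point2f> pixel_1, pixel_2;   // matched pixel coordinates in image 1 and image 2
// ... fill pixel_1 and pixel_2 from feature matching ...
vector<Point3d> points;
triangulation(pixel_1, pixel_2, R, t, points);
for (const auto &p : points)
    cout << p << endl;              // triangulated points in the camera-1 frame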
IV. Selecting the recovered R, t:
We judge whether a candidate (R, t) is the correct one from the fraction of triangulated points that have positive depth in each of the two camera frames (pixel_1 holds the pixel coordinates of the matched features in camera 1, pixel_2 those in camera 2):
bool checkRt(Eigen::Matrix3d R_eigen, Eigen::Vector3d t_eigen, vector<Point2f> pixel_1, vector<Point2f> pixel_2)
{
    Mat R, t;
    cv::eigen2cv(R_eigen, R);
    cv::eigen2cv(t_eigen, t);

    vector<Eigen::Vector3d> P_1, P_2;
    for(size_t i = 0; i < pixel_1.size(); i++)
    {
        P_1.push_back(triangulatedByEigenSVD(pixel_1[i], pixel_2[i], R, t, K));
        // P2 = R*P1 + t
        P_2.push_back(R_eigen * P_1[i] + t_eigen);
    }

    // Count the points with positive depth (z > 0) in each camera frame
    int good_num_p1 = 0, good_num_p2 = 0;
    for(auto ptr_1 : P_1)
    {
        if (ptr_1(2,0) > 0)
        {
            good_num_p1++;
        }
    }
    for(auto ptr_2 : P_2)
    {
        if (ptr_2(2,0) > 0)
        {
            good_num_p2++;
        }
    }
    //cout << good_num_p1 << " " << good_num_p2;

    // Accept the candidate only if enough points lie in front of both cameras
    double score = (good_num_p1/(double)P_1.size())*(good_num_p2/(double)P_2.size());
    return score > 0.7;
}
Usage demo:
First, take the essential matrix recovered with OpenCV (essential_matrix) and decompose it by SVD to obtain the candidate rotations and translations (four combinations in total):
Eigen::Matrix3d A;
cv::cv2eigen(essential_matrix, A);
Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
auto U = svd.matrixU();
auto V = svd.matrixV();
auto S = svd.singularValues();
cout << "A = \n" << A << endl;
cout << "S = \n" << S << endl;
cout << "U = \n" << U << endl;
cout << "VT = \n" << V.transpose() << endl;
Eigen::Matrix3d A_rebuilt = U*S.asDiagonal()*V.transpose();
cout << "A constructed by U*S*VT is \n" << A_rebuilt << endl;

Eigen::Matrix3d Z, W;
Z << 0, 1, 0,
    -1, 0, 0,
     0, 0, 0;
W << 0,-1, 0,
     1, 0, 0,
     0, 0, 1;

// Four combinations; the depth sign information is needed to pick the right one.
Eigen::Matrix3d t_eigen_hat_1 = U*Z*U.transpose();             // equivalently, t is proportional to  U.col(2)
Eigen::Matrix3d t_eigen_hat_2 = U*Z.transpose()*U.transpose(); // equivalently, t is proportional to -U.col(2)
Eigen::Matrix3d R_eigen_1 = U*W*V.transpose();
Eigen::Matrix3d R_eigen_2 = U*W.transpose()*V.transpose();

// ---- convert the skew-symmetric matrices t^ to 3x1 vectors ----
Eigen::Vector3d t_eigen_1, t_eigen_2;
t_eigen_1 << -t_eigen_hat_1(1,2), t_eigen_hat_1(0,2), -t_eigen_hat_1(0,1);
t_eigen_2 << -t_eigen_hat_2(1,2), t_eigen_hat_2(0,2), -t_eigen_hat_2(0,1);
Then use the checkRt function to test and select among them:
bool index_1 = checkRt(R_eigen_1, t_eigen_1, pixel_1, pixel_2);
bool index_2 = checkRt(R_eigen_1, t_eigen_2, pixel_1, pixel_2);
bool index_3 = checkRt(R_eigen_2, t_eigen_1, pixel_1, pixel_2);
bool index_4 = checkRt(R_eigen_2, t_eigen_2, pixel_1, pixel_2);
cout << "The R1 t1 is score:" << index_1 << endl;
cout << "The R1 t2 is score:" << index_2 << endl;
cout << "The R2 t1 is score:" << index_3 << endl;
cout << "The R2 t2 is score:" << index_4 << endl;
cout << "The best Rotation and translation is :\n";
if(index_1 == 1)
{
    cout << "R_eigen_1:\n" << R_eigen_1 << endl;
    cout << "t_eigen_1:\n" << t_eigen_1.transpose() << endl;
}
else if(index_2 == 1)
{
    cout << "R_eigen_1:\n" << R_eigen_1 << endl;
    cout << "t_eigen_2:\n" << t_eigen_2.transpose() << endl;
}
else if(index_3 == 1)
{
    cout << "R_eigen_2:\n" << R_eigen_2 << endl;
    cout << "t_eigen_1:\n" << t_eigen_1.transpose() << endl;
}
else if(index_4 == 1)
{
    cout << "R_eigen_2:\n" << R_eigen_2 << endl;
    cout << "t_eigen_2:\n" << t_eigen_2.transpose() << endl;
}
Output:
The R1 t1 is score:1
The R1 t2 is score:0
The R2 t1 is score:0
The R2 t2 is score:0
The best Rotation and translation is :
R_eigen_1:
0.996291 -0.0532112 0.0676278
0.0503369 0.997784 0.0435187
-0.0697936 -0.0399531 0.996761
t_eigen_1:
-0.9663 -0.182217 0.18183
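As an aside (not part of the original demo): OpenCV's cv::recoverPose performs the same depth-positivity (cheirality) check internally, so it can be used to cross-check the selection above, assuming pixel_1 and pixel_2 hold the matched pixel coordinates:

cv::Mat R_cv, t_cv;
// Decomposes the essential matrix and keeps the (R, t) with the most points
// in front of both cameras; returns the number of such inlier points.
int good = cv::recoverPose(essential_matrix, pixel_1, pixel_2, K, R_cv, t_cv);
cout << "recoverPose R:\n" << R_cv << "\nt:\n" << t_cv.t() << endl;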