
DSO Marginalization

Posted: 2018-06-11 19:02:40


Whenever a new keyframe is created, FullSystem::makeKeyFrame() calls EnergyFunctional::insertFrame() to add the new keyframe to the windowed optimization. At this point HM and bM are resized to make room for the new frame, whose block sits at the end.

When a new frame enters, an old frame has to leave. Removing old keyframes from the window is the marginalization process, which has two steps: marginalizing points and marginalizing frames.

The whole flow lives in FullSystem::makeKeyFrame(). Before the new keyframe is added via EnergyFunctional::insertFrame(), FullSystem::flagFramesForMarginalization() is called to decide which frames will be marginalized.

After the new keyframe has been added and FullSystem::optimize() has finished the windowed optimization, FullSystem::flagPointsForRemoval() is called once to decide which points will be marginalized. Then EnergyFunctional::marginalizePointsF() marginalizes the points, and finally FullSystem::marginalizeFrame() marginalizes the frames.

This post first explains how the frames and points to be marginalized are chosen, then how the marginalization itself is carried out.

1 Choosing What to Marginalize

1.1 Frames

FullSystem::flagFramesForMarginalization() decides which frames to marginalize. To repeat: this runs before the newest keyframe is added.

The first check, if(setting_minFrameAge > setting_maxFrames), handles the case where these parameters are changed at runtime and the number of keyframes must be reduced; we ignore it here.

The function then loops over all keyframes and tests each with if( (in < setting_minPointsRemaining *(in+out) || fabs(logf((float)refToFh[0])) > setting_maxLogAffFacInWindow) && ((int)frameHessians.size())-flagged > setting_minFrames). The condition has the form A && B. B guarantees that at least setting_minFrames (5) frames remain in the window. A is a disjunction; either part suffices: 1. the fraction of inliers (in) among all points (in + out) is below setting_minPointsRemaining (0.05); 2. the photometric (affine brightness) change between this keyframe and the newest keyframe is too large.

If this still does not bring the number of frames below the threshold, one more frame is marginalized. Each frame fh is given a score based on its baselines to the older frames in the window (ffh.target->frameID > latest->frameID - setting_minFrameAge + 1). Working it through: if fh is close to those older frames, i.e. the baselines are short, fh gets marginalized. The older a frame is, the less it contributes to the current localization.

To summarize: frames with too few inliers, or whose brightness differs too much from the newest keyframe, are marginalized; if that does not reduce the frame count enough, one more frame is marginalized, namely the one closest to the old frames in the window.

1.2 Points

Points are chosen in FullSystem::flagPointsForRemoval(), which iterates over all frames and all points in the window. Points to be removed fall into two kinds: some are dropped outright (EFPointStatus::PS_DROP) and some are marginalized (EFPointStatus::PS_MARGINALIZE).

Condition A: if(ph->idepth_scaled < 0 || ph->residuals.size()==0) -- drop immediately. 1. the point has negative inverse depth and is therefore invalid; 2. it is not observed in any other frame (zero residuals), so it contributes no constraint between frame states.

Condition B: if(ph->isOOB(fhsToKeepPoints, fhsToMargPoints) || host->flaggedForMarginalization). This is only a coarse filter; it does not yet decide between drop and marginalize, and either part suffices. The second part admits every point hosted on a frame that is being marginalized. For the first part, PointHessian::isOOB roughly means: the point has many residuals and many of them are on frames being marginalized, or its two most recent residuals are both bad.

Condition C: if(ph->isInlierNew()) is tested after condition B passes. If the point has enough residuals this passes; if not, the point is set to drop: with few residuals it has little connection to the other frames, so there is no need to keep that information in the window. Once this passes, every residual of the point is linearized, i.e. EFResidual::fixLinearizationF() is called, which sets EFResidual::res_toZeroF using \({\partial r_{21} \over \partial \xi_1}{\delta \xi_1} + {\partial r_{21} \over \partial \xi_2}{\delta \xi_2} + {\partial r_{21} \over \partial C}{\delta C} + {\partial r_{21} \over \partial \rho_1}{\delta \rho_1} + {\partial r_{21} \over \partial l_1}{\delta l_1} + {\partial r_{21} \over \partial l_2}{\delta l_2}\)

Condition D: if(ph->idepth_hessian > setting_minIdepthH_marg) is tested after the linearization above. PointHessian::idepth_hessian represents \(\sum_{i=1}^{N} \left({\partial r^{(i)} \over \partial \rho^{(j)}}\right)^T {\partial r^{(i)} \over \partial \rho^{(j)}}\). If it is large enough, i.e. the inverse depth has enough influence on the residuals, the point is marginalized; otherwise it is dropped.

2 The Marginalization Procedure

Let \(x\) denote all optimized variables, \(x = \begin{bmatrix} \rho & X\end{bmatrix}^T\): the inverse depths of all points \(\rho\) (\(M \times 1\)), and the camera intrinsics plus frame states \(X\) (\(68 \times 1\): 4 intrinsics plus 8 states per frame, 6 for pose and 2 for affine brightness). Splitting \(x\) by whether a variable is marginalized, \(x = \begin{bmatrix} x_{\alpha} & x_{\beta}\end{bmatrix}^T\), where \(x_\alpha\) are the marginalized variables and \(x_\beta\) the kept ones. Reordering the optimization variables gives the normal equations:

\[\begin{align} \begin{bmatrix} H_{\alpha\alpha} & H_{\alpha\beta} \\ H_{\beta\alpha} & H_{\beta\beta}\end{bmatrix} \begin{bmatrix} \delta x_\alpha \\ \delta x_\beta \end{bmatrix} = - \begin{bmatrix} J_\alpha^T r \\ J_\beta^T r \end{bmatrix} \end{align}\]
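To see where the reduced system comes from, solve the first block row for \(\delta x_\alpha\):

\[\begin{align} \delta x_\alpha = -H_{\alpha\alpha}^{-1}\left(J_\alpha^T r + H_{\alpha\beta}\,\delta x_\beta\right) \end{align}\]

and substitute it into the second block row \(H_{\beta\alpha}\delta x_\alpha + H_{\beta\beta}\delta x_\beta = -J_\beta^T r\); collecting the terms in \(\delta x_\beta\) gives the result.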

After marginalization:

\[\begin{align} (H_{\beta\beta} - H_{\beta\alpha}H_{\alpha\alpha}^{-1}H_{\alpha\beta}) \delta x_\beta = -(J_\beta^Tr - H_{\beta\alpha}H_{\alpha\alpha}^{-1}J_\alpha^Tr) \label{eq:after_marg} \end{align}\]

2.1 Points

First, marginalize only the points. Then

\[\begin{align} x_\alpha &= \rho_\alpha \\ x_\beta &= \begin{bmatrix} \rho_\beta \\ X \end{bmatrix} \\ r &= \begin{bmatrix} r_\alpha \\ r_\beta \end{bmatrix}\end{align}\]

The dimensions of the variables are:

\[\begin{align} \begin{matrix} r: N \times 1 & r_\alpha: N_\alpha \times 1 & r_\beta: N_\beta \times 1 \\ \rho: M \times 1 & \rho_\alpha: M_\alpha \times 1 & \rho_\beta: M_\beta \times 1 \end{matrix} \end{align}\]

When deriving the Jacobians, be clear about how they decompose -- it is easy to get this wrong.

\[\begin{align} J_\alpha &= \frac{\partial r}{\partial x_\alpha} = \frac{\partial r}{\partial \rho_\alpha} = \begin{bmatrix} \frac{\partial r_\alpha}{\partial \rho_\alpha} \\ \frac{\partial r_\beta}{\partial \rho_\alpha} \end{bmatrix} = \begin{bmatrix} \frac{\partial r_\alpha}{\partial \rho_\alpha} \\ 0 \end{bmatrix} \\ J_\beta &= \frac{\partial r}{\partial x_\beta} = \begin{bmatrix} \frac{\partial r}{\partial \rho_\beta} & \frac{\partial r}{\partial X} \end{bmatrix} = \begin{bmatrix} \frac{\partial r_\alpha}{\partial \rho_\beta} & \frac{\partial r_\alpha}{\partial X} \\ \frac{\partial r_\beta}{\partial \rho_\beta} & \frac{\partial r_\beta}{\partial X} \end{bmatrix} \notag \\ &= \begin{bmatrix} 0 & \frac{\partial r_\alpha}{\partial X} \\ \frac{\partial r_\beta}{\partial \rho_\beta} & \frac{\partial r_\beta}{\partial X} \end{bmatrix} \end{align}\]

Now evaluate both sides of (\ref{eq:after_marg}) to set up the equation.

\[\begin{align} H_{\beta\beta} &= J_\beta^TJ_\beta = \begin{bmatrix} 0 & (\frac{\partial r_\beta}{\partial \rho_\beta})^T \\ (\frac{\partial r_\alpha}{\partial X})^T & (\frac{\partial r_\beta}{\partial X})^T \end{bmatrix} \begin{bmatrix} 0 & \frac{\partial r_\alpha}{\partial X} \\ \frac{\partial r_\beta}{\partial \rho_\beta} & \frac{\partial r_\beta}{\partial X} \end{bmatrix} \notag \\ &= \begin{bmatrix} (\frac{\partial r_\beta}{\partial \rho_\beta})^T \frac{\partial r_\beta}{\partial \rho_\beta} & (\frac{\partial r_\beta}{\partial \rho_\beta})^T \frac{\partial r_\beta}{\partial X} \\ (\frac{\partial r_\beta}{\partial X})^T \frac{\partial r_\beta}{\partial \rho_\beta} & (\frac{\partial r_\alpha}{\partial X})^T \frac{\partial r_\alpha}{\partial X} + (\frac{\partial r_\beta}{\partial X})^T \frac{\partial r_\beta}{\partial X}\end{bmatrix} \end{align}\]

\[\begin{align} J_\beta^T r &= \begin{bmatrix} 0 & (\frac{\partial r_\beta}{\partial \rho_\beta})^T \\ (\frac{\partial r_\alpha}{\partial X})^T & (\frac{\partial r_\beta}{\partial X})^T \end{bmatrix} \begin{bmatrix} r_\alpha \\ r_\beta \end{bmatrix} \notag \\ &= \begin{bmatrix} (\frac{\partial r_\beta}{\partial \rho_\beta})^T r_\beta \\ (\frac{\partial r_\alpha}{\partial X})^T r_\alpha + (\frac{\partial r_\beta}{\partial X})^T r_\beta\end{bmatrix}\end{align}\]

\[\begin{align} H_{\beta\alpha}H_{\alpha\alpha}^{-1}H_{\alpha\beta} &= J_\beta^TJ_\alpha(J_\alpha^TJ_\alpha)^{-1}J_\alpha^TJ_\beta \notag \\ &= \begin{bmatrix} 0 & (\frac{\partial r_\beta}{\partial \rho_\beta})^T \\ (\frac{\partial r_\alpha}{\partial X})^T & (\frac{\partial r_\beta}{\partial X})^T \end{bmatrix} \begin{bmatrix} \frac{\partial r_\alpha}{\partial \rho_\alpha} \\ 0 \end{bmatrix} \begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial \rho_\alpha} \end{bmatrix}^{-1} \begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T & 0\end{bmatrix} \notag \\ &\ \ \ \ \ \begin{bmatrix} 0 & \frac{\partial r_\alpha}{\partial X} \\ \frac{\partial r_\beta}{\partial \rho_\beta} & \frac{\partial r_\beta}{\partial X} \end{bmatrix} \notag \\ &= \begin{bmatrix} 0 \\ (\frac{\partial r_\alpha}{\partial X})^T\frac{\partial r_\alpha}{\partial \rho_\alpha}\end{bmatrix} \begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial \rho_\alpha} \end{bmatrix}^{-1} \begin{bmatrix} 0 & (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial X} \end{bmatrix}\notag \\ &= \begin{bmatrix} 0_{M_\beta\times M_\beta} & 0_{M_\beta\times68} \\ 0_{68\times M_\beta} & {(\frac{\partial r_\alpha}{\partial X})^T\frac{\partial r_\alpha}{\partial \rho_\alpha}\begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial \rho_\alpha} \end{bmatrix}^{-1}(\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial X}}_{68\times68} \end{bmatrix}\end{align}\]

\[\begin{align} H_{\beta\alpha}H_{\alpha\alpha}^{-1}J_\alpha^Tr &= J_\beta^TJ_\alpha(J_\alpha^TJ_\alpha)^{-1}J_\alpha^Tr \notag \\ &= \begin{bmatrix} 0 & (\frac{\partial r_\beta}{\partial \rho_\beta})^T \\ (\frac{\partial r_\alpha}{\partial X})^T & (\frac{\partial r_\beta}{\partial X})^T \end{bmatrix} \begin{bmatrix} \frac{\partial r_\alpha}{\partial \rho_\alpha} \\ 0 \end{bmatrix} \begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial \rho_\alpha} \end{bmatrix}^{-1} \begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T & 0\end{bmatrix} \notag \\ &\ \ \ \ \ \begin{bmatrix} r_\alpha \\ r_\beta \end{bmatrix} \notag \\ &= \begin{bmatrix} 0_{M_\beta \times 1} \\ {(\frac{\partial r_\alpha}{\partial X})^T\frac{\partial r_\alpha}{\partial \rho_\alpha}\begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial \rho_\alpha} \end{bmatrix}^{-1}(\frac{\partial r_\alpha}{\partial \rho_\alpha})^T r_\alpha}_{68\times1} \end{bmatrix}\end{align}\]

This yields

\[\begin{align} &H_{\beta\beta} - H_{\beta\alpha}H_{\alpha\alpha}^{-1}H_{\alpha\beta} \notag \\ &= \begin{bmatrix} {(\frac{\partial r_\beta}{\partial \rho_\beta})^T \frac{\partial r_\beta}{\partial \rho_\beta}}_{M_\beta\times M_\beta} & {(\frac{\partial r_\beta}{\partial \rho_\beta})^T \frac{\partial r_\beta}{\partial X}}_{M_\beta\times68} \\ {(\frac{\partial r_\beta}{\partial X})^T \frac{\partial r_\beta}{\partial \rho_\beta}}_{68\times M_\beta} & (\frac{\partial r_\alpha}{\partial X})^T \frac{\partial r_\alpha}{\partial X} + (\frac{\partial r_\beta}{\partial X})^T \frac{\partial r_\beta}{\partial X} - {(\frac{\partial r_\alpha}{\partial X})^T\frac{\partial r_\alpha}{\partial \rho_\alpha}\begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial \rho_\alpha} \end{bmatrix}^{-1}(\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial X}}_{68\times68} \end{bmatrix} \end{align}\]

\[\begin{align} &J_\beta^Tr - H_{\beta\alpha}H_{\alpha\alpha}^{-1}J_\alpha^Tr \notag \\ &= \begin{bmatrix} {(\frac{\partial r_\beta}{\partial \rho_\beta})^T r_\beta}_{M_\beta \times 1} \\ {(\frac{\partial r_\alpha}{\partial X})^T r_\alpha + (\frac{\partial r_\beta}{\partial X})^T r_\beta - (\frac{\partial r_\alpha}{\partial X})^T\frac{\partial r_\alpha}{\partial \rho_\alpha}\begin{bmatrix} (\frac{\partial r_\alpha}{\partial \rho_\alpha})^T\frac{\partial r_\alpha}{\partial \rho_\alpha} \end{bmatrix}^{-1}(\frac{\partial r_\alpha}{\partial \rho_\alpha})^T r_\alpha}_{68\times1} \end{bmatrix} \end{align}\]

The second row is the prior on the frame states and intrinsics (\(X\)); it is added into HM and bM. (Why added? Because \(X\) already has a prior, and this one comes from \(\rho_\alpha\). Priors accumulate by multiplying probabilities; for Gaussians, multiplying densities corresponds to adding the quadratic error terms.) The computation here matches "DSO windowed optimization formulas", so the code is the same, just with only a subset of points participating. For details see "DSO windowed optimization code (2)" and "DSO windowed optimization code (3)".
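The "multiply probabilities = add quadratic costs" remark can be made explicit for two Gaussian factors over the same variable \(x\), written in information form:

\[\begin{align} p_1(x)\,p_2(x) &\propto \exp\!\left(-\tfrac{1}{2} x^T H_1 x - b_1^T x\right)\exp\!\left(-\tfrac{1}{2} x^T H_2 x - b_2^T x\right) \notag \\ &= \exp\!\left(-\tfrac{1}{2} x^T (H_1 + H_2)\, x - (b_1 + b_2)^T x\right) \end{align}\]

so accumulating a new prior amounts to simply adding its \(H\) and \(b\) blocks into HM and bM.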

The first row is the prior on the inverse depths of the kept points (\(\rho_\beta\)). Why is this prior not kept? And if it were, how would one store it?

2.2 Frames

This part is considerably more involved. I will write it up after studying the OKVIS paper against the code.



Original post: https://www.cnblogs.com/JingeTU/p/9168315.html
