VINS-Mono Paper and Code Notes (1): Overview + feature_tracker

VINS-Mono is a must-read classic among VIO methods. I skimmed it when I first got into SLAM, but some of its details never became fully clear to me. Visual SLAM is more complex and harder to grasp than LiDAR-based approaches, and it is also where I stumble most in interviews. Over the next few posts I want to work through the paper and the code carefully and in depth.

Overall framework

First, let's look at the overall framework of the algorithm. The flowchart from the VINS-Mono paper is as follows:

The inputs are camera images (30 Hz) and IMU measurements (100 Hz).

From the code's perspective, the ROS launch file starts three nodes, which amount to three processes. Each process has topics it subscribes to and topics it publishes:

| Node | Subscribed topics | Published topics |
| --- | --- | --- |
| feature_tracker | IMAGE_TOPIC | "feature", "feature_img", "restart" |
| vins_estimator | IMU_TOPIC, "/feature_tracker/feature", "/feature_tracker/restart", "pose_graph/match_points" | "imu_propagate" |
| pose_graph | | |

feature_tracker

Overview

feature_tracker receives images and publishes the detected feature points, along with an image that has the feature points drawn on it (for visualization).

The structure of the feature point message is as follows:

# ROS message used to store 3-D points

# acquisition time of the data, and the ID of the coordinate frame (string)
Header header

# array of 3-D points; for 2-D points the last component is set to 1.
# VINS stores the undistorted x, y (on the normalized plane) here
geometry_msgs/Point32[] points

# extra per-point information; VINS adds the point ID, the raw (distorted) pixel
# coordinates u, v, and the feature velocity computed from the optical-flow correspondences
ChannelFloat32[] channels
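
The message type is sensor_msgs::PointCloud. As a minimal sketch (hypothetical consumer code, not from VINS-Mono itself), the receiving side can unpack it like this, assuming the channel order that feature_tracker publishes (id, u, v, velocity_x, velocity_y):

#include <sensor_msgs/PointCloud.h>

// Hypothetical example: read back one feature message published by feature_tracker.
void unpackFeatures(const sensor_msgs::PointCloud &msg)
{
    for (size_t i = 0; i < msg.points.size(); i++)
    {
        float x  = msg.points[i].x;                // undistorted x on the normalized plane
        float y  = msg.points[i].y;                // undistorted y (z is fixed to 1)
        int id   = (int)msg.channels[0].values[i]; // point ID (encodes the camera index as well)
        float u  = msg.channels[1].values[i];      // raw (distorted) pixel u
        float v  = msg.channels[2].values[i];      // raw (distorted) pixel v
        float vx = msg.channels[3].values[i];      // velocity on the normalized plane
        float vy = msg.channels[4].values[i];
        // ... hand the feature (x, y, id, u, v, vx, vy) off to the estimator ...
    }
}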

Contents of the feature_tracker folder:

feature_tracker
|- src
|  |- feature_tracker.cpp
|  |- feature_tracker.h
|  |- feature_tracker_node.cpp
|  |- parameters.cpp
|  |- parameters.h

There are three main source modules: feature_tracker_node.cpp is the system entry point of the feature-tracking node, feature_tracker(.cpp/.h) is the concrete implementation of the feature-tracking algorithm, and parameters(.cpp/.h) reads and stores the parameters.

Details

feature_tracker_node.cpp:
- int main() is the program entry point
- void img_callback() is the ROS callback that performs feature tracking, processing, and publishing for every incoming image message

Let's start with main(). It mainly covers initialization, parameter loading, and setting up subscribers and publishers. The core is subscribing to the raw image topic, tracking and detecting feature points inside img_callback, and then publishing them.

int main(int argc, char **argv)
{
    ros::init(argc, argv, "feature_tracker"); // initialize the node, named "feature_tracker"
    ros::NodeHandle n("~");                   // create the node handle
    ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Info);

    // read parameters
    readParameters(n); // --1 defined in parameters.cpp
    // read camera intrinsics
    for (int i = 0; i < NUM_OF_CAM; i++)
        trackerData[i].readIntrinsicParameter(CAM_NAMES[i]);

    // --2 for a fisheye camera, load the mask
    if (FISHEYE)
    {
        for (int i = 0; i < NUM_OF_CAM; i++) // NUM_OF_CAM is the constant 1
        {
            trackerData[i].fisheye_mask = cv::imread(FISHEYE_MASK, 0);
            if (!trackerData[i].fisheye_mask.data)
            {
                ROS_INFO("load mask fail");
                ROS_BREAK();
            }
            else
                ROS_INFO("load mask success");
        }
    }

    // subscribe to the raw image topic; img_callback processes each image
    ros::Subscriber sub_img = n.subscribe(IMAGE_TOPIC, 100, img_callback);

    // publish the detected and tracked features
    pub_img = n.advertise<sensor_msgs::PointCloud>("feature", 1000);

    // publish the image with the feature points drawn on it, for visualization
    pub_match = n.advertise<sensor_msgs::Image>("feature_img", 1000);
    pub_restart = n.advertise<std_msgs::Bool>("restart", 1000);
    /*
    if (SHOW_TRACK)
        cv::namedWindow("vis", cv::WINDOW_NORMAL);
    */
    ros::spin();
    return 0;
}
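
Note that because the NodeHandle is created in the private namespace ("~"), the advertised topic names are resolved under the node's name: "feature" becomes /feature_tracker/feature and "restart" becomes /feature_tracker/restart, which are exactly the names vins_estimator subscribes to in the table above.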

Next, let's look at what img_callback does. Its core is calling readImage on each trackerData (a FeatureTracker instance) to process the image.

void img_callback(const sensor_msgs::ImageConstPtr &img_msg)
{
    if (first_image_flag)
    {
        first_image_flag = false;
        first_image_time = img_msg->header.stamp.toSec();
        last_image_time = img_msg->header.stamp.toSec();
        return;
    }

    // if the time gap to the previous image message is unreasonable, reset the feature tracker
    if (img_msg->header.stamp.toSec() - last_image_time > 1.0 || img_msg->header.stamp.toSec() < last_image_time)
    {
        ROS_WARN("image discontinue! reset the feature tracker!");
        first_image_flag = true;
        last_image_time = 0;
        pub_count = 1;
        std_msgs::Bool restart_flag;
        restart_flag.data = true;
        pub_restart.publish(restart_flag);
        return;
    }
    last_image_time = img_msg->header.stamp.toSec();

    // keep the publishing rate close to the target frequency
    if (round(1.0 * pub_count / (img_msg->header.stamp.toSec() - first_image_time)) <= FREQ) // FREQ is the target publish rate read from the parameter file
    {
        PUB_THIS_FRAME = true;
        // if the rate so far is very close to FREQ, it has been fine; restart the count from the current frame
        if (abs(1.0 * pub_count / (img_msg->header.stamp.toSec() - first_image_time) - FREQ) < 0.01 * FREQ)
        {
            first_image_time = img_msg->header.stamp.toSec();
            pub_count = 0;
        }
    }
    else
        PUB_THIS_FRAME = false;

    // convert the ROS Image message into an OpenCV image
    cv_bridge::CvImageConstPtr ptr;
    if (img_msg->encoding == "8UC1")
    {
        sensor_msgs::Image img;
        img.header = img_msg->header;
        img.height = img_msg->height;
        img.width = img_msg->width;
        img.is_bigendian = img_msg->is_bigendian;
        img.step = img_msg->step;
        img.data = img_msg->data;
        img.encoding = "mono8";
        ptr = cv_bridge::toCvCopy(img, sensor_msgs::image_encodings::MONO8);
    }
    else
        ptr = cv_bridge::toCvCopy(img_msg, sensor_msgs::image_encodings::MONO8);

    cv::Mat show_img = ptr->image;
    TicToc t_r;
    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        ROS_DEBUG("processing camera %d", i);
        if (i != 1 || !STEREO_TRACK)
            trackerData[i].readImage(ptr->image.rowRange(ROW * i, ROW * (i + 1)), img_msg->header.stamp.toSec()); // key step: track and detect features in the image
        else
        {
            if (EQUALIZE)
            {
                cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
                clahe->apply(ptr->image.rowRange(ROW * i, ROW * (i + 1)), trackerData[i].cur_img);
            }
            else
                trackerData[i].cur_img = ptr->image.rowRange(ROW * i, ROW * (i + 1));
        }

#if SHOW_UNDISTORTION
        trackerData[i].showUndistortion("undistrotion_" + std::to_string(i));
#endif
    }

    // assign a global ID to every newly added feature point
    for (unsigned int i = 0;; i++)
    {
        bool completed = false;
        for (int j = 0; j < NUM_OF_CAM; j++)
            if (j != 1 || !STEREO_TRACK)
                completed |= trackerData[j].updateID(i);
        if (!completed)
            break;
    }

    // pack the feature IDs, positions, and velocities into a feature_points message and publish it
    if (PUB_THIS_FRAME)
    {
        pub_count++;
        sensor_msgs::PointCloudPtr feature_points(new sensor_msgs::PointCloud);
        sensor_msgs::ChannelFloat32 id_of_point;
        sensor_msgs::ChannelFloat32 u_of_point;
        sensor_msgs::ChannelFloat32 v_of_point;
        sensor_msgs::ChannelFloat32 velocity_x_of_point;
        sensor_msgs::ChannelFloat32 velocity_y_of_point;

        feature_points->header = img_msg->header;
        feature_points->header.frame_id = "world";

        vector<set<int>> hash_ids(NUM_OF_CAM);
        for (int i = 0; i < NUM_OF_CAM; i++)
        {
            auto &un_pts = trackerData[i].cur_un_pts;
            auto &cur_pts = trackerData[i].cur_pts;
            auto &ids = trackerData[i].ids;
            auto &pts_velocity = trackerData[i].pts_velocity;
            for (unsigned int j = 0; j < ids.size(); j++)
            {
                if (trackerData[i].track_cnt[j] > 1) // only publish features tracked more than once
                {
                    int p_id = ids[j];
                    hash_ids[i].insert(p_id);
                    geometry_msgs::Point32 p;
                    p.x = un_pts[j].x;
                    p.y = un_pts[j].y;
                    p.z = 1;

                    feature_points->points.push_back(p);
                    id_of_point.values.push_back(p_id * NUM_OF_CAM + i);
                    u_of_point.values.push_back(cur_pts[j].x);
                    v_of_point.values.push_back(cur_pts[j].y);
                    velocity_x_of_point.values.push_back(pts_velocity[j].x);
                    velocity_y_of_point.values.push_back(pts_velocity[j].y);
                }
            }
        }
        feature_points->channels.push_back(id_of_point);
        feature_points->channels.push_back(u_of_point);
        feature_points->channels.push_back(v_of_point);
        feature_points->channels.push_back(velocity_x_of_point);
        feature_points->channels.push_back(velocity_y_of_point);
        ROS_DEBUG("publish %f, at %f", feature_points->header.stamp.toSec(), ros::Time::now().toSec());
        // skip the first image, since there is no optical-flow velocity on the first image
        if (!init_pub)
        {
            init_pub = 1;
        }
        else
            pub_img.publish(feature_points);

        if (SHOW_TRACK)
        {
            ptr = cv_bridge::cvtColor(ptr, sensor_msgs::image_encodings::BGR8);
            //cv::Mat stereo_img(ROW * NUM_OF_CAM, COL, CV_8UC3);
            cv::Mat stereo_img = ptr->image;

            for (int i = 0; i < NUM_OF_CAM; i++)
            {
                cv::Mat tmp_img = stereo_img.rowRange(i * ROW, (i + 1) * ROW);
                cv::cvtColor(show_img, tmp_img, CV_GRAY2RGB);

                for (unsigned int j = 0; j < trackerData[i].cur_pts.size(); j++)
                {
                    // color encodes the track length: points tracked longer are drawn redder
                    double len = std::min(1.0, 1.0 * trackerData[i].track_cnt[j] / WINDOW_SIZE);
                    cv::circle(tmp_img, trackerData[i].cur_pts[j], 2, cv::Scalar(255 * (1 - len), 0, 255 * len), 2);
                    //draw speed line
                    /*
                    Vector2d tmp_cur_un_pts (trackerData[i].cur_un_pts[j].x, trackerData[i].cur_un_pts[j].y);
                    Vector2d tmp_pts_velocity (trackerData[i].pts_velocity[j].x, trackerData[i].pts_velocity[j].y);
                    Vector3d tmp_prev_un_pts;
                    tmp_prev_un_pts.head(2) = tmp_cur_un_pts - 0.10 * tmp_pts_velocity;
                    tmp_prev_un_pts.z() = 1;
                    Vector2d tmp_prev_uv;
                    trackerData[i].m_camera->spaceToPlane(tmp_prev_un_pts, tmp_prev_uv);
                    cv::line(tmp_img, trackerData[i].cur_pts[j], cv::Point2f(tmp_prev_uv.x(), tmp_prev_uv.y()), cv::Scalar(255 , 0, 0), 1 , 8, 0);
                    */
                    //char name[10];
                    //sprintf(name, "%d", trackerData[i].ids[j]);
                    //cv::putText(tmp_img, name, trackerData[i].cur_pts[j], cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));
                }
            }
            //cv::imshow("vis", stereo_img);
            //cv::waitKey(5);
            pub_match.publish(ptr->toImageMsg());
        }
    }
    ROS_INFO("whole feature tracker processing costs: %f", t_r.toc());
}
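
As a concrete example of the frequency control at the top of the callback: with a 30 Hz camera and FREQ = 10, the condition round(pub_count / elapsed) <= FREQ holds for roughly every third frame, so PUB_THIS_FRAME ends up true about 10 times per second. And whenever the achieved rate is within 1% of FREQ, first_image_time and pub_count are reset, so the ratio is always computed over a recent window instead of the entire history.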
readImage

The inputs are the image (cv::Mat) and the timestamp of the current image.

void FeatureTracker::readImage(const cv::Mat &_img, double _cur_time)
{
    cv::Mat img;
    TicToc t_r;
    cur_time = _cur_time;

    // whether to apply histogram equalization: when the image is too bright or too dark,
    // CLAHE is used to improve the contrast
    if (EQUALIZE)
    {
        cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
        TicToc t_c;
        clahe->apply(_img, img);
        ROS_DEBUG("CLAHE costs: %fms", t_c.toc());
    }
    else
        img = _img;

    if (forw_img.empty())
    {
        prev_img = cur_img = forw_img = img;
    }
    else
    {
        forw_img = img;
    }

    // forw_pts: feature points tracked into the newest frame
    forw_pts.clear();
    // cur_pts: feature points being tracked in the current frame
    if (cur_pts.size() > 0)
    {
        TicToc t_o;
        vector<uchar> status;
        vector<float> err;
        cv::calcOpticalFlowPyrLK(cur_img, forw_img, cur_pts, forw_pts, status, err, cv::Size(21, 21), 3); // --1 LK optical-flow feature tracking

        // if a feature has moved outside the image, mark its status invalid
        for (int i = 0; i < int(forw_pts.size()); i++)
            if (status[i] && !inBorder(forw_pts[i]))
                status[i] = 0;

        // remove the feature points with status[i] == 0
        reduceVector(prev_pts, status);
        reduceVector(cur_pts, status);
        reduceVector(forw_pts, status);
        reduceVector(ids, status);
        reduceVector(cur_un_pts, status);
        reduceVector(track_cnt, status);
        ROS_DEBUG("temporal optical flow costs: %fms", t_o.toc());
    }

    for (auto &n : track_cnt)
        n++;

    if (PUB_THIS_FRAME)
    {
        rejectWithF(); // reject outliers using the fundamental matrix
        ROS_DEBUG("set mask begins");
        TicToc t_m;
        setMask(); // sort the tracked points, keep those tracked longest, and thin out dense regions
        ROS_DEBUG("set mask costs %fms", t_m.toc());

        ROS_DEBUG("detect feature begins");
        TicToc t_t;
        int n_max_cnt = MAX_CNT - static_cast<int>(forw_pts.size());
        if (n_max_cnt > 0)
        {
            if (mask.empty())
                cout << "mask is empty " << endl;
            if (mask.type() != CV_8UC1)
                cout << "mask type wrong " << endl;
            if (mask.size() != forw_img.size())
                cout << "wrong size " << endl;
            cv::goodFeaturesToTrack(forw_img, n_pts, MAX_CNT - forw_pts.size(), 0.01, MIN_DIST, mask); // when there are too few features, detect new corners to top them up
        }
        else
            n_pts.clear();
        ROS_DEBUG("detect feature costs: %fms", t_t.toc());

        ROS_DEBUG("add feature begins");
        TicToc t_a;
        addPoints(); // add the newly detected features to forw_pts
        ROS_DEBUG("selectFeature costs: %fms", t_a.toc());
    }
    prev_img = cur_img;
    prev_pts = cur_pts;
    prev_un_pts = cur_un_pts;
    cur_img = forw_img;
    cur_pts = forw_pts; // hand the tracked points over to cur_pts
    undistortedPoints(); // undistortion and per-point velocity
    prev_time = cur_time;
}
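
readImage repeatedly calls reduceVector to drop the entries whose status flag is 0. For reference, this is the in-place compaction it performs (feature_tracker.cpp defines overloads for vector<cv::Point2f> and vector<int>):

void reduceVector(vector<cv::Point2f> &v, vector<uchar> status)
{
    int j = 0;
    for (int i = 0; i < int(v.size()); i++)
        if (status[i])     // keep only the successfully tracked entries
            v[j++] = v[i];
    v.resize(j);           // discard the tail
}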
Key points
  1. LK optical-flow tracking:

    cv::calcOpticalFlowPyrLK(cur_img, forw_img, cur_pts, forw_pts, status, err, cv::Size(21, 21), 3);

    • cur_img (cv::Mat): input, the previous frame
    • forw_img (cv::Mat): input, the frame in which the features are to be tracked
    • cur_pts (vector<cv::Point2f>): input, the feature points in the previous frame
    • forw_pts (vector<cv::Point2f>): output, the tracked feature points in the new frame
    • status (vector<uchar>): output status vector; an element is set to 1 if the flow for the corresponding feature was found (its new position was located), and 0 otherwise
  2. Rejecting outliers with the fundamental matrix: rejectWithF()

    void FeatureTracker::rejectWithF()
    {
        if (forw_pts.size() >= 8) // enough tracked feature points in the current frame
        {
            ROS_DEBUG("FM ransac begins");
            TicToc t_f;
            vector<cv::Point2f> un_cur_pts(cur_pts.size()), un_forw_pts(forw_pts.size());
            for (unsigned int i = 0; i < cur_pts.size(); i++) // iterate over all tracked feature points
            {
                Eigen::Vector3d tmp_p;
                // 1. undistort the features of the current and previous frames by back-projection,
                //    then re-project them with a fixed virtual focal length
                m_camera->liftProjective(Eigen::Vector2d(cur_pts[i].x, cur_pts[i].y), tmp_p);
                tmp_p.x() = FOCAL_LENGTH * tmp_p.x() / tmp_p.z() + COL / 2.0;
                tmp_p.y() = FOCAL_LENGTH * tmp_p.y() / tmp_p.z() + ROW / 2.0;
                un_cur_pts[i] = cv::Point2f(tmp_p.x(), tmp_p.y());

                m_camera->liftProjective(Eigen::Vector2d(forw_pts[i].x, forw_pts[i].y), tmp_p);
                tmp_p.x() = FOCAL_LENGTH * tmp_p.x() / tmp_p.z() + COL / 2.0;
                tmp_p.y() = FOCAL_LENGTH * tmp_p.y() / tmp_p.z() + ROW / 2.0;
                un_forw_pts[i] = cv::Point2f(tmp_p.x(), tmp_p.y());
            }

            vector<uchar> status;
            // 2. call cv::findFundamentalMat on un_cur_pts and un_forw_pts to estimate F with RANSAC
            cv::findFundamentalMat(un_cur_pts, un_forw_pts, cv::FM_RANSAC, F_THRESHOLD, 0.99, status);
            int size_a = cur_pts.size();
            // 3. remove the feature points flagged as outliers in status
            reduceVector(prev_pts, status);
            reduceVector(cur_pts, status);
            reduceVector(forw_pts, status);
            reduceVector(cur_un_pts, status);
            reduceVector(ids, status);
            reduceVector(track_cnt, status);
            ROS_DEBUG("FM ransac: %d -> %lu: %f", size_a, forw_pts.size(), 1.0 * forw_pts.size() / size_a);
            ROS_DEBUG("FM ransac costs: %fms", t_f.toc());
        }
    }

    cv::findFundamentalMat computes the fundamental matrix from point correspondences in two images. It can also output a mask (computed only for the RANSAC and LMedS methods) with the same length as the number of points, indicating whether each point is an outlier with respect to the estimated F. Note that the points are first undistorted via liftProjective and then re-projected with a fixed virtual focal length FOCAL_LENGTH (principal point at the image center), so that the RANSAC threshold F_THRESHOLD is expressed in consistent pixel units regardless of the actual camera model.

  3. Sorting the tracked points, keeping the most-tracked ones, and removing dense clusters: setMask()

    void FeatureTracker::setMask()
    {
        // for a fisheye lens, simply clone the fisheye mask; otherwise create a blank mask
        if (FISHEYE)
            mask = fisheye_mask.clone();
        else
            mask = cv::Mat(ROW, COL, CV_8UC1, cv::Scalar(255)); // the blank mask is all 255

        // prefer to keep feature points that have been tracked for a long time:
        // build a sequence of (cnt, pts, id) = (track count, current position, ID)
        vector<pair<int, pair<cv::Point2f, int>>> cnt_pts_id;

        for (unsigned int i = 0; i < forw_pts.size(); i++)
            cnt_pts_id.push_back(make_pair(track_cnt[i], make_pair(forw_pts[i], ids[i])));

        // sort the tracked points forw_pts by track count cnt in descending order (lambda expression)
        sort(cnt_pts_id.begin(), cnt_pts_id.end(), [](const pair<int, pair<cv::Point2f, int>> &a, const pair<int, pair<cv::Point2f, int>> &b)
        {
            return a.first > b.first; // a.first is the track count track_cnt
        });

        // clear cnt, pts, id and refill them
        forw_pts.clear();
        ids.clear();
        track_cnt.clear();

        for (auto &it : cnt_pts_id)
        {
            if (mask.at<uchar>(it.second.first) == 255) // a mask value of 255 means this position may still hold a feature point
            {
                // store the feature position pts, id, and track count cnt
                forw_pts.push_back(it.second.first);
                ids.push_back(it.second.second);
                track_cnt.push_back(it.first);

                // zero out a disk of radius MIN_DIST around the current feature in the mask, so no
                // further points are selected in that region (keeps the tracked points from clustering)
                cv::circle(mask, it.second.first, MIN_DIST, 0, -1);
            }
        }
    }

  4. Corner detection

    cv::goodFeaturesToTrack(forw_img, n_pts, n_max_cnt, 0.01, MIN_DIST, mask);

    • forw_img (cv::InputArray): input image
    • n_pts (vector<cv::Point2f>): output detected corners
    • n_max_cnt (int): maximum number of corners to detect
    • 0.01: quality-level coefficient
    • MIN_DIST: minimum allowed distance between feature points
    • mask: corners are only detected where the mask is nonzero, so the regions around already-tracked points are skipped
  5. Undistortion: undistortedPoints()

    void FeatureTracker::undistortedPoints()
    {
        cur_un_pts.clear();
        cur_un_pts_map.clear();
        //cv::undistortPoints(cur_pts, un_pts, K, cv::Mat());
        for (unsigned int i = 0; i < cur_pts.size(); i++)
        {
            Eigen::Vector2d a(cur_pts[i].x, cur_pts[i].y);
            Eigen::Vector3d b;
            m_camera->liftProjective(a, b); // lift the point from the image plane onto the normalized plane, removing distortion
            cur_un_pts.push_back(cv::Point2f(b.x() / b.z(), b.y() / b.z()));
            cur_un_pts_map.insert(make_pair(ids[i], cv::Point2f(b.x() / b.z(), b.y() / b.z())));
            //printf("cur pts id %d %f %f", ids[i], cur_un_pts[i].x, cur_un_pts[i].y);
        }
        // pts_velocity: compute each point's velocity on the normalized plane
        if (!prev_un_pts_map.empty())
        {
            double dt = cur_time - prev_time;
            pts_velocity.clear();
            for (unsigned int i = 0; i < cur_un_pts.size(); i++)
            {
                if (ids[i] != -1)
                {
                    std::map<int, cv::Point2f>::iterator it;
                    it = prev_un_pts_map.find(ids[i]);
                    if (it != prev_un_pts_map.end())
                    {
                        double v_x = (cur_un_pts[i].x - it->second.x) / dt;
                        double v_y = (cur_un_pts[i].y - it->second.y) / dt;
                        pts_velocity.push_back(cv::Point2f(v_x, v_y));
                    }
                    else
                        pts_velocity.push_back(cv::Point2f(0, 0));
                }
                else
                {
                    pts_velocity.push_back(cv::Point2f(0, 0));
                }
            }
        }
        else
        {
            for (unsigned int i = 0; i < cur_pts.size(); i++)
            {
                pts_velocity.push_back(cv::Point2f(0, 0));
            }
        }
        prev_un_pts_map = cur_un_pts_map;
    }

    liftProjective() converts pixel coordinates into undistorted normalized coordinates:

    Eigen::Vector2d a(cur_pts[i].x, cur_pts[i].y);
    Eigen::Vector3d b;
    m_camera->liftProjective(a, b);
    Reference: https://blog.csdn.net/hltt3838/article/details/119428558

    This amounts to mapping a point from the image coordinate system, using the intrinsics and distortion coefficients, onto the normalized camera projection plane. Because of the form of the distortion equations, the undistortion has to be solved iteratively (see the sketch after this list). OpenCV's cv::undistortPoints implements the same functionality.

  6. Publishing

Three topics are published:

feature_img: the image with the tracked feature points drawn on it, for display in rviz

feature: the tracked feature information

restart: the reset signal published when tracking fails (e.g. the image stream is discontinuous)
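
As promised in point 5, here is a minimal sketch of the iterative undistortion idea behind liftProjective. This is an illustration for a plain pinhole model with only radial distortion coefficients k1, k2, not the camodocal code that VINS-Mono actually calls (which handles more distortion terms and other camera models):

// Fixed-point iteration for undistortion: given distorted normalized
// coordinates (xd, yd), find (x, y) such that distort(x, y) = (xd, yd).
// The distortion model here is d(r^2) = 1 + k1*r^2 + k2*r^4.
void undistortNormalized(double xd, double yd, double k1, double k2,
                         double &x, double &y)
{
    x = xd;
    y = yd; // initial guess: the distorted point itself
    for (int iter = 0; iter < 8; iter++)
    {
        double r2 = x * x + y * y;
        double d = 1.0 + k1 * r2 + k2 * r2 * r2; // radial distortion factor at the current guess
        x = xd / d;
        y = yd / d;
    }
}

Starting from pixel coordinates (u, v), one would first apply the inverse intrinsics, e.g. xd = (u - cx) / fx and yd = (v - cy) / fy, before iterating; this matches the image-plane-to-normalized-plane direction of liftProjective.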

