Article

2D SLAM Algorithms Characterization, Calibration, and Comparison Considering Pose Error, Map Accuracy, CPU Usage, and Memory Usage

Kevin Trejos 1, Laura Rincón 1, Miguel Bolaños 1, José Fallas 1, and Leonardo Marín 1

1 Control Engineering Research Laboratory (CERLab), Electrical Engineering School, Engineering Faculty, University of Costa Rica (UCR). San Pedro, San José, Costa Rica, 11501-2060 UCR.
* Correspondence: leonardo.marin@ucr.ac.cr, +506 2511 2643; kevin.trejosvargas@ucr.ac.cr, Tel.: +506 8584 4876
† The characterization, calibration, and comparison algorithms and metrics were developed by Kevin Trejos, under the supervision of Leonardo Marín. Nevertheless, the test execution and the comparison of the algorithms were distributed equally.

Abstract: This work proposes a method to characterize, calibrate, and compare any 2D SLAM algorithm, providing strong statistical evidence based on descriptive and inferential statistics to establish confidence levels for the overall behavior of the algorithms and for their comparison. The method is applied to the Cartographer, Gmapping, HECTOR-SLAM, KARTO-SLAM, and RTAB-Map algorithms using four metrics: pose error, map accuracy, CPU usage, and memory usage. Plackett-Burman and full factorial experiments were used to characterize and calibrate the algorithms, and the improvement obtained after characterization and calibration was verified through hypothesis tests supported by the central limit theorem.

Keywords: 2D SLAM; SLAM calibration; ROS; GAZEBO; Cartographer; Gmapping; HECTOR-SLAM; KARTO-SLAM; RTAB-Map; APE; Knn-Search; Plackett-Burman.

1. Introduction

SLAM algorithms are complex methods that allow a robot, using no external system other than its own sensors, to create a map of the environment and locate itself within this map. There is a large number of non-linearities and imperfections in a mobile robot system (e.g., robot drift, sensor noise, irregular environments) that can lead a SLAM algorithm to a poor representation of the environment, to losing track of its position within that representation, or to spending a considerable amount of computational resources [1,2]. Since these are the main difficulties a robot running a SLAM algorithm must overcome, this work focuses on characterizing, calibrating, and comparing five different 2D SLAM algorithms with respect to creating a good map and keeping good track of the robot pose (position and orientation), while spending as little CPU and memory as possible.
For more than two decades, SLAM has been in the spotlight of many robotics researchers due to its many possible applications, such as autonomous driving [3,4], search and rescue [5], autonomous underwater vehicles [6,7], and collaborative robotics [8], which is why there are nowadays many different approaches trying to solve the same problem [9]. The most frequent SLAM approaches are described below.

A first approach to solve the SLAM problem was based on the extended Kalman filter (EKF) [1]. Kalman filters [10] are based on the implementation of observers, which are mathematical models of the linearized system that help estimate the behavior of the real system, and on the use of an optimal state estimator that considers white noise in the measurements of the system [11]. For the SLAM problem, the EKF first predicts the robot state (pose and a map represented by a series of landmarks or features) [1,12] using a mathematical model of the robot movement and the environment, and then uses the sensor data to correct the prediction. The sensor normally used in this approach is a LiDAR, but there are solutions using sonars or monocular cameras [3].

Another strategy for the SLAM solution uses particle filters. This is a modern approach, although its conceptual basis is not, since its origins date back to approximately 1949 with the Monte Carlo method [13]. The main methodological difference with respect to Kalman filters is the type of data distribution the method can handle. Kalman filters are intended to deal with linear Gaussian distributions [10,14], while particle filters can deal with arbitrary non-Gaussian distributions and non-linear process models [1]. Particle filters in SLAM use a set of particles, each being a concrete guess of the robot state (pose and map) [9]. As the robot moves through the environment and uses information from the sensors, the filter removes erroneous particles (those with low probability of occurrence) and adds new particles close to those with the best probability of occurrence [15]. After a certain time, the erroneous particles will have been eliminated, while the correct ones will be similar to each other (similar pose and map estimates) [16]. The sensor normally used in this approach is a LiDAR [9]. The algorithms Gmapping [17] and HECTOR-SLAM [18] are modern examples of the particle filter SLAM solution.

A more recent approach considers graph-based methodologies. It proposes to use a graph [3] whose nodes correspond to the robot's poses at different points in time and whose edges represent restrictions between the poses. The graph is obtained from observations of the environment or movement actions conducted by the robot. Once this graph is assembled, the map can be calculated by finding the spatial configuration of the nodes that is most consistent with the measurements modeled by the edges [19,20]. This solution is usually obtained with standard optimization methods (e.g., Gauss-Newton, Levenberg-Marquardt, etc.)
[19] or with nonlinear sparse optimization [9]. The sensor normally used in this approach is a LiDAR, but there are solutions using time-of-flight cameras (also known as RGB-Depth cameras) such as Microsoft's Kinect [9]. The algorithms Cartographer [21], KARTO-SLAM [22] and the original RTAB-Map [23] are modern examples of the graph-based SLAM solution.

There are also modern methods that can be used for 3D SLAM and that can use different sensor types, such as Visual SLAM (vSLAM), which uses low-cost cameras (e.g., monocular, stereo, and RGB-Depth cameras) to capture the environment data as the robot navigates, and then extracts the relevant information to solve the SLAM problem using the EKF, the particle filter, or the graph-based approach [24]. For example, the latest version of the RTAB-Map SLAM algorithm also supports visual SLAM [23]. There is also Visual-Inertial simultaneous localization and mapping (VI-SLAM), which fuses the information obtained from the camera with data obtained from an Inertial Measurement Unit (IMU), such as the orientation and the change in pose, to improve the accuracy of the SLAM solution, obtained using a filter or an optimization approach [25,26]. Additionally, direct 3D SLAM methods exist that use more modern 3D LiDAR systems, which are applied to improve the SLAM algorithm performance in challenging environments (e.g., smoke in the surroundings, fog, or rainy situations) [27,28]. Finally, there are methods that combine the vision and LiDAR approaches in order to improve the SLAM performance in cases of aggressive motion, lack of light, or lack of visual features. These algorithms employ 2D or 3D LiDAR sensors and the EKF or the graph-based methodologies to obtain the SLAM solution [27].

In this paper we focus on the comparison of 2D SLAM algorithms with similar pose and map representations. These are based on the previously described SLAM solution approaches, but with different capabilities and strategies to obtain the best possible map and pose adjustment, or better resource usage. These capabilities are important when dealing with different situations: robots with limited resources might require the algorithm with the best resource usage, while robots dealing with complex environments might be better served by an algorithm that has deeply optimized the pose and map calculations. In the subsequent sections the selected algorithms are described in more detail.

In this work, four metrics are used for the comparison of the 2D SLAM algorithms; they were computed and processed in MATLAB and are explained in the following paragraphs.

The map accuracy was measured using the k-nearest neighbor method [29], by measuring the Euclidean distance from each of the ground truth points to the nearest map point generated by the SLAM algorithm under test. A mathematical representation of the metric can be found in equation (1), where N is the number of points sampled, and x_2 − x_1 and y_2 − y_1 represent the x-coordinate and y-coordinate differences between the ground truth point and the nearest map point generated by the algorithm, respectively. The measurement units used for this metric are centimeters.
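A minimal sketch of this computation is shown below; the paper's actual processing was done in MATLAB, so the use of Python/scipy here, the array names, and the synthetic data are illustrative assumptions, not the original script.

```python
import numpy as np
from scipy.spatial import cKDTree

def map_accuracy(ground_truth_xy, slam_map_xy):
    """Mean Euclidean distance from every ground-truth point to its nearest
    neighbor in the SLAM-generated map (k-nearest neighbor with k = 1)."""
    tree = cKDTree(slam_map_xy)                       # index the SLAM map points
    distances, _ = tree.query(ground_truth_xy, k=1)   # nearest-neighbor distances
    return distances.mean()

# Example with synthetic N x 2 arrays of occupied-cell coordinates
gt = np.random.rand(1000, 2) * 10.0
slam = gt + np.random.normal(scale=0.05, size=gt.shape)   # noisy reconstruction
print(f"Map accuracy: {map_accuracy(gt, slam):.4f}")
```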
Pose tracking accuracy was computed by a set of iterative loops calculating the Euclidean distance between the ground-truth pose and the estimated pose [30]. It can also be represented by equation (1), but with a modified interpretation of the variables: for this metric, N is the number of poses sampled, and x_2 − x_1 and y_2 − y_1 represent the x-coordinate and y-coordinate differences between the ground truth pose and the estimated pose generated by the algorithm, respectively. The measurement units used for this metric are meters.

d_E = \frac{1}{N} \sum_{i=1}^{N} \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}    (1)

Finally, CPU and memory usage were recorded using the Python psutil library [31]. Both metrics are obtained by averaging all the measurements taken during the test run; their units are percentage for CPU usage, where a value beyond 100% means the algorithm is using more than a single core, and MB for memory usage.
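A minimal sketch of such a psutil-based monitor is shown below; the process-matching logic, sampling period, and function names are assumptions rather than the exact node used in this work.

```python
import time
import psutil

def monitor(process_name, period_s=0.5, duration_s=60.0):
    """Average CPU (%) and memory (MB) of the processes whose name contains
    process_name, sampled every period_s seconds for duration_s seconds."""
    procs = [p for p in psutil.process_iter(["name"])
             if process_name in (p.info["name"] or "")]
    if not procs:
        raise RuntimeError("no matching process found")
    cpu, mem = [], []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        # cpu_percent measures usage since the previous call; the first sample is 0.0
        cpu.append(sum(p.cpu_percent(interval=None) for p in procs))
        mem.append(sum(p.memory_info().rss for p in procs) / 1e6)
        time.sleep(period_s)
    return sum(cpu) / len(cpu), sum(mem) / len(mem)

# e.g. average CPU and memory while a trial runs (hypothetical process name):
# cpu_avg, mem_avg = monitor("slam_gmapping", duration_s=300)
```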
Lastly, there are several previous SLAM comparison studies, such as [32], which focuses on the algorithms' processing time; [33], which evaluates map accuracy and CPU usage; [20], which evaluates map accuracy and CPU and memory usage; [34], which only measures pose and map accuracy; and [35], which analyzes map accuracy and CPU usage.

Based on the reviewed works, there are two factors that differentiate the method proposed in this paper:
1. The existing works focus on map accuracy, pose accuracy, memory usage, or CPU usage, but none of them considers all of these together. Our investigation considers all of them, giving a wider point of view to better characterize, calibrate, and compare the SLAM algorithms.
2. None of the current methods takes a statistical approach that provides confidence levels on the results obtained. With our method we can guarantee, with 90% confidence, each comparison conclusion when the populations are considered, and with 95% confidence that the characterization and calibration of the parameters is the best fit for the ranges tested.

2. Materials and Methods

2.1. Generalities

Since the trials and algorithms were simulated, the only equipment needed for these experiments was a computer running Ubuntu 18 with ROS Melodic; the computer was a server with an Intel Xeon Silver 4114 at 2.2 GHz. The GAZEBO 11.0.0 release was used to simulate the test environment, and the TurtleBot 3 Burger was the robot selected to be simulated in this work because of its 2D LiDAR sensor and its differential drive, although other configurations can be used, such as a mecanum omnidirectional robot [36].

2.2. Simulation needs

Regarding the ROS nodes, some nodes were tailored to our needs, besides the simulated robot, which can be easily implemented based on the TurtleBot 3 wiki [37]. The first of them is the so-called Robot Pose Publisher, which reads the data published by GAZEBO and stores the actual pose of the robot at a convenient rate (20 times per second in this case) [38]. The second is a node that monitors the CPU and memory usage of the SLAM algorithm [39], and the last is a node that makes the robot follow a fixed path, to guarantee that all the samples were collected under the same conditions [40].
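A condensed sketch of a ground-truth pose recorder of this kind is shown below (Python/rospy); the Gazebo model name, the CSV output, and the class layout are assumptions and do not reproduce the exact node from [38].

```python
#!/usr/bin/env python
import csv
import rospy
from gazebo_msgs.msg import ModelStates

MODEL_NAME = "turtlebot3_burger"   # assumed Gazebo model name

class GroundTruthRecorder(object):
    """Stores the simulated robot pose published by Gazebo at a fixed rate."""
    def __init__(self, out_path="ground_truth_pose.csv", rate_hz=20.0):
        self._latest = None
        self._file = open(out_path, "w")
        self._writer = csv.writer(self._file)
        self._writer.writerow(["stamp", "x", "y"])
        rospy.Subscriber("/gazebo/model_states", ModelStates, self._callback)
        rospy.Timer(rospy.Duration(1.0 / rate_hz), self._record)  # 20 times per second

    def _callback(self, msg):
        if MODEL_NAME in msg.name:
            self._latest = msg.pose[msg.name.index(MODEL_NAME)]

    def _record(self, _event):
        if self._latest is not None:
            p = self._latest.position
            self._writer.writerow([rospy.get_time(), p.x, p.y])

if __name__ == "__main__":
    rospy.init_node("robot_pose_recorder")
    recorder = GroundTruthRecorder()
    rospy.spin()
```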
2.3. Data processing needs

Next, MATLAB 2020B was used to convert the data provided through rosbags into a form that can be easily analyzed and synthesized. The scripts used are: a Ground Truth Generator, which takes the environment created through GAZEBO and builds a high-resolution 2D version of it [41]; this ground truth is then compared to the SLAM algorithm result by a script that takes advantage of the knn-search method provided by MATLAB [42], whose output is the descriptive statistics of the whole comparison. There are two other important scripts: one compares the real pose against the estimated pose of the robot and returns meaningful descriptive statistics about the comparison [43], and the other analyzes the CPU and memory usage of the algorithm [44].

2.4. Data analysis needs

The data analysis software used to provide sufficient statistical evidence for the results was the Minitab statistical tool, version 2018.

2.5. SLAM Algorithms used

Five algorithms are used, all of them as 2D algorithms because of the intended robot sensor limitation; they are described in the following subsections.

2.5.1. Cartographer

Cartographer was created by Google and released for free worldwide access in October 2016 [21]. The main idea of this algorithm was to improve efficiency by optimizing the way the data from particle filters is processed. So, instead of creating one big map, it divides the map into shorter sub-maps, which are then inserted along the way, together with a pose optimization, resulting in a reduction of the error carried over from the robot pose [45].

The algorithm is based on the combination of two separate 2D SLAM processes, one working locally and the other globally, both using a LiDAR sensor and optimized independently. The local SLAM is based on the collection and creation of sub-maps, that is, the collection and alignment of multiple scans with respect to an initial position. Sub-maps are created as a grid of points with a specific resolution, each point having an associated probability of being blocked. This probability depends on whether the point was measured previously, and it is kept while more sub-maps are created. Once a sub-map is created, it is passed through an algorithm that finds the optimal position to match it with the rest of the sub-maps, and then extrapolates the rest of them [46].

The second part of the algorithm, the global SLAM, is focused on the sub-maps feedback. Once the sub-maps are created, all of them have associated robot poses, which are used to improve the maps, reducing the accumulated SLAM error. This is known as loop closure [46].

Using the well-known optimization called Sparse Pose Adjustment (SPA), every time a sub-map is generated, a map-scanner is executed to close the loop and insert the newly created sub-map into the graph. The following formulas determine whether a cell is stored as occupied or empty in the map [47]:

M_{new}(cell) = P^{-1}\big(P(M_{last}(cell)) \cdot P(p_{hit})\big)

Where:
• M_{last}(cell) is the error likelihood.
• p_{hit} is the probability that a map cell is occupied.
• P(p) = \frac{p}{1 - p}

The intention is to minimize the cost functional of updating the cell values that compose the map:

\arg\min_{\xi} \sum_{k=1}^{K} \big(1 - M_{smoothed}(T_{\xi} h_k)\big)^2    (2)

Where:
• M_{smoothed}(x) is the value of cell x, smoothed by the neighbor values.
• h_k is the laser reading related to the cell.
• T_{\xi} is the transformation matrix that displaces the point h_k to ξ.
• ξ is the pose vector (ξ_x, ξ_y, ξ_θ).
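A minimal numeric sketch of this cell update rule, written with the odds notation defined above, is shown below; the clamping bounds and function names are assumptions, not values taken from the Cartographer implementation.

```python
def odds(p):
    """P(p) = p / (1 - p)."""
    return p / (1.0 - p)

def inv_odds(o):
    """Inverse of the odds function: P^-1(o) = o / (1 + o)."""
    return o / (1.0 + o)

def update_cell(m_last, p_hit, p_min=0.12, p_max=0.97):
    """M_new(cell) = P^-1( P(M_last(cell)) * P(p_hit) ), clamped to [p_min, p_max]."""
    m_new = inv_odds(odds(m_last) * odds(p_hit))
    return min(max(m_new, p_min), p_max)

# A cell currently at 0.5 observed as a hit with p_hit = 0.55 moves towards "occupied":
print(update_cell(0.5, 0.55))   # -> 0.55
```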
This model is configured through different parameters of the algorithm. Table 1 lists the main parameters that affect the functionality of the algorithm [48].

Table 1. List of Cartographer parameters.
Parameter | Description | Range | Default value
local_slam_pose_translation_weight | Weight for translation between consecutive nodes, based on the local SLAM | 10e2 - 10e6 | 10e5
local_slam_pose_rotation_weight | Weight for rotation between consecutive nodes, based on the local SLAM | 10e2 - 10e6 | 10e5
odometry_translation_weight | Weight for translation between consecutive nodes, based on the odometry | 10e2 - 10e6 | 10e5
ceres_scan_matcher.translation_weight | Weight applied to the translation for the next-submap joint | 0.1 - 1.0 | 0.4
ceres_scan_matcher.rotation_weight | Weight applied to the rotation for the next-submap joint | 0.1 - 1.0 | 0.3
optimize_every_n_nodes | Number of inserted nodes used for loop closure optimization | 40 - 120 | 90
global_sampling_ratio | Sampling frequency for the nodes trajectory | 0.0001 - 0.0005 | 0.0003
submaps.resolution | Map resolution in meters | 0.0001 - 0.0005 | 0.0003
constraint_builder.min_score | Minimum value above which a match is considered to be found | 0.4 - 0.8 | 0.6

2.5.2. Gmapping

This algorithm is based on the principles of the particle filter with Rao-Blackwellization, which computes the current posture of the robot from the probability given by the information collected in the past, with the help of this posture and the maps previously built. It also has the capability of correcting the estimations through the odometry and the calculation of the weights and the map [17].

This is one of the most studied types of SLAM algorithms; it came after many years of research on particle filters, using the Rao-Blackwellized particle filter approach [49] to solve the SLAM problem more efficiently by reducing the number of particles required for the estimation [49]. Also, the robot pose uncertainty is greatly decreased in this algorithm. However, it has a higher computational resource requirement, as it usually has an elevated processing time and memory consumption when compared to the EKF approach.

The main parameters responsible for the functionality of the algorithm are listed in Table 2, according to [50].

Table 2. List of Gmapping parameters.
Parameter | Description | Range | Default value
minimumScore | Minimum score for considering the outcome of the scan matching good | 0 - 50 | 50
iterations | The number of iterations of the scanmatcher | 5 - 10 | 5
lsigma | The sigma of a beam used for likelihood computation | 0.075 - 1.500 | 0.075
ogain | Gain used while evaluating the likelihood, for smoothing the resampling effects | 3.0 - 10.0 | 3.0
resampleThreshold | The Neff-based resampling threshold | 0.0 - 0.5 | 0.5
particles | Number of particles in the filter | 30 - 100 | 30

2.5.3. HECTOR-SLAM

This algorithm is named after its development team, the Heterogeneous Cooperating Team Of Robots, and, as explained in [18], it was developed to address the need for an algorithm for Urban Search and Rescue (USAR) scenarios.

HECTOR-SLAM was developed as a 2D SLAM using a LiDAR sensor with an attached IMU; this sensor provides the measurements for the navigation filter and also gives the capability to perform 3D mapping. This is the reason why HECTOR-SLAM can be used in either 2D or 3D strategies.

As shown in [18], the algorithm uses an occupancy grid map. Since the LiDAR has 6 degrees of freedom, the scanned points must be transformed to a local coordinate frame using the estimated attitude of the LiDAR. Hence, using the estimated pose, the scanned points are converted into a point cloud. With this point cloud a pre-processing of the data is performed: HECTOR-SLAM filters the end points along the z axis, so that only the end points in the (x, y) plane are considered.

The HECTOR-SLAM parameters are defined in Table 3; they were taken from [51].

Table 3. List of HECTOR-SLAM parameters.
Parameter | Description | Range | Default value
update_factor_free | The map update modifier for updates of free cells in the range. A value of 0.5 means no changes | 0.0 - 1.0 | 0.4
update_factor_occupied | The map update modifier for updates of occupied cells in the range. A value of 0.5 means no changes | 0.0 - 1.0 | 0.9
map_update_distance_thresh | Threshold for performing map updates (value in meters) | 0.01 - 2 | 0.4
map_update_angle_thresh | Threshold for performing map updates (value in radians) | 0.01 - 2 | 0.9
map_pub_period | Map publish period (value in seconds) | 1.00 - 5.00 | 2.00

2.5.4. KARTO-SLAM

KARTO-SLAM is an optimized SLAM algorithm developed by SRI International's Karto Robotics, with a ROS extension, as open source code. Its working basis lies in the decomposition of Cholesky matrices to minimize the error, giving an optimized robot pose and trajectory [22].

KARTO-SLAM builds the map by using nodes that save the location points of the robot trajectory and the dataset of sensor measurements. Graph edges are represented by transformations or trajectories between two consecutive poses in space. When a new node is added, the map is reprocessed and updated according to the edge restrictions in space. These restrictions are linearized as a sparse graph [52,53].

A loop closure condition occurs when the robot revisits the same point twice or more times in the same run; in other words, an edge that connects two nodes with the same world perception is created. Aligning these perceptions produces a virtual transformation.
Based on this information it is determined whether the algorithm can adjust its estimations and represent the environment with a good enough confidence level [54].

An optimization is used to calculate the most likely pose from the nodes collected, to obtain the most probable graph. To use the optimization methods, it is necessary to define an error function between the measurements obtained. Assuming x = (x_1, x_2, ..., x_T)^T is the vector of nodes in the graph, and z_{i,j} the odometry between nodes x_i and x_j, an edge \hat{z}_{i,j} is produced, with an error expression that meets equation (3):

e_{i,j}(x_i, x_j) = \hat{z}_{i,j} - z_{i,j}    (3)

Together with the inverse covariance matrix ω_{i,j}, an error function is established, given by equation (4):

F(x_{1,T}) = \sum_{\langle i,j \rangle \in G} (\hat{z}_{i,j} - z_{i,j})^T \, \omega_{i,j} \, (\hat{z}_{i,j} - z_{i,j})    (4)

The goal is to compute a posture x such that equation (4) reaches its minimum, i.e., such that equation (5) is satisfied:

x_{1,T} = \arg\min_{x} F(x)    (5)
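A compact sketch of the error and cost evaluation in equations (3) and (4) for a toy translation-only graph is shown below; the data, the plain-NumPy formulation, and the function names are illustrative assumptions (a real solver would then minimize this cost, as stated in equation (5)).

```python
import numpy as np

def graph_cost(x, edges):
    """F(x) from equation (4): sum over graph edges of e^T * omega * e,
    where e = z_hat(i,j) - z(i,j), equation (3), for a translation-only toy model."""
    total = 0.0
    for i, j, z_ij, omega in edges:
        z_hat = x[j] - x[i]          # relative pose implied by the current node estimates
        e = z_hat - z_ij             # equation (3)
        total += e @ omega @ e       # one summand of equation (4)
    return total

# Toy example: three poses on a line, one odometry edge slightly inconsistent
x = np.array([[0.0, 0.0], [1.0, 0.0], [2.1, 0.0]])
edges = [(0, 1, np.array([1.0, 0.0]), np.eye(2)),
         (1, 2, np.array([1.0, 0.0]), np.eye(2))]
print(graph_cost(x, edges))   # an optimizer, as in equation (5), would minimize this value
```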
At this point it is necessary to describe the algorithm parameters; these are shown in Table 4 and were taken from [55].

Table 4. List of KARTO-SLAM parameters.
Parameter | Description | Range | Default value
scan_buffer_size | Sets the length of the scan chain stored for scan matching | 30 - 100 | 70
link_match_minimum_response_fine | Scans are linked only if the correlation response value is greater than this value | 0.06 - 0.18 | 0.12
loop_match_minimum_chain_size | When the loop closure detection finds a candidate, it must be part of a large set of linked scans. If the chain of scans is shorter than this value, it will not attempt to close the loop | 5 - 15 | 10
loop_match_maximum_variance_coarse | The covariance values for a possible loop closure have to be less than this value to consider a viable solution. This applies to the coarse search | 0.3 - 0.5 | 0.4
loop_match_minimum_response_coarse | If the response is larger than this, then initiate the loop closure search at the coarse resolution | 0.75 - 0.85 | 0.80
loop_match_minimum_response_fine | If the response is larger than this, then initiate the loop closure search at the fine resolution | 0.75 - 0.85 | 0.80
correlation_search_space_dimension | Sets the size of the search grid used by the matcher | 0.2 - 0.4 | 0.3
correlation_search_space_smear_deviation | The point readings are smeared by this value in X and Y to create a smoother response | 0.03 - 0.04 | 0.03
loop_search_space_dimension | The size of the search grid used by the matcher | 7.0 - 9.0 | 8.0
loop_search_space_smear_deviation | The point readings are smeared by this value in X and Y to create a smoother response | 0.03 - 0.04 | 0.03
distance_variance_penalty | Variance of the penalty for deviating from odometry when scan-matching | 0.2 - 0.4 | 0.3
angle_variance_penalty | Variance of the penalty for deviating from odometry when scan-matching | 0.249 - 0.449 | 0.349
fine_search_angle_offset | The range of angles to search during a fine search | 0.00249 - 0.00449 | 0.00349
coarse_search_angle_offset | The range of angles to search during a coarse search | 0.249 - 0.449 | 0.349
coarse_angle_resolution | Resolution of angles to search during a coarse search | 0.0249 - 0.0449 | 0.0349
minimum_angle_penalty | Minimum value of the angle penalty multiplier so scores do not become too small | 0.85 - 0.95 | 0.90
minimum_distance_penalty | Minimum value of the distance penalty multiplier so scores do not become too small | 0.4 - 0.6 | 0.5
use_response_expansion | Whether to increase the search space if no good matches are initially found | True/False | False

2.5.5. RTAB-Map

RTAB-Map stands for Real-Time Appearance-Based Mapping; it is a graph-based SLAM algorithm composed of a C++ library and a ROS package. The library is open source and has been improved and extended since its beginning, in such a way that the loop closure algorithm implements a memory management strategy [23].

Its processing requires distributed storage systems, namely short-term memory, working memory, and long-term memory. Together, these optimize the localization and mapping for long periods or in wide spaces, because they limit the size of the space processed, so that the loop closure can be executed in a short time lapse [56,57].

The RTAB-Map implementation is based on simultaneous processing. For graph-based SLAM, as the map grows, the processing, optimization, assembly, and CPU load also grow. For this reason, RTAB-Map establishes a maximum response time for the SLAM output once it has received the sensor data [23,58]. As the latest version of the algorithm admits 2D and 3D LiDAR sensors and is capable of performing visual SLAM, the RTAB-Map 2D LiDAR-based SLAM option [23] was used for the tests performed in this work.

The list of RTAB-Map parameters is shown in Table 5; they were taken from [59].

Table 5. List of RTAB-Map parameters.
Parameter | Description | Range | Default value
TimeThr | Maximum time to update a map, in milliseconds, with 0 as unlimited time | 0.0 - 1.0 | 0
MemoryThr | Maximum number of nodes in the working memory, where 0 means an unlimited number of nodes | 0.0 - 1.0 | 0
DetectionRate | Detection rate of the images that RTAB-Map filters, value in Hz | 0.1 - 10.0 | 1.0
ImageBufferSize | Buffer size to save the data waiting for processing | 0 - 10 | 1
MaxRetrieved | Maximum number of localizations returned at a time by the long-term memory | 0 - 20 | 2
CreateIntermediateNodes | Whether or not to create intermediate nodes for the loop closure | true/false | false
LoopThr | Time threshold to execute loop closure | 0.0 - 1.0 | 0.11
VarianceIgnored | Whether or not to ignore the variance of the restrictions | true/false | false
FilteringStrategy | Filtering applied to the odometry data; two different strategies can be used: 0 means no filter, 1 means Kalman filter | 0/1 | 0

2.6. Arenas used

Three different arenas, simulated through GAZEBO, were created to test the SLAM algorithms. The differences between them are mainly the number of irregularities per area that they contain and the kind of path that they force the robot to follow.

2.6.1. Common Environments Arena

This arena simulates an apartment with a set of rooms and regular-geometry objects in it; every single place has a fairly large number of irregularities, so the robot can easily handle the SLAM task; see Figure 1 for reference.

Figure 1. Common environments arena.

2.6.2. Training Arena

This arena is used for the algorithms' characterization and calibration, but also for the comparison trials. It is shown in Figure 2. It can be considered a middle point between the Common Environments Arena and the Labyrinth Arena, since it has regular figures, as the Common Environments Arena does, but also long corridors around the zero coordinate of the arena, as the Labyrinth Arena does. These are the reasons why it is used for the characterization and calibration of the algorithms.

Figure 2. Training arena.

The arena tries to challenge the algorithms with some general asymmetry and with angled obstacles, to see how well each algorithm deals with this kind of obstacle. The arena itself is nothing but a corridor with a center room containing a single obstacle; however, something that can slip past is that its number of irregularities per area is a bit lower than in the Common Environments Arena, but higher than in the Labyrinth Arena.

This arena is also considered for the comparison trials, to reflect how good the characterization and calibration was.

2.6.3. Labyrinth Arena

This arena is the hardest of the three for the SLAM algorithms. At a glance it shows a labyrinth that is easy to follow; however, it is a very difficult environment for any SLAM algorithm to map, as it challenges the algorithms with more complex obstacles and with long corridors, without any irregularity that could help the algorithms easily locate themselves and recreate the environment map. These two reasons make this arena the hardest for the tests performed in the algorithm comparison. For reference see Figure 3.

Figure 3. Labyrinth arena.

2.7. Trajectories used

Fifteen trajectories were used: six for the Common Environments Arena, six for the Training Arena, and three for the Labyrinth Arena.
The main objective of the trajectories is to make the robot traverse the arenas in diverse ways: first starting from the zero coordinate (the geometric center of the arenas), then starting from a non-zero coordinate, and finally following the trajectory twice starting from the zero coordinate. All three trajectories are followed in the forward direction and then in reverse, except for the Labyrinth Arena, in which the reverse trajectories are the same as the forward ones, so only three trajectories were used in this arena. The match between observations and scenarios is shown in Table 6.

All these trajectories are shown over a simplified version of each arena in Figures 4a, 4b, and 4c for the Training Arena, Figures 5a, 5b, and 5c for the Common Environments Arena, and Figures 6a and 6b for the Labyrinth Arena. In these figures, the yellow arrow indicates the starting point and direction of the forward trajectory, and the tip of the red arrow indicates the finishing point of the path.

Figure 4. Training arena trajectories: (a) zero coordinate, (b) non-zero coordinate, (c) two laps.
Figure 5. Common environments arena trajectories: (a) zero coordinate, (b) non-zero coordinate, (c) two laps.
Figure 6. Labyrinth arena trajectories: (a) zero coordinate, (b) non-zero coordinate.

Table 6. Observation translation to scenarios.
Observation | Arena | Trajectory | Direction
1 | Training arena | Zero coordinate | Forward
2 | Training arena | Zero coordinate | Reverse
3 | Training arena | Non-zero coordinate | Forward
4 | Training arena | Non-zero coordinate | Reverse
5 | Training arena | Two laps | Forward
6 | Training arena | Two laps | Reverse
7 | Common environments arena | Zero coordinate | Forward
8 | Common environments arena | Zero coordinate | Reverse
9 | Common environments arena | Non-zero coordinate | Forward
10 | Common environments arena | Non-zero coordinate | Reverse
11 | Common environments arena | Two laps | Forward
12 | Common environments arena | Two laps | Reverse
13 | Labyrinth arena | Zero coordinate | Not applicable
14 | Labyrinth arena | Non-zero coordinate | Not applicable
15 | Labyrinth arena | Two laps | Not applicable

2.8. Characterization and Calibration methods used

To characterize each of the algorithms, a statistical approach was taken. It is not sensitive to the type of SLAM algorithm or sensors used, only to the data provided by each of the metrics for the trials, so even SLAM approaches using sensors other than LiDAR can be calibrated following this method, as long as the map representation is compatible with the application of the knn-search metric and the robot pose is obtained in matching measurement units. However, a 2D LiDAR sensor approach was taken to match the analysis with the actual equipment available, and because this is the most common sensor for the 2D SLAM approach, especially applicable to low-cost robotic platforms.

The methodology focuses on finding statistical evidence of the effects of the algorithm parameters on the output means of pose accuracy, map accuracy, CPU usage, and memory usage. It is important to highlight that this paper uses mean measurements for characterization and calibration, but other descriptive statistics can be used if desired.

There are three different stages for calibration; these are described below.
The first stage is only used when the algorithm has a large number of parameters to be tuned. Here the first statistical tool comes into play: a Plackett-Burman experiment, which is a kind of Design of Experiments with a reduced number of samples, but with the weakness that it only takes main effects into account, since main effects are aliased with 2-way interactions (only the effect of each variable by itself can be obtained). With this tool it can be ensured, with a confidence level defined when analyzing the experiment results, that a variable has an effect on an output.

Next, in the second stage, the calibration comes into play. This part of the process considers only the variables that demonstrated, by themselves, an effect on at least one of the four outputs being measured. A full factorial Design of Experiments is used: it combines all the parameters within the ranges defined by the user and returns a Pareto chart and an equation; with both of these we can determine the combination that best reduces the localization and mapping error, or that best reduces the resource usage, again with a confidence level defined when analyzing the experiment results.

Finally, to compare the algorithms, since the data obtained from each run does not necessarily follow a Gaussian curve, the central limit theorem is used. The population is considered to be all the different tests that can be performed on these arenas with this robot and with each algorithm, so that, by calculating the mean of the means, we can compare this value against the values obtained from the other algorithms (for a full-data comparison), using the statistical tools available for Gaussian-behaving samples, in this case hypothesis tests for the mean and the standard deviation of the means (Two-Sample T for the mean, and Two-Sample Standard Deviation for the standard deviation of the means).
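A minimal sketch of this comparison step is shown below, using hypothetical per-run means; scipy is used here instead of Minitab, so the exact test options (Welch correction, an F-based spread check) are assumptions and differ in detail from Minitab's 2-Sample Standard Deviation test.

```python
import numpy as np
from scipy import stats

# Hypothetical per-run output means for one metric under two configurations
default_runs = np.array([0.21, 0.19, 0.24, 0.22, 0.20])
tuned_runs   = np.array([0.12, 0.14, 0.11, 0.13, 0.12])

# Two-Sample T on the run means: the central limit theorem lets us treat these
# means as approximately Gaussian even if the raw per-run data is not.
t_stat, p_value = stats.ttest_ind(default_runs, tuned_runs, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.10 -> difference at 90% confidence

# Comparison of spread (an F-ratio check, similar in spirit to Minitab's
# 2-Sample Standard Deviation test but not the same procedure)
f_stat = default_runs.var(ddof=1) / tuned_runs.var(ddof=1)
p_var = 2 * min(stats.f.cdf(f_stat, 4, 4), 1 - stats.f.cdf(f_stat, 4, 4))
print(f"F = {f_stat:.2f}, p = {p_var:.4f}")
```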
3. Results

There are three main stages considered in this work: characterization, calibration, and comparison. These are explained in the following sections.

3.1. Characterization and calibration

Since the algorithms were already working and providing acceptable results with the default parameters (enough to consider them functional), a soft tuning with short modifications to these parameters was performed, to enhance the performance for the training arena and the simulated robot.

Two statistical experiments and a set of hypothesis tests were performed to tune each algorithm. First, a Plackett-Burman experiment was used to filter which parameter main effects on each metric had enough statistical significance over the ranges of variation per variable; next, a full factorial experiment was used to tune these parameters to give the best output for the metrics considered; and finally, a set of hypothesis tests for the mean and/or the standard deviation was used to confirm that the new parameter tuning gives better results than the default parameters. This confirmation was performed with more than two trials, to be able to take advantage of the central limit theorem and obtain valid hypothesis conclusions.

As a disclaimer, Gmapping was not soft tuned for these trials, since it was already fully tuned in a previous work [60].

3.1.1. Cartographer

For Cartographer, the output with the default parameters did not give a good map accuracy, so the soft tuning was focused on enhancing the map accuracy. Ten different parameters were identified as potentially the most significant for the general algorithm outputs; these are shown in Table 7.

Table 7. Default, tuning and final values for Cartographer.
Variable | Default | Minimum | Maximum | Final
optimize_every_n_nodes | 90 | 40 | 120 | 90
local_slam_translation_weight | 10e5 | 10e2 | 10e6 | 10e6
local_slam_rotation_weight | 10e5 | 10e2 | 10e6 | 10e6
odometry_slam_translation_weight | 10e5 | 10e2 | 10e6 | 10e2
odometry_slam_rotation_weight | 10e5 | 10e2 | 10e6 | 10e5
ceres_scan_matcher.translation_weight | 0.4 | 0.1 | 1.0 | 0.4
ceres_scan_matcher.rotation_weight | 0.3 | 0.1 | 1.0 | 0.3
global_sampling_ratio | 0.0003 | 0.0001 | 0.0005 | 0.0003
submaps.resolution | 0.0003 | 0.0001 | 0.0005 | 0.0003
constraint_builder.min_score | 0.6 | 0.4 | 0.8 | 0.6

After the Plackett-Burman and full factorial designs, only three of the listed parameters were modified from their defaults; these can be seen in the final values of Table 7. For the improvement confirmation trials, five runs were executed with the default and with the improved parameters; by performing a set of 2-Sample T tests, we can state with 95% confidence that the map accuracy and pose accuracy means were improved with the new parameters (see Figure 7), but at the cost of a memory usage degradation with respect to the default parameters.

Figure 7. Map before and after the calibration of Cartographer.

3.1.2. HECTOR-SLAM

From the very beginning, with the default parameters, this algorithm showed an adequate performance for all the metrics, based on Figure 8, so the experiment was focused on really short variations to see if there might be an enhancement of the outputs. For that reason, the parameters identified for the experiment were the ones shown in Table 8.

Figure 8. Map built with default parameters for HECTOR-SLAM.

After all the experiments, it was identified that none of the parameters had enough statistical evidence to demonstrate any direct effect on the outputs. Furthermore, it was evidenced that the best scenario for the four metrics was the default scenario, since all the different variations showed a worsened behavior with respect to the default values.
Table 8. Default, tuning and final values for HECTOR-SLAM.
Variable | Default | Minimum | Maximum | Final
update_factor_free | 0.40 | 0.10 | 0.45 | 0.40
update_factor_occupied | 0.90 | 0.80 | 0.99 | 0.90
map_update_distance_thresh | 0.40 | 0.10 | 1.00 | 0.40
map_update_angle_thresh | 0.90 | 0.10 | 1.00 | 0.90
map_pub_period | 2.00 | 1.00 | 5.00 | 2.00

3.1.3. KARTO-SLAM

For KARTO-SLAM, eighteen parameters were considered in the soft tuning stage; these are shown in Table 9. After completing the Plackett-Burman experiment, only three parameters surpassed the statistical limit to be considered relevant for CPU usage. A full factorial experiment was executed over these parameters with the same variation ranges used in the Plackett-Burman experiment.

Table 9. Default, tuning and final values for KARTO-SLAM.
Variable | Default | Minimum | Maximum | Final
scan_buffer_size | 70 | 30 | 100 | 30
link_match_minimum_response_fine | 0.12 | 0.06 | 0.18 | 0.18
loop_match_minimum_chain_size | 10 | 5 | 15 | 15
loop_match_maximum_variance_coarse | 0.4 | 0.3 | 0.5 | 0.3
loop_match_minimum_response_coarse | 0.80 | 0.75 | 0.85 | 0.75
loop_match_minimum_response_fine | 0.80 | 0.75 | 0.85 | 0.75
correlation_search_space_dimension | 0.3 | 0.2 | 0.4 | 0.2
correlation_search_space_smear_deviation | 0.03 | 0.03 | 0.04 | 0.04
loop_search_space_dimension | 8.0 | 7.0 | 9.0 | 7.0
loop_search_space_smear_deviation | 0.03 | 0.03 | 0.04 | 0.04
distance_variance_penalty | 0.3 | 0.2 | 0.4 | 0.2
angle_variance_penalty | 0.349 | 0.249 | 0.449 | 0.449
fine_search_angle_offset | 0.00349 | 0.00249 | 0.00449 | 0.00449
coarse_search_angle_offset | 0.349 | 0.249 | 0.449 | 0.449
coarse_angle_resolution | 0.0349 | 0.0249 | 0.0449 | 0.0449
minimum_angle_penalty | 0.90 | 0.85 | 0.95 | 0.95
minimum_distance_penalty | 0.5 | 0.4 | 0.6 | 0.4
use_response_expansion | False | False | True | True

After completing the factorial experiment, a scenario was obtained that improved the output for each of the metrics, demonstrated through hypothesis tests over the mean and standard deviation, using five runs with the default versus the new parameters. The improved map is shown in Figure 9.

Figure 9. Map before and after the calibration of KARTO-SLAM.

3.1.4. RTAB-Map

Since RTAB-Map has more capabilities than the others, its model was constrained to deal only with the 2D SLAM problem. With this, the relevant parameters were selected for tuning; they are shown in Table 10.

Table 10. Default, tuning and final values for RTAB-Map.
Variable | Default | Minimum | Maximum | Final
TimeThr | 0 | 0 | 1 | 0
MemThr | 0 | 0 | 1 | 0
DetectionRate | 1.0 | 0.1 | 10.0 | 0.1
ImageBufferSize | 1 | 0 | 20 | 0
MaxRetrieved | 2 | 1 | 10 | 1
CreateIntermediateNodes | false | false | true | false
LoopThreshold | 0.11 | 0.00 | 1.00 | 0.00
VarianceIgnorance | false | false | true | false
FilteringStrategy | No filtering | No filtering | Kalman filtering | No filtering

From the filtering stage, three parameters were identified with enough statistical relevance for pose and map accuracy: detectionRate for pose error, and timeThreshold and LoopThreshold for map accuracy. A full factorial experiment was performed with these parameters, for a total of nine experiments to perform. With this factorial experiment the parameters were tuned for the best scenario; their final values are shown in Table 10.

After the soft tuning, with six extra trials with the new parameters versus the default parameters, it was demonstrated with 90% confidence that all the metrics perform better with the new parameter configuration. Figure 10 is presented as proof of the improvement.

Figure 10. Map before and after the calibration of RTAB-Map.
3.2. Individual results

3.2.1. Cartographer

The results for Cartographer in terms of pose accuracy were quite stable throughout all the different scenarios executed, except when executing the labyrinth arena starting at a non-zero coordinate (observation 14 in Figure 11). This is a special case where the robot starts in a corridor without irregularities or landmarks to reference itself, making it accumulate the error quickly and become unable to bring it back to near zero.

Figure 11. Pose accuracy mean behavior for Cartographer.

In terms of CPU and memory usage, it is noticeable that the longer the test, the higher the usage, since the observations related to two laps show a higher CPU and memory usage, as can be seen in Figure 12 for the CPU usage behavior and Figure 13 for the memory usage behavior.

Figure 12. CPU usage mean behavior for Cartographer.
Figure 13. Memory usage mean behavior for Cartographer.

Regarding map accuracy, there are no trends upon visual inspection of the results, as there is no noticeable correlation to either arena type, trajectory type, or robot direction. See Figure 14 for reference.

Figure 14. Map accuracy mean behavior for Cartographer.

3.2.2. Gmapping

Regarding pose accuracy, the behavior is the same as that obtained with Cartographer; Figure 15 shows the time evolution of the pose error. The quick error increase at the beginning of the test for observation 14 (labyrinth arena starting at a non-zero coordinate) is quite visible, which is an expected behavior given the nature of the SLAM algorithms, as explained before in the Cartographer results. The overall results considering all the tests for Gmapping can be found in Figure 16.

Figure 15. Pose error timeseries evolution for Gmapping on observation 14.
Figure 16. Pose accuracy mean behavior for Gmapping.

In relation to CPU and memory usage, the only noticeable trend was the correlation between them: when CPU usage increased, memory usage decreased, and vice versa. A Pearson test confirmed this relation, resulting in a strong negative correlation of -0.928. Figure 17 shows their behavior visually, which can be explained by the way Gmapping manages its resources. Gmapping processes the particles on the fly [49], and this can result in time lapses where the CPU is full of other tasks and the memory must store these particles while the CPU gets some time to process them. The same occurs when the CPU has high availability for processing the particles, releasing the allocated memory.

Figure 17. CPU and memory usage mean behavior for Gmapping.

For map accuracy there is no real trend noticeable in the dataset, as shown in Figure 18.

Figure 18. Map accuracy mean behavior for Gmapping.

3.2.3. HECTOR-SLAM

A general commentary on HECTOR-SLAM is its highly noticeable susceptibility to environments without irregularities, where HECTOR-SLAM gets completely lost in terms of map and pose accuracy.
The empirical rule observed is that it gets lost when it interprets that the places are longer than they really are (the long corridor issue), or when it interprets that the robot is stopped at the last place where it detected an irregularity.

This behavior can be observed mainly in the pose accuracy in Figure 19, with the value obtained in observation 14, which is the labyrinth arena starting at a non-zero coordinate, as the robot begins its movement inside a corridor without irregularities or landmarks to reference itself. This is similar to the results obtained for the Gmapping and Cartographer SLAM algorithms.

Figure 19. Pose accuracy mean behavior for HECTOR-SLAM.

Also, the worst result is obtained for observation 12, with a peak error value of 2255.05 m, which is the common environments arena (two laps in reverse). In this case, the effect of running two laps instead of one has a negative effect on the metric. The cause is associated with the algorithm's difficulty in closing the loop for this arena, which should happen at about 1500 seconds in Figure 20, the timeseries plot of the pose error for observation 12. At first, the trial was considered an outlier; however, repeating the test under the same conditions used for the other trials gave a similar result.

Figure 20. Pose error timeseries evolution for HECTOR-SLAM on observation 12.

As for the map accuracy, Figure 21 shows that the output for the common environments arena is better than for the training arena, and that the training arena is better than the labyrinth arena (compare the visual means of observations 7-12, 1-6, and 13-15, respectively). This was confirmed by a hypothesis test between them at a 90% confidence level, and it is associated with the number of irregularities per arena, which lets the algorithm create a better representation of the environment when more of them are present.

Figure 21. Map accuracy mean behavior for HECTOR-SLAM.

Lastly, regarding memory and CPU usage, it was verified that there is no wide difference between the different scenarios, as Figure 22 shows. It appears that the memory usage is better when repeating the trajectories. Also, the algorithm uses about 15% of one single core.

Figure 22. CPU and memory usage timeseries evolution for HECTOR-SLAM.

3.2.4. KARTO-SLAM

Examining the results for pose accuracy, KARTO-SLAM had quite a stable behavior (see Figure 23), except for observation 14, which is the labyrinth arena starting at a non-zero coordinate. The root cause is the lack of irregularities at the beginning of the test, which makes the algorithm wrongly estimate the pose of the robot and quickly accumulate a high pose error. This behavior can be seen in Figure 24, where it is clear that, at time zero, the pose accuracy was quite good; however, after a brief time driving through the arena, the error goes up and stays that way almost throughout the whole test, in a similar way to the previously analyzed results of the other SLAM algorithms.

Figure 23. Pose accuracy mean behavior for KARTO-SLAM.
Figure 24. Pose error evolution for KARTO-SLAM on observation 14.
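The CPU/memory correlations reported in this section (for Gmapping above, and for KARTO-SLAM and RTAB-Map below) come from Pearson tests; a minimal sketch of such a check is shown below, using rounded KARTO-SLAM observation means from Tables 13 and 14 (observations 1-6) purely as example data.

```python
import numpy as np
from scipy import stats

cpu_means = np.array([24.6, 25.2, 24.8, 27.3, 33.0, 30.9])   # CPU usage means, % of one core
mem_means = np.array([30.1, 32.8, 32.5, 32.8, 55.4, 50.3])   # memory usage means, MB

r, p_value = stats.pearsonr(cpu_means, mem_means)
print(f"Pearson r = {r:.3f}, p = {p_value:.2e}")   # r near +1 or -1 indicates a strong correlation
```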
Next, for memory usage it was identified that the longer the test, the higher the memory usage, so the two-lap trials spent more memory than the one-lap trials. This can be seen in Figure 25: observations 5 and 6 are the two-lap trials for the training arena, observations 11 and 12 are the two-lap trials for the common environments arena, and observation 15 is the two-lap trial for the labyrinth arena. Also, it was identified that CPU and memory usage had a highly evident correlation, confirmed with a Pearson test, giving a correlation of 0.911 with a P-value of 2.4265e-06.

Figure 25. CPU and memory usage behavior for KARTO-SLAM on all the runs.

Lastly, for map accuracy it was evidenced and statistically supported that the higher the number of irregularities per area, the better the map accuracy. With a 90% confidence level it was confirmed that the maps generated in the common environments arena were more accurate (lower population mean) than those from the training arena. The same occurs for the training arena against the labyrinth arena. This can be visually confirmed by looking at Figure 26, where observations 1 to 6 pertain to the training arena, 7 to 12 to the common environments arena, and 13 to 15 to the labyrinth arena.

Figure 26. Map accuracy mean behavior for KARTO-SLAM.

3.2.5. RTAB-Map

Regarding pose accuracy, the algorithm behaved as KARTO-SLAM did, with satisfactory performance for all the trials except trial 14. This can be verified by observing Figure 27. The cause is similar to that of the other SLAM algorithms, since the error grew quickly at the beginning of the test and stayed the same throughout the test.

Figure 27. Pose accuracy mean behavior for RTAB-Map.

For CPU and memory usage, a strong direct correlation was identified between them, visually evident in Figure 28 and confirmed with a Pearson correlation test, giving a correlation of 0.942 with a P-value of 1.6380e-07 at a 95% confidence level. Visually, it is also noticeable that CPU and memory usage grow when the tests last for longer periods, since observations 5, 6, 11, 12, and 15, which are the two-lap trials, have higher means compared to the trials on the same arena running only one lap.

Figure 28. CPU and memory usage means behavior for RTAB-Map.

Lastly, regarding the RTAB-Map results, map accuracy performed noticeably better on arenas with a higher density of irregularities per area, since the maps obtained were more accurate for the common environments arena than for the training arena; the same holds for the training arena against the labyrinth arena, since the training arena gave better maps than the labyrinth arena. Figure 29 shows all the observations compared with each other.

Figure 29. Map accuracy mean behavior for RTAB-Map.

4. Algorithms Comparison

For the algorithms comparison, two statistical tools were used: a 2-Sample T test and a 2-Sample Standard Deviation test, relying on the central limit theorem.
They compare the mean and standard deviation of both samples, in order to draw conclusions about the mean and standard deviation of their populations at a certain confidence level, in this case 90%.

4.1. Pose accuracy

Regarding pose accuracy, it was quite hard to plot all the samples from all the algorithms together because of their range differences. To solve this, a timeseries plot was used showing each of them in a separate plot, as can be seen in Figure 30.

Comparing the samples visually by their ranges, the main result was that RTAB-Map performed better than KARTO-SLAM, which in turn performed better than Gmapping, followed closely by Cartographer, with HECTOR-SLAM performing far worse than all of them. However, with the dataset obtained, there was only evidence to demonstrate at a 90% confidence level that RTAB-Map's population mean was lower than KARTO-SLAM's population mean, which in turn had a lower population mean than Gmapping, Cartographer, and HECTOR-SLAM. Also, there was no evidence to demonstrate any difference in the population mean and standard deviation between Gmapping and Cartographer, only to demonstrate that both were superior to HECTOR-SLAM by their standard deviation, which means that in terms of pose accuracy HECTOR-SLAM would give more variable results across different scenarios than these two.

Figure 30. Pose accuracy mean behavior for all the algorithms together.

The data used for this section can be referenced in Table 11.

Table 11. Observation means for pose accuracy.
Observation | Cartographer | Gmapping | HECTOR-SLAM | KARTO-SLAM | RTAB-Map
1 | 0.0213 | 0.2304 | 0.0900 | 0.1113 | 0.0219
2 | 0.1869 | 0.18645 | 0.3598 | 0.0704 | 0.0297
3 | 0.1781 | 0.1424 | 0.1498 | 0.0874 | 0.0154
4 | 0.1385 | 0.2925 | 0.0424 | 0.0680 | 0.0157
5 | 0.4513 | 0.1572 | 0.0570 | 0.0965 | 0.0203
6 | 0.5102 | 0.1300 | 1.8450 | 0.0610 | 0.0206
Mean | 0.2477 | 0.1898 | 0.4240 | 0.0824 | 0.0206
Standard Deviation | 0.1908 | 0.0618 | 0.7057 | 0.0193 | 0.0052
7 | 0.0205 | 0.1879 | 0.0204 | 0.0912 | 0.0213
8 | 0.0395 | 0.2477 | 0.0395 | 0.0281 | 0.0204
9 | 0.0597 | 0.2087 | 0.0667 | 0.0543 | 0.0091
10 | 0.0803 | 0.1857 | 0.0090 | 0.0466 | 0.0165
11 | 0.2925 | 0.1126 | 36.2181 | 0.0794 | 0.0136
12 | 0.2424 | 0.1655 | 2255.0540 | 0.0411 | 0.0189
Mean | 0.1225 | 0.1847 | 381.9320 | 0.0568 | 0.0167
Standard Deviation | 0.1152 | 0.0450 | 917.7678 | 0.0240 | 0.0046
13 | 0.1113 | 0.3593 | 3.7858 | 0.0684 | 0.0179
14 | 4.1419 | 1.3317 | 28.7847 | 0.3438 | 0.1803
15 | 0.54303 | 0.5571 | 6.1407 | 0.0613 | 0.0168
Mean | 2.2130 | 0.7493 | 12.9037 | 0.1579 | 0.0717
Standard Deviation | 0.7994 | 0.5140 | 13.8036 | 0.1611 | 0.0941
Total Mean | 1.0312 | 0.2997 | 155.5109 | 0.0873 | 0.0363
Total Standard Deviation | 0.4678 | 0.3066 | 580.9297 | 0.0743 | 0.0515

4.2. Map accuracy

For map accuracy, the data presented in Figure 31 shows a box plot for all the algorithms together, with a trend line centered on their means. With these results, it was possible to confirm at a 90% confidence level that RTAB-Map outperformed all the other algorithms, followed closely by KARTO-SLAM, then by Cartographer, next by Gmapping, and finally by HECTOR-SLAM, whose difference from Gmapping could not be demonstrated by the mean, but could by the standard deviation.

Figure 31. Boxplot for map accuracy with all the algorithms together.

The data used for this section can be referenced in Table 12.
4.3. CPU Usage

With respect to CPU usage, Figure 32 shows a boxplot representation of all the algorithms with all their sample means. This figure shows that HECTOR-SLAM outmatches the other algorithms, followed closely by RTAB-Map, then by KARTO-SLAM, next, by a wide margin, by Gmapping, and finally by Cartographer. These findings were verified for the means through four hypothesis tests, all of them demonstrated at the 90% confidence level.

Figure 32. Boxplot for CPU usage with all the algorithms together.

The data used for this section can be referenced in Table 13.

Table 13. Observation means for CPU usage.

Observation               Cartographer  Gmapping  HECTOR-SLAM  KARTO-SLAM  RTAB-Map
1                         79.9485       65.7767   14.9611      24.6362     21.9038
2                         79.2709       64.4213   15.3271      25.1983     22.5627
3                         85.2832       30.6667   15.4501      24.7791     19.1986
4                         83.9482       29.9745   16.1314      27.2580     22.8742
5                         006           33.0631   15.4562      33.0401     22.1754
6                         126.6733      31.8696   15.6368      30.8826     23.0957
Mean                      96.7863       42.6286   15.4938      27.6324     21.9684
Standard Deviation        22.8497       17.4426   0.3852       3.5494      1.4257
7                         157.0573      64.2396   15.7638      22.8283     21.8038
8                         106.4543      64.9399   15.3108      23.356      22.6580
9                         103.8126      58.5111   15.2690      23.1910     21.0382
10                        168.8664      64.2727   15.6305      22.3582     21.4474
11                        226.6733      85.0477   16.0144      29.0134     033
12                        225.4842      81.7984   15.2684      32.1010     27.6719
Mean                      167.8463      69.8016   15.5420      25.4746     23.3506
Standard Deviation        54.2421       10.8534   0.3117       4.0706      2.6483
13                        122.4997      33.6084   15.7692      34.9936     24.2222
14                        135.0477      33.5639   13.8457      36.0513     27.0032
15                        245.9916      38.4121   13.8691      48.7294     39.1275
Mean                      167.8463      35.1948   14.4947      39.9248     30.1176
Standard Deviation        67.9660       2.7863    1.1038       7.6433      7.9257
Total Mean                138.1737      52.0110   15.3132      29.2278     25.1455
Total Standard Deviation  55.8754       19.6419   0.6646       7.1000      3.4543
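Before turning to memory usage, recall from Sections 3.2.4 and 3.2.5 that CPU and memory usage were strongly correlated (Pearson coefficients of 0.911 and 0.942). That check is straightforward to reproduce; the minimal sketch below assumes SciPy is available and uses placeholder series rather than the per-run measurements of this work.

```python
# Pearson correlation check between CPU and memory usage series, as reported
# in Sections 3.2.4 and 3.2.5. The arrays below are placeholders only, not
# the measurements obtained in this work.
import numpy as np
from scipy.stats import pearsonr

cpu_usage = np.array([20.1, 22.4, 25.0, 27.9, 31.2, 33.5])
memory_usage = np.array([30.5, 31.0, 33.2, 36.8, 41.0, 44.1])

r, p_value = pearsonr(cpu_usage, memory_usage)
print(f"Pearson correlation = {r:.3f}, p-value = {p_value:.3e}")
```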
4.4. Memory Usage

For the last metric, memory usage, the data representation used was a set of boxplots with a trend line through their means, as seen in Figure 33. From these results it was demonstrated, with 90% confidence, that HECTOR-SLAM is the algorithm that best manages memory resources, followed closely by KARTO-SLAM, then, by a wide margin, by Cartographer, next by RTAB-Map, and finally by Gmapping. It was not possible to demonstrate any population difference between Cartographer and Gmapping, nor between RTAB-Map and Gmapping; however, there was enough evidence to demonstrate that Cartographer performed better than RTAB-Map in terms of their means, and that the population standard deviation of Gmapping would be greater than that of RTAB-Map, which is the reason Gmapping is considered the worst of the algorithms for this metric.

Figure 33. Boxplot for memory usage with all the algorithms together.

The data used for this section can be referenced in Table 14.

Table 14. Observation means for memory usage.

Observation               Cartographer  Gmapping   HECTOR-SLAM  KARTO-SLAM  RTAB-Map
1                         103.4278      52.3380    31.1261      30.1372     192.5342
2                         98.0524       46.1813    31.0173      32.7514     191.6726
3                         135.3839      358.2813   31.0016      32.5083     165.7151
4                         133.9238      355.8691   30.7399      32.7924     188.1644
5                         187.343       360.0965   30.9364      55.4142     185.3953
6                         185.394       368.0437   30.8741      50.3001     206.5993
Mean                      140.5875      256.8016   30.9492      38.9839     188.3468
Standard Deviation        38.6141       160.82501  0.1329       10.9123     13.2967
7                         133.2000      43.8983    30.9629      35.7139     183.4044
8                         138.2426      43.9963    30.9473      30.8605     183.2011
9                         134.5829      46.5603    31.0558      31.2307     187.8639
10                        101.3176      43.5573    30.9196      28.9994     189.1245
11                        235.8990      57.8269    30.8073      44.9256     234.9545
12                        214.3226      54.55      30.7999      51.1971     214.3226
Mean                      159.5941      48.3982    30.9153      37.1545     198.8118
Standard Deviation        52.8997       6.2158     0.0980       8.9540      21.1739
13                        122.4997      355.2520   30.9678      42.4248     218.5724
14                        143.8983      358.9191   31.0103      42.4906     188.9716
15                        283.2011      361.3482   31.0145      75.1085     306.7033
Mean                      183.1997      358.5064   30.9975      53.3413     238.0824
Standard Deviation        87.2622       3.0690     0.0258       18.8510     61.2427
Total Mean                156.7126      193.7812   30.9454      41.1236     208.4137
Total Standard Deviation  53.7130       160.7094   0.1039       12.7525     25.7147

4.5. Algorithms comparison summary

To summarize the previous sections, Table 15 was created. It ranks the algorithms on a numbering scale where one means the best of them. In the nomenclature, M indicates that the superiority or inferiority was demonstrated with a 2-Sample T test, and S that it was demonstrated with a 2-Sample Standard Deviation test.

Table 15. Final results for all the metrics together.

Metric          Cartographer  Gmapping  HECTOR-SLAM  KARTO-SLAM  RTAB-Map
Pose accuracy   3-MS          3-MS      4-S          2-M         1-M
CPU usage       5-M           4-M       1-M          3-M         2-M
Memory usage    3-M           5-S       1-M          2-M         4-S
Map accuracy    3-M           4-S       5-S          2-M         1-M

From Table 15 it can be stated that, if map and pose accuracy are the priorities regardless of CPU and memory usage, then RTAB-Map is the preferred algorithm. If the mobile robot platform has limited resources, a better approach could be HECTOR-SLAM, with the caveat that it is the worst of them regarding map and pose accuracy.

Nevertheless, a different approach can be taken by scoring all the algorithms by their means on a scale from zero to one hundred, where zero corresponds to the algorithm with the lowest mean and one hundred to the algorithm with the highest mean. With this classification, KARTO-SLAM comes up as the best choice among all of them, since it is the algorithm with the lowest average under this methodology. Equation (6) details this approach, and the results obtained are shown in Table 16.
Figure 34. Radar plot with all the algorithms together.

\[
Ave_{Alg} = \frac{1}{4}\sum_{Met = P.Acc.}^{M.Acc.}\left[100\,\frac{\bar{X}_{Alg} - \bar{X}_{Min}}{\bar{X}_{Max} - \bar{X}_{Min}}\right]_{Met} \tag{6}
\]

Where:
• Ave_Alg is the average to calculate, considering all the metrics.
• Met is the metric being averaged: pose accuracy, map accuracy, CPU usage, or memory usage.
• X̄_Alg is the sample mean obtained by the algorithm being analyzed.
• X̄_Min is the smallest sample mean obtained by any of the algorithms for that metric.
• X̄_Max is the largest sample mean obtained by any of the algorithms for that metric.

Table 16. Final results considering all the metrics means together.

              Pose accuracy        CPU usage            Memory usage          Map accuracy         Results
Algorithm     Mean       Score     Mean       Score     Mean       Score      Mean      Score      Mean       Score
Cartographer  0.4678     0.2821    138.1737   100       156.7126   73.3189    5.5036    18.0957    191.6966   47.9242
Gmapping      0.2997     0.1739    52.0110    29.8695   193.7812   94.9289    15.0044   82.7887    207.7610   51.9402
HECTOR-SLAM   155.5109   100       15.3132    0         30.9454    0          17.5320   100        200        50
KARTO-SLAM    0.0873     0.0373    29.2278    11.3255   41.1236    5.9336     4.2696    9.6930     26.9894    6.7474
RTAB-Map      0.0292     0         24.1511    7.1934    202.4799   100        2.8461    0          107.1934   26.7984
Min           0.0292               15.3132              30.9454               2.8461
Max           155.5109             138.1737             202.4799              17.5320

Based on the evidence of Table 16 and Figure 34, evaluating the algorithms by this procedure leads to the conclusion that KARTO-SLAM delivers the highest overall performance when CPU and memory usage are considered along with map and pose accuracy. Furthermore, if memory usage is not a limitation, RTAB-Map has better results in all the other metrics, followed by Cartographer, then HECTOR-SLAM, and lastly Gmapping.
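As a cross-check of equation (6), the short sketch below (a minimal Python/NumPy illustration, not the tooling used to produce the table) applies the min-max normalization to the metric means listed in Table 16 and averages the four resulting scores per algorithm.

```python
# Cross-check of equation (6): min-max normalise each metric mean to a
# 0-100 score and average the four scores per algorithm (lower is better).
# The means are taken from Table 16.
import numpy as np

metric_means = {  # pose accuracy, CPU usage, memory usage, map accuracy
    "Cartographer": [0.4678, 138.1737, 156.7126, 5.5036],
    "Gmapping": [0.2997, 52.0110, 193.7812, 15.0044],
    "HECTOR-SLAM": [155.5109, 15.3132, 30.9454, 17.5320],
    "KARTO-SLAM": [0.0873, 29.2278, 41.1236, 4.2696],
    "RTAB-Map": [0.0292, 24.1511, 202.4799, 2.8461],
}

data = np.array(list(metric_means.values()))
lo, hi = data.min(axis=0), data.max(axis=0)
scores = 100.0 * (data - lo) / (hi - lo)   # per-metric scores in [0, 100]
averages = scores.mean(axis=1)             # equation (6)

for name, avg in zip(metric_means, averages):
    print(f"{name:13s} {avg:8.4f}")
# KARTO-SLAM obtains the lowest average (about 6.75), matching Table 16.
```

Running it yields KARTO-SLAM as the algorithm with the lowest average, in agreement with Table 16 and Figure 34.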
5. Conclusions

The following are the main conclusions derived from the results of this work:

• The proposed methodology is useful to characterize, calibrate, and compare any SLAM algorithm, regardless of the robot sensors or the SLAM type, as long as the map representation is compatible with the application of the knn-search metric and the robot pose is obtained in matching measurement units, since the proposed characterization and calibration are based on the final results of the SLAM algorithms rather than on their internal structure or the sensors they use. The method proposed in this paper provides strong statistical evidence, based on pose error, map accuracy, CPU usage, and memory usage, using descriptive and inferential statistics to attach confidence levels to the overall behavior of the algorithms and their comparisons.
• KARTO-SLAM noticeably outperformed all the other algorithms because it balances resource usage while holding good SLAM performance, as can be seen in Figure 34 or Table 16.
• Without considering resource usage, the best algorithm is RTAB-Map, which does an excellent job at mapping and at estimating its own pose within the map.
• HECTOR-SLAM stands out when saving resources is the feature that matters: there is statistical evidence that it uses less CPU and memory than the other algorithms; however, it gave the worst results regarding localization and mapping.
• The localization metric (pose accuracy) gets worse as obstacle density decreases, for all algorithms. This makes sense, since SLAM algorithms require irregularities against which to reference the robot; without them, the robot must rely on its odometry, which is less accurate because it does not account for wheel slippage, dimensional irregularities in the robot model, and similar effects.
• There was a hypothesis that repeating the trajectory twice would enhance the localization and mapping output. However, no statistically supported enhancement was observed for either metric.
• Statistical evidence was provided that starting at a coordinate without any irregularity the robot can use to reference itself can become a major issue that the algorithm may not be able to correct in terms of pose accuracy. This was confirmed through the experiments in the labyrinth arena: when starting at a non-zero coordinate, the pose error grows quickly and all the algorithms had trouble correcting this failure as the simulation continued, a situation that does not occur when starting at the zero coordinate, where there are enough irregularities for the robot to locate itself.

As future work, the method can be extended to consider longer test times and larger arena areas, to determine the best algorithms for those indoor SLAM applications. New metrics can also be defined for 3D SLAM and cooperative distributed SLAM algorithms whose map representations are not compatible with the application of the knn-search metric.

Funding: This research was funded by Vicerrectoría de Investigación de la Universidad de Costa Rica, grant numbers 322-B8-298 and 322-C0-611. The APC was funded by Universidad de Costa Rica.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.