Learning geometric and photometric features from panoramic LiDAR scans for outdoor place categorization

Kazuto Nakashima, Hojung Jung, Yuki Oto, Yumi Iwashita, Ryo Kurazume, Oscar Martinez Mozos

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Semantic place categorization, an essential task for autonomous robots and vehicles, allows them to make decisions and navigate in unfamiliar environments. Outdoor places are particularly difficult targets compared with indoor ones due to perceptual variations, such as changing illuminance over 24 hours and occlusions by cars and pedestrians. This paper presents a novel method for categorizing outdoor places using convolutional neural networks (CNNs) that take omnidirectional depth/reflectance images obtained by 3D LiDARs as input. First, we construct a large-scale outdoor place dataset named Multi-modal Panoramic 3D Outdoor (MPO), comprising two types of point clouds captured by two different LiDARs. The scans are labeled with six outdoor place categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Second, we provide CNNs for LiDAR-based outdoor place categorization and evaluate our approach on the MPO dataset. Our results outperform traditional approaches and demonstrate the effectiveness of using both the depth and reflectance modalities. To analyze the trained networks, we visualize the learned features.
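
The pipeline described above hinges on converting each 3D scan into an omnidirectional 2D image before CNN classification. As a rough illustration of that step, the sketch below projects a point cloud onto a two-channel panorama (depth and reflectance) via spherical coordinates; the image resolution, vertical field of view, and nearest-return rule are illustrative assumptions, not values taken from the paper.

import numpy as np

def project_to_panorama(points, reflectance, h=64, w=512,
                        fov_up=np.radians(15.0), fov_down=np.radians(-25.0)):
    """Project a LiDAR point cloud onto a 2-channel panoramic image.

    Channel 0 holds per-pixel range (depth), channel 1 the sensor
    reflectance. Resolution and vertical field of view are illustrative
    defaults, not values from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                 # range to each point

    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # Map angles to pixel coordinates (out-of-FOV points are simply
    # clamped to the border rows here; real code would mask them out).
    u = ((yaw + np.pi) / (2.0 * np.pi)) * w            # column: full 360 deg
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)

    # When several points fall into one pixel, keep the nearest return:
    # write points in decreasing-range order so closer points win.
    order = np.argsort(-r)
    image = np.zeros((2, h, w), dtype=np.float32)
    image[0, v[order], u[order]] = r[order]
    image[1, v[order], u[order]] = reflectance[order]
    return image

# Toy usage with random points; a real scan would come from the sensor.
pts = np.random.uniform(-50.0, 50.0, size=(100000, 3))
refl = np.random.uniform(0.0, 1.0, size=100000)
print(project_to_panorama(pts, refl).shape)            # (2, 64, 512)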

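Downstream, the panorama is classified into one of the six place categories. The toy network below only illustrates that input/output contract on a (2, H, W) depth/reflectance image; it is a generic stand-in, not the architecture evaluated in the paper.

import torch
import torch.nn as nn

class PanoramaCNN(nn.Module):
    """Toy CNN over 2-channel (depth + reflectance) panoramas.

    A generic stand-in, NOT the architecture from the paper: it only
    shows the contract of (batch, 2, H, W) panoramas in, six
    place-category logits out.
    """
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling: any H x W works
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PanoramaCNN()
logits = model(torch.randn(4, 2, 64, 512))  # batch of four panoramas
print(logits.shape)                         # torch.Size([4, 6])

Since a panorama wraps around horizontally, circular padding along the width (padding_mode="circular" in nn.Conv2d) would be a natural refinement of the plain convolutions used here; whether the original networks do this is not stated in the abstract.
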
Original language: English
Pages (from-to): 750-765
Number of pages: 16
Journal: Advanced Robotics
Volume: 32
Issue number: 14
DOI: 10.1080/01691864.2018.1501279
Publication status: Accepted/In press - Jan 1 2018

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Software
  • Human-Computer Interaction
  • Hardware and Architecture
  • Computer Science Applications

Cite this

Learning geometric and photometric features from panoramic LiDAR scans for outdoor place categorization. / Nakashima, Kazuto; Jung, Hojung; Oto, Yuki; Iwashita, Yumi; Kurazume, Ryo; Mozos, Oscar Martinez.

In: Advanced Robotics, Vol. 32, No. 14, 01.01.2018, p. 750-765.

@article{f67c1df8754248d8a411c6c37a4cd7f6,
  title = "Learning geometric and photometric features from panoramic LiDAR scans for outdoor place categorization",
  abstract = "Semantic place categorization, an essential task for autonomous robots and vehicles, allows them to make decisions and navigate in unfamiliar environments. Outdoor places are particularly difficult targets compared with indoor ones due to perceptual variations, such as changing illuminance over 24 hours and occlusions by cars and pedestrians. This paper presents a novel method for categorizing outdoor places using convolutional neural networks (CNNs) that take omnidirectional depth/reflectance images obtained by 3D LiDARs as input. First, we construct a large-scale outdoor place dataset named Multi-modal Panoramic 3D Outdoor (MPO), comprising two types of point clouds captured by two different LiDARs. The scans are labeled with six outdoor place categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Second, we provide CNNs for LiDAR-based outdoor place categorization and evaluate our approach on the MPO dataset. Our results outperform traditional approaches and demonstrate the effectiveness of using both the depth and reflectance modalities. To analyze the trained networks, we visualize the learned features.",
  author = "Kazuto Nakashima and Hojung Jung and Yuki Oto and Yumi Iwashita and Ryo Kurazume and Mozos, {Oscar Martinez}",
  year = "2018",
  month = "1",
  day = "1",
  doi = "10.1080/01691864.2018.1501279",
  language = "English",
  volume = "32",
  pages = "750--765",
  journal = "Advanced Robotics",
  issn = "0169-1864",
  publisher = "Taylor and Francis Ltd.",
  number = "14",
}

TY  - JOUR
T1  - Learning geometric and photometric features from panoramic LiDAR scans for outdoor place categorization
AU  - Nakashima, Kazuto
AU  - Jung, Hojung
AU  - Oto, Yuki
AU  - Iwashita, Yumi
AU  - Kurazume, Ryo
AU  - Mozos, Oscar Martinez
PY  - 2018/1/1
Y1  - 2018/1/1
N2  - Semantic place categorization, an essential task for autonomous robots and vehicles, allows them to make decisions and navigate in unfamiliar environments. Outdoor places are particularly difficult targets compared with indoor ones due to perceptual variations, such as changing illuminance over 24 hours and occlusions by cars and pedestrians. This paper presents a novel method for categorizing outdoor places using convolutional neural networks (CNNs) that take omnidirectional depth/reflectance images obtained by 3D LiDARs as input. First, we construct a large-scale outdoor place dataset named Multi-modal Panoramic 3D Outdoor (MPO), comprising two types of point clouds captured by two different LiDARs. The scans are labeled with six outdoor place categories: coast, forest, indoor parking, outdoor parking, residential area, and urban area. Second, we provide CNNs for LiDAR-based outdoor place categorization and evaluate our approach on the MPO dataset. Our results outperform traditional approaches and demonstrate the effectiveness of using both the depth and reflectance modalities. To analyze the trained networks, we visualize the learned features.
UR  - http://www.scopus.com/inward/record.url?scp=85051145962&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85051145962&partnerID=8YFLogxK
DO  - 10.1080/01691864.2018.1501279
M3  - Article
AN  - SCOPUS:85051145962
VL  - 32
SP  - 750
EP  - 765
JO  - Advanced Robotics
JF  - Advanced Robotics
SN  - 0169-1864
IS  - 14
ER  -