Depth Estimation Using Structured Light Flow - Analysis of Projected Pattern Flow on an Object's Surface

Ryo Furukawa, Ryusuke Sagawa, Hiroshi Kawasaki

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Shape reconstruction techniques using structured light have been widely researched and developed because of their robustness, high precision, and density. Because these techniques are based on decoding a pattern to find correspondences, they implicitly require that the projected patterns be clearly captured by an image sensor, i.e., free of defocus and motion blur. Although intensive research has been conducted on defocus blur, little work addresses motion blur, and the only existing solution is to capture with an extremely fast shutter speed. In this paper, unlike previous approaches, we actively utilize motion blur, which we refer to as a light flow, to estimate depth. Our analysis reveals that a minimum of two light flows, retrieved from two patterns projected onto the object, is required for depth estimation. To retrieve two light flows at the same time, two sets of parallel line patterns are illuminated from two video projectors, and the size of the motion blur of each line is precisely measured. By analyzing the light flows, i.e., the lengths of the blurs, scene depth information is estimated. In the experiments, 3D shapes of fast-moving objects, which are inevitably captured with motion blur, are successfully reconstructed by our technique.
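To make the light-flow idea concrete, the following Python sketch illustrates one simplified way two blur-length measurements could be turned into a depth value. It is not the paper's algorithm: the locally linear flow-versus-depth model, the calibration coefficients a1, b1, a2, b2, and all function names are assumptions introduced purely for illustration.

import numpy as np

def measure_flow_length(mask, direction):
    # Extent of one blurred line's footprint along a unit direction in the
    # image (e.g., the direction in which the projected line sweeps).
    # 'mask' is a boolean image marking the pixels of a single blurred line.
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    proj = pts @ np.asarray(direction, dtype=float)
    return float(proj.max() - proj.min())

def depth_from_two_flows(f1, f2, a1, b1, a2, b2):
    # Least-squares depth z from two linearized constraints:
    #   f1 = a1 * z + b1   (light flow observed from projector 1)
    #   f2 = a2 * z + b2   (light flow observed from projector 2)
    # The linear model and its coefficients (from a hypothetical calibration)
    # are assumptions for this sketch, not the paper's formulation.
    A = np.array([[a1], [a2]])
    b = np.array([f1 - b1, f2 - b2])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(z[0])

# Example with made-up numbers: two flow lengths measured from the two
# projectors' line patterns, combined into a single depth estimate.
# z = depth_from_two_flows(f1=12.3, f2=8.7, a1=0.04, b1=1.0, a2=-0.03, b2=20.0)

In practice, each blurred line segment would yield one such measurement across the pattern; the sketch only shows the per-point arithmetic under the assumed linear model.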

Original language: English
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4650-4658
Number of pages: 9
ISBN (Electronic): 9781538610329
DOIs: https://doi.org/10.1109/ICCV.2017.497
Publication status: Published - Dec 22, 2017
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy
Duration: Oct 22, 2017 - Oct 29, 2017

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Volume: 2017-October
ISSN (Print): 1550-5499

Other

Other: 16th IEEE International Conference on Computer Vision, ICCV 2017
Country: Italy
City: Venice
Period: 10/22/17 - 10/29/17

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Furukawa, R., Sagawa, R., & Kawasaki, H. (2017). Depth Estimation Using Structured Light Flow - Analysis of Projected Pattern Flow on an Object's Surface. In Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017 (pp. 4650-4658). [8237759] (Proceedings of the IEEE International Conference on Computer Vision; Vol. 2017-October). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCV.2017.497
