Data acquisition and management system of LHD

H. Nakanishi, M. Ohsuna, M. Kojima, S. Imazu, M. Nonomura, Makoto Hasegawa, Kazuo Nakamura, A. Higashijima, M. Yoshikawa, M. Emoto, T. Yamamoto, Y. Nagayama, K. Kawahata

Research output: Contribution to journal › Article

13 Citations (Scopus)

Abstract

The data acquisition (DAQ) and management system of the Large Helical Device (LHD), named the LABCOM system, has been in development since 1995. The recently acquired data have grown to 7 gigabytes per shot, 10 times larger than estimated before the experiment. In 2006, during 1-h pulse experiments, 90 gigabytes of data were acquired, a new world record. This data explosion has been enabled by the massively distributed processing architecture and the newly developed capability of real-time streaming acquisition. The former provides linear expandability, since increasing the number of parallel DAQs avoids I/O bottlenecks. The latter improves the unit performance from 0.7 megabytes/s in conventional CAMAC digitizers to nonstop 110 megabytes/s in CompactPCI. The technical goal of this system is to be able to handle one hundred concurrent 100 megabytes/s DAQs even for steady-state plasma diagnostics. This is similar to the data production rate of next-generation experiments such as ITER. The LABCOM storage holds several hundred terabytes in a two-tier structure: the first tier consists of tens of hard drive arrays, and the second of several Blu-ray Disc libraries. Multiplexed and redundant storage servers are mandatory for higher availability and throughput. Together they serve sharable volumes on Red Hat GFS2 cluster file systems. The LABCOM system is used not only for LHD but also for the QUEST and GAMMA10 experiments, creating a new Fusion Virtual Laboratory remote participation environment that others can access regardless of their location.
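The throughput figures quoted in the abstract can be sanity-checked with simple arithmetic. A minimal sketch (input values are taken from the abstract; the variable names are illustrative, not from the paper):

```python
# Back-of-the-envelope check of the throughput figures quoted in the abstract.
# All input values come from the text; the variable names are illustrative.

camac_rate = 0.7    # MB/s per unit, conventional CAMAC digitizer
cpci_rate = 110.0   # MB/s per unit, CompactPCI streaming DAQ

# Per-unit improvement from CAMAC to CompactPCI streaming.
speedup = cpci_rate / camac_rate
print(f"per-unit speedup: {speedup:.0f}x")                    # -> 157x

# Design target: one hundred concurrent 100 MB/s DAQs.
target_aggregate = 100 * 100.0                                # MB/s
print(f"target aggregate: {target_aggregate/1000:.0f} GB/s")  # -> 10 GB/s

# Average rate implied by the 2006 record: 90 GB over a 1-h pulse.
record_avg = 90_000 / 3600                                    # MB/s
print(f"2006 record average: {record_avg:.0f} MB/s")          # -> 25 MB/s
```

The 10 GB/s design target is thus roughly two orders of magnitude above the average rate implied by the 2006 long-pulse record, which matches the paper's framing of the target as an ITER-class data production rate.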

Original language: English
Pages (from-to): 445-457
Number of pages: 13
Journal: Fusion Science and Technology
Volume: 58
Issue number: 1
DOI: 10.13182/FST10-A10830
Publication status: Published - 1 Jan 2010


All Science Journal Classification (ASJC) codes

  • Civil and Structural Engineering
  • Nuclear and High Energy Physics
  • Nuclear Energy and Engineering
  • Materials Science (all)
  • Mechanical Engineering

Cite this

Nakanishi, H., Ohsuna, M., Kojima, M., Imazu, S., Nonomura, M., Hasegawa, M., ... Kawahata, K. (2010). Data acquisition and management system of LHD. Fusion Science and Technology, 58(1), 445-457. https://doi.org/10.13182/FST10-A10830

Data acquisition and management system of LHD. / Nakanishi, H.; Ohsuna, M.; Kojima, M.; Imazu, S.; Nonomura, M.; Hasegawa, Makoto; Nakamura, Kazuo; Higashijima, A.; Yoshikawa, M.; Emoto, M.; Yamamoto, T.; Nagayama, Y.; Kawahata, K.

In: Fusion Science and Technology, Vol. 58, No. 1, 01.01.2010, p. 445-457.


Nakanishi, H, Ohsuna, M, Kojima, M, Imazu, S, Nonomura, M, Hasegawa, M, Nakamura, K, Higashijima, A, Yoshikawa, M, Emoto, M, Yamamoto, T, Nagayama, Y & Kawahata, K 2010, 'Data acquisition and management system of LHD', Fusion Science and Technology, vol. 58, no. 1, pp. 445-457. https://doi.org/10.13182/FST10-A10830
Nakanishi H, Ohsuna M, Kojima M, Imazu S, Nonomura M, Hasegawa M, et al. Data acquisition and management system of LHD. Fusion Science and Technology. 2010 Jan 1;58(1):445-457. https://doi.org/10.13182/FST10-A10830
Nakanishi, H. ; Ohsuna, M. ; Kojima, M. ; Imazu, S. ; Nonomura, M. ; Hasegawa, Makoto ; Nakamura, Kazuo ; Higashijima, A. ; Yoshikawa, M. ; Emoto, M. ; Yamamoto, T. ; Nagayama, Y. ; Kawahata, K. / Data acquisition and management system of LHD. In: Fusion Science and Technology. 2010 ; Vol. 58, No. 1. pp. 445-457.
@article{46a1111bbaeb49ad92a5be74dedc56dc,
title = "Data acquisition and management system of LHD",
abstract = "The data acquisition (DAQ) and management system of the Large Helical Device (LHD), named the LABCOM system, has been in development since 1995. The recently acquired data have grown to 7 gigabytes per shot, 10 times larger than estimated before the experiment. In 2006, during 1-h pulse experiments, 90 gigabytes of data were acquired, a new world record. This data explosion has been enabled by the massively distributed processing architecture and the newly developed capability of real-time streaming acquisition. The former provides linear expandability, since increasing the number of parallel DAQs avoids I/O bottlenecks. The latter improves the unit performance from 0.7 megabytes/s in conventional CAMAC digitizers to nonstop 110 megabytes/s in CompactPCI. The technical goal of this system is to be able to handle one hundred concurrent 100 megabytes/s DAQs even for steady-state plasma diagnostics. This is similar to the data production rate of next-generation experiments such as ITER. The LABCOM storage holds several hundred terabytes in a two-tier structure: the first tier consists of tens of hard drive arrays, and the second of several Blu-ray Disc libraries. Multiplexed and redundant storage servers are mandatory for higher availability and throughput. Together they serve sharable volumes on Red Hat GFS2 cluster file systems. The LABCOM system is used not only for LHD but also for the QUEST and GAMMA10 experiments, creating a new Fusion Virtual Laboratory remote participation environment that others can access regardless of their location.",
author = "H. Nakanishi and M. Ohsuna and M. Kojima and S. Imazu and M. Nonomura and Makoto Hasegawa and Kazuo Nakamura and A. Higashijima and M. Yoshikawa and M. Emoto and T. Yamamoto and Y. Nagayama and K. Kawahata",
year = "2010",
month = "1",
day = "1",
doi = "10.13182/FST10-A10830",
language = "English",
volume = "58",
pages = "445--457",
journal = "Fusion Science and Technology",
issn = "1536-1055",
publisher = "American Nuclear Society",
number = "1",

}

TY - JOUR

T1 - Data acquisition and management system of LHD

AU - Nakanishi, H.

AU - Ohsuna, M.

AU - Kojima, M.

AU - Imazu, S.

AU - Nonomura, M.

AU - Hasegawa, Makoto

AU - Nakamura, Kazuo

AU - Higashijima, A.

AU - Yoshikawa, M.

AU - Emoto, M.

AU - Yamamoto, T.

AU - Nagayama, Y.

AU - Kawahata, K.

PY - 2010/1/1

Y1 - 2010/1/1

N2 - The data acquisition (DAQ) and management system of the Large Helical Device (LHD), named the LABCOM system, has been in development since 1995. The recently acquired data have grown to 7 gigabytes per shot, 10 times larger than estimated before the experiment. In 2006, during 1-h pulse experiments, 90 gigabytes of data were acquired, a new world record. This data explosion has been enabled by the massively distributed processing architecture and the newly developed capability of real-time streaming acquisition. The former provides linear expandability, since increasing the number of parallel DAQs avoids I/O bottlenecks. The latter improves the unit performance from 0.7 megabytes/s in conventional CAMAC digitizers to nonstop 110 megabytes/s in CompactPCI. The technical goal of this system is to be able to handle one hundred concurrent 100 megabytes/s DAQs even for steady-state plasma diagnostics. This is similar to the data production rate of next-generation experiments such as ITER. The LABCOM storage holds several hundred terabytes in a two-tier structure: the first tier consists of tens of hard drive arrays, and the second of several Blu-ray Disc libraries. Multiplexed and redundant storage servers are mandatory for higher availability and throughput. Together they serve sharable volumes on Red Hat GFS2 cluster file systems. The LABCOM system is used not only for LHD but also for the QUEST and GAMMA10 experiments, creating a new Fusion Virtual Laboratory remote participation environment that others can access regardless of their location.

AB - The data acquisition (DAQ) and management system of the Large Helical Device (LHD), named the LABCOM system, has been in development since 1995. The recently acquired data have grown to 7 gigabytes per shot, 10 times larger than estimated before the experiment. In 2006, during 1-h pulse experiments, 90 gigabytes of data were acquired, a new world record. This data explosion has been enabled by the massively distributed processing architecture and the newly developed capability of real-time streaming acquisition. The former provides linear expandability, since increasing the number of parallel DAQs avoids I/O bottlenecks. The latter improves the unit performance from 0.7 megabytes/s in conventional CAMAC digitizers to nonstop 110 megabytes/s in CompactPCI. The technical goal of this system is to be able to handle one hundred concurrent 100 megabytes/s DAQs even for steady-state plasma diagnostics. This is similar to the data production rate of next-generation experiments such as ITER. The LABCOM storage holds several hundred terabytes in a two-tier structure: the first tier consists of tens of hard drive arrays, and the second of several Blu-ray Disc libraries. Multiplexed and redundant storage servers are mandatory for higher availability and throughput. Together they serve sharable volumes on Red Hat GFS2 cluster file systems. The LABCOM system is used not only for LHD but also for the QUEST and GAMMA10 experiments, creating a new Fusion Virtual Laboratory remote participation environment that others can access regardless of their location.

UR - http://www.scopus.com/inward/record.url?scp=77956705689&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77956705689&partnerID=8YFLogxK

U2 - 10.13182/FST10-A10830

DO - 10.13182/FST10-A10830

M3 - Article

AN - SCOPUS:77956705689

VL - 58

SP - 445

EP - 457

JO - Fusion Science and Technology

JF - Fusion Science and Technology

SN - 1536-1055

IS - 1

ER -