High-performance general solver for extremely large-scale semidefinite programming problems

Katsuki Fujisawa, Hitoshi Sato, Satoshi Matsuoka, Toshio Endo, Makoto Yamashita, Maho Nakata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

Semidefinite programming (SDP) is one of the most important classes of optimization problems today. It is relevant to a wide range of fields, such as combinatorial optimization, structural optimization, control theory, economics, quantum chemistry, sensor network localization, and data mining. The capability to solve extremely large-scale SDP problems will have a significant effect on the current and future applications of SDP. In 1995, Fujisawa et al. started the SDPA (SemiDefinite Programming Algorithm) Project, aimed at solving large-scale SDP problems with high numerical stability and accuracy. SDPA is one of the main codes for solving general SDPs. SDPARA is a parallel version of SDPA for multiple processors with distributed memory; it replaces the two major bottlenecks of SDPA (the generation of the Schur complement matrix and its Cholesky factorization) with parallel implementations. In particular, it has been successfully applied to combinatorial optimization and truss topology optimization. The new version of SDPARA (7.5.0-G), running on the TSUBAME 2.0 supercomputer at the Tokyo Institute of Technology, has solved the largest SDP problem reported to date (over 1.48 million constraints), setting a new world record. Our implementation also achieved 533 TFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs.
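The Cholesky factorization that SDPARA parallelizes can be illustrated in serial form. The sketch below is a hypothetical toy example (not the authors' code): it factors a small symmetric positive-definite matrix of the kind that arises as a Schur complement in interior-point SDP solvers.

```python
import numpy as np

# Toy stand-in for a Schur complement matrix: B @ B.T + I is
# symmetric positive definite, so a Cholesky factorization exists.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
S = B @ B.T + np.eye(5)

# Serial Cholesky factorization: S = L @ L.T with L lower triangular.
# SDPARA distributes this step (and the formation of S itself)
# across many CPU cores and GPUs.
L = np.linalg.cholesky(S)

# The factors reconstruct S to machine precision.
assert np.allclose(L @ L.T, S)
```

At the scales reported in the paper (over 1.48 million constraints), the Schur complement matrix is dense and of order equal to the number of constraints, which is why its formation and factorization dominate the running time and motivate the distributed CPU/GPU implementation.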

Original language: English
Title of host publication: 2012 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2012
DOIs: 10.1109/SC.2012.67
Publication status: Published - Dec 1 2012
Event: 2012 24th International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2012 - Salt Lake City, UT, United States
Duration: Nov 10 2012 - Nov 16 2012

Publication series

Name: International Conference for High Performance Computing, Networking, Storage and Analysis, SC
ISSN (Print): 2167-4329
ISSN (Electronic): 2167-4337


Fingerprint

  • Combinatorial optimization
  • Factorization
  • Program processors
  • Quantum chemistry
  • Structural optimization
  • Supercomputers
  • Convergence of numerical methods
  • Shape optimization
  • Control theory
  • Sensor networks
  • Data mining
  • Data storage equipment
  • Economics
  • Graphics processing unit

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Computer Science Applications
  • Hardware and Architecture
  • Software

Cite this

Fujisawa, K., Sato, H., Matsuoka, S., Endo, T., Yamashita, M., & Nakata, M. (2012). High-performance general solver for extremely large-scale semidefinite programming problems. In 2012 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2012 [6468521] (International Conference for High Performance Computing, Networking, Storage and Analysis, SC). https://doi.org/10.1109/SC.2012.67

@inproceedings{f4db5d6228ad44d7b9e2e7f86a3240ac,
title = "High-performance general solver for extremely large-scale semidefinite programming problems",
author = "Katsuki Fujisawa and Hitoshi Sato and Satoshi Matsuoka and Toshio Endo and Makoto Yamashita and Maho Nakata",
year = "2012",
month = "12",
day = "1",
doi = "10.1109/SC.2012.67",
language = "English",
isbn = "9781467308069",
series = "International Conference for High Performance Computing, Networking, Storage and Analysis, SC",
booktitle = "2012 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2012",

}
