TY - JOUR
T1 - ARCH-COMP 2020 Category Report: Falsification
T2 - 7th International Workshop on Applied Verification of Continuous and Hybrid Systems, ARCH 2020
AU - Ernst, Gidon
AU - Arcaini, Paolo
AU - Bennani, Ismail
AU - Donzé, Alexandre
AU - Fainekos, Georgios
AU - Frehse, Goran
AU - Mathesen, Logan
AU - Menghi, Claudio
AU - Pedrielli, Giulia
AU - Pouzet, Marc
AU - Yaghoubi, Shakiba
AU - Yamagata, Yoriyuki
AU - Zhang, Zhenya
N1 - Funding Information:
P. Arcaini and Z. Zhang are supported by the ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603), JST (funder ID: 10.13039/501100009024). The report on falsify is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). The ASU team (S-TaLiRo) was partially supported by NSF CNS 1350420, NSF CMMI 1829238, NSF IIP-1361926, and the NSF I/UCRC Center for Embedded Systems. C. Menghi is supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant No. 694277). zlscheck was funded by the ModeliScale Inria Project Lab.
Publisher Copyright:
© 2020 EasyChair. All rights reserved.
PY - 2020
Y1 - 2020
N2 - This report presents the results from the 2020 friendly competition in the ARCH workshop for the falsification of temporal logic specifications over Cyber-Physical Systems. We briefly describe the competition settings, which have been inherited from the previous year, give background on the participating teams and tools, and discuss the selected benchmarks. The benchmarks are available on the ARCH website, as well as in the competition's gitlab repository. In comparison to 2019, we have two new participating tools with novel approaches, and the results show a clear improvement over previous performances on some benchmarks.
AB - This report presents the results from the 2020 friendly competition in the ARCH workshop for the falsification of temporal logic specifications over Cyber-Physical Systems. We briefly describe the competition settings, which have been inherited from the previous year, give background on the participating teams and tools, and discuss the selected benchmarks. The benchmarks are available on the ARCH website, as well as in the competition's gitlab repository. In comparison to 2019, we have two new participating tools with novel approaches, and the results show a clear improvement over previous performances on some benchmarks.
UR - http://www.scopus.com/inward/record.url?scp=85121575509&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85121575509&partnerID=8YFLogxK
U2 - 10.29007/trr1
DO - 10.29007/trr1
M3 - Conference article
AN - SCOPUS:85121575509
SN - 2398-7340
VL - 74
SP - 140
EP - 152
JO - EPiC Series in Computing
JF - EPiC Series in Computing
Y2 - 12 July 2020 through 12 July 2020
ER -