TY - GEN
T1 - WiGig Wireless Sensor Selection Using Sophisticated Multi Armed Bandit Schemes
AU - Hashima, Sherief
AU - Mohamed, Ehab Mahmoud
AU - Hatano, Kohei
AU - Takimoto, Eiji
N1 - Funding Information:
This work was supported by JSPS KAKENHI Grant Numbers JP19H04174 and JP21K14162.
Publisher Copyright:
© 2021 IPSJ.
PY - 2021
Y1 - 2021
N2 - The broadband merits of wireless gigabit (WiGig) technology motivate its extensive use in future wireless sensor networks (WSNs) and, more generally, in Internet of Things (IoT) networks. A WiGig sensor should select the best nearby sensor for relaying its collected information, maximizing its achievable throughput while minimizing energy consumption. However, this nearby best sensor selection (NBSS) problem requires intelligent solutions that mitigate the resulting beamforming training (BT) overhead. In this paper, with the help of online learning, the NBSS problem is modeled as a stochastic multi-armed bandit (MAB), where the nearby sensor nodes are the arms and the reward is the throughput received by the player, i.e., the source sensor node. Hence, sophisticated energy-aware (EA) MAB schemes, such as the perturbed history exploration (PHE) and randomized upper confidence bound (RUCB) algorithms, are proposed to handle the problem in realistic scenarios by updating the residual energies of the nearby sensors during the online selection process. Simulation results demonstrate the efficiency of the proposed NBSS schemes over benchmark selection methods in terms of average throughput and energy efficiency.
AB - The broadband merits of wireless gigabit (WiGig) technology motivate its extensive use in future wireless sensor networks (WSNs) and, more generally, in Internet of Things (IoT) networks. A WiGig sensor should select the best nearby sensor for relaying its collected information, maximizing its achievable throughput while minimizing energy consumption. However, this nearby best sensor selection (NBSS) problem requires intelligent solutions that mitigate the resulting beamforming training (BT) overhead. In this paper, with the help of online learning, the NBSS problem is modeled as a stochastic multi-armed bandit (MAB), where the nearby sensor nodes are the arms and the reward is the throughput received by the player, i.e., the source sensor node. Hence, sophisticated energy-aware (EA) MAB schemes, such as the perturbed history exploration (PHE) and randomized upper confidence bound (RUCB) algorithms, are proposed to handle the problem in realistic scenarios by updating the residual energies of the nearby sensors during the online selection process. Simulation results demonstrate the efficiency of the proposed NBSS schemes over benchmark selection methods in terms of average throughput and energy efficiency.
UR - http://www.scopus.com/inward/record.url?scp=85123925528&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123925528&partnerID=8YFLogxK
U2 - 10.23919/ICMU50196.2021.9638849
DO - 10.23919/ICMU50196.2021.9638849
M3 - Conference contribution
AN - SCOPUS:85123925528
T3 - 13th International Conference on Mobile Computing and Ubiquitous Network, ICMU 2021
BT - 13th International Conference on Mobile Computing and Ubiquitous Network, ICMU 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 13th International Conference on Mobile Computing and Ubiquitous Network, ICMU 2021
Y2 - 17 November 2021 through 19 November 2021
ER -