%0 Conference Paper
%B 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA)
%D 2020
%T Asymmetric Resilience: Exploiting Task-Level Idempotency for Transient Error Recovery in Accelerator-Based Systems
%A J. Leng
%A A. Buyuktosunoglu
%A R. Bertran
%A P. Bose
%A Q. Chen
%A M. Guo
%A V. Janapa Reddi
%K Acceleration
%K accelerator errors
%K accelerator generality
%K accelerator level
%K accelerator systems
%K accelerator-based systems
%K asymmetric resilience
%K black box third-party IP components
%K checkpointing
%K Computer architecture
%K discrete systems
%K embedded systems
%K error-free performance
%K multiprocessing systems
%K power aware computing
%K Reliability
%K reliability management
%K resilience
%K Runtime
%K soft-error related faults
%K system recovery
%K system reliability
%K Task analysis
%K task-level idempotency
%K Transient analysis
%K transient error recovery
%K voltage-noise
%X Accelerators make it hard to build systems that are resilient against transient errors such as voltage noise and soft errors. Architects integrate accelerators into the system as black-box third-party IP components, so a fault in one or more accelerators may threaten the system's reliability if there are no established failure semantics for how an error propagates from the accelerator to the main CPU. Existing solutions that assure system reliability sacrifice accelerator generality and efficiency, and they incur significant overhead even in the absence of errors. To overcome these drawbacks, we examine reliability management of accelerator systems via hardware-software co-design, coupling an efficient architecture design with compiler and runtime support to cope with transient errors. We introduce asymmetric resilience, which architects reliability at the system level, centered around a hardened CPU, rather than at the accelerator level. At runtime, the system exploits task-level idempotency to contain accelerator errors and uses memory protection, rather than checkpointing, to mitigate overheads. We also leverage the fact that errors rarely occur in systems and exploit the trade-off between error-recovery performance and error-free performance to enhance system efficiency. Using GPUs, which are at the forefront of accelerator systems, we demonstrate how our system architecture manages reliability in both integrated and discrete systems under voltage-noise and soft-error related faults, leading to extremely low overhead (less than 1%) and substantial gains (20% energy savings on average).
%P 44-57
%8 Feb
%G eng
%R 10.1109/HPCA47549.2020.00014

%0 Journal Article
%J IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
%D 2020
%T Predictive Guardbanding: Program-driven Timing Margin Reduction for GPUs
%A J. Leng
%A A. Buyuktosunoglu
%A R. Bertran
%A P. Bose
%A Y. Zu
%A V. J. Reddi
%K GPU
%K Graphics processing units
%K Kernel
%K Multi-core processors
%K Power demand
%K Power measurement
%K PVT variation
%K single instruction and multiple data
%K temperature measurement
%K Voltage control
%K voltage guardband
%K Voltage measurement
%X Energy efficiency of GPU architectures has emerged as an essential aspect of computer system design. In this paper, we explore the energy benefits of reducing the GPU chip's voltage to the safe limit, i.e., the Vmin point, using predictive software techniques. We perform such a study on several commercial off-the-shelf GPU cards. We find that there exists about a 20% voltage guardband on those GPUs spanning two architectural generations, which, if "eliminated" entirely, can result in up to 25% energy savings on one of the studied GPU cards. Our measurement results unveil program-dependent Vmin behavior across the studied applications, and the exact improvement magnitude depends on the program's available guardband. We make fundamental observations about the program-dependent Vmin behavior. We experimentally determine that voltage noise has a more substantial impact on Vmin than process and temperature variation, and that activities during kernel execution cause large voltage droops. From these findings, we show how to use a kernel's microarchitectural performance counters to predict its Vmin value accurately. The average and maximum prediction errors are 0.5% and 3%, respectively. Accurate Vmin prediction opens up new possibilities for a cross-layer dynamic guardbanding scheme for GPUs, in which software predicts and manages the voltage guardband, while a hardware safety-net mechanism ensures functional correctness.
%P 1-1
%G eng
%R 10.1109/TCAD.2020.2992684