With the slowing of technology scaling, the only known way to further improve computer system performance under energy constraints is to employ hardware accelerators. Already today, many chips in mobile, edge, and cloud computing concurrently employ multiple accelerators in what we call accelerator-level parallelism (ALP). For the benefits of ALP to spread to computer systems more broadly, we herein charge the community to develop better "best practices" for: targeting accelerators, managing accelerator concurrency, choreographing inter-accelerator communication, and productively programming accelerators.
The Edge Computing Lab resides in the John A. Paulson School of Engineering and Applied Sciences at Harvard University under the direct supervision of Prof. Vijay Janapa Reddi.
Edge computing is a computing paradigm in which computation is performed mostly on power-constrained devices with limited processing capability, or in on-premises datacenters. Edge systems depend only loosely on large-scale cloud computing resources, drawing on them as needed (i.e., via computational offloading). In the edge computing context, our interests span from low-power Internet of Things (IoT) devices to mobile computing devices like smartphones, all the way up to high-performance edge servers for application domains like autonomous cars and robotics.
We are a team of computer system architects who specialize in edge computing platforms. Our work spans both hardware and software to solve fundamental computing challenges in the context of edge computing and its various endpoints. Our particular strength is understanding the interactions across the circuit, architecture, and software layers. Our philosophy is to be driven by the problems we want to solve, rather than by the medium (hardware or software) through which we can solve them. Hence, we are limited only by our ability to creatively address the problems, not by our a priori knowledge of a particular problem domain.