Research Interests
Computer architecture, especially instruction-level parallel (ILP) processing
Compiling for ILP machines
Memory systems
My main research objective is to develop the highest-performance processor
of a given technology generation,
along with supporting memory systems,
so as to reduce the completion time of a single computation task.
Most of my research focuses on achieving this goal
using instruction-level parallel (ILP) processing techniques
that take advantage of rapidly changing technologies.
These techniques include (i) the use of high-level control flow prediction
and multiple flows of control to deal with control dependencies,
(ii) the use of data value prediction and multiple instruction issue
to deal with data dependencies,
and (iii) the use of optimizing compilers to make the best use of
different microarchitecture features.
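To illustrate technique (ii), the sketch below shows a last-value predictor, one common form of data value prediction. The class and names here are illustrative assumptions for exposition, not taken from any particular design or simulator.

```python
class LastValuePredictor:
    """A minimal last-value predictor sketch: predict that an
    instruction will produce the same result it produced last time.
    A correct prediction lets data-dependent instructions issue
    speculatively instead of waiting for the actual value."""

    def __init__(self):
        # Maps an instruction's program counter to its last result.
        self.table = {}

    def predict(self, pc):
        # Returns the predicted value, or None if no history exists.
        return self.table.get(pc)

    def update(self, pc, actual):
        # Record the actual result; report whether the prior
        # prediction was correct (a mispredict forces recovery).
        correct = self.table.get(pc) == actual
        self.table[pc] = actual
        return correct
```

In real machines this table is a fixed-size hardware structure indexed by PC bits; the dictionary above only conveys the idea.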
One aspect of my research has concentrated on decentralizing different parts
of ILP processors to improve performance, scalability, and fault tolerance.
This includes decentralizing
(i) the top level of the memory hierarchy, with the use of multi-banked,
non-blocking caches, and
(ii) the instruction scheduling hardware, with the use of
the multiscalar execution model
(which I developed as part of my PhD work,
and which has inspired over half a dozen major multi-threading research
projects worldwide)
and the PEWs execution model.
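The multi-banked cache idea can be sketched with a simple address-interleaving function. The bank count and line size below are assumed example parameters, not drawn from any specific design.

```python
NUM_BANKS = 4    # assumed bank count (a power of two)
LINE_SIZE = 64   # assumed cache-line size in bytes

def bank_of(addr):
    """Select a bank from the low-order bits of the cache-line
    address, so consecutive lines map to different banks and
    independent accesses can proceed in parallel."""
    return (addr // LINE_SIZE) % NUM_BANKS
```

With non-blocking banks, a miss in one bank need not stall accesses destined for the others, which is what makes the decentralized top-level cache attractive for ILP processors.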
My current research addresses each of these three issues and serves as
an integral part of a long-term, comprehensive research program to develop
scalable, decentralized, high-performance ILP processors.