
An interview with Gurindar Sohi, recipient of the 2025 Computer Pioneer Award
Gurindar (Guri) Sohi, Vilas Research Professor, John P. Morgridge Professor, and E. David Cronon Professor of Computer Sciences, Computer Science Department, University of Wisconsin-Madison, Madison, Wis., U.S.A., has remained in the same office at the university since 1987, almost 40 years. He jokes that it even still has some of its original furnishings, like the carpet. But what he does not make light of is the opportunity Wisconsin provided him to ideate and produce the innovations in computer architecture that revolutionized the field.
“Wisconsin, in some people’s eyes, is in the middle of nowhere,” said Sohi. “While the university has always been a top-notch university, we are very far from the mainstream computer industry. As such, we were not constantly subject to what industry was doing. Normally when you are subject to that, you tend to think ‘this is the way, this is right.’ But because we were somewhat isolated from industry influence, we were able to ask the questions, ‘What if? What if you can do this with hardware?’ Being sufficiently removed from what was in vogue and having the luxury of time and freedom to think about what one could do was very beneficial.”
This year, the IEEE Computer Society celebrates Sohi’s achievements with the 2025 IEEE Computer Society Computer Pioneer Award in Honor of the Women of ENIAC, given for his contributions to the microarchitecture of instruction-level parallel processors and his impact on the computer architecture community. The IEEE Computer Society recently interviewed Sohi about his career, how the field has evolved, and what his work means today. The following summary captures Sohi’s reflections on the field and his advice for the future.
You have had a profound impact on the advancement of computer architecture. Where did it all begin?
I came to the United States from India in 1981. I had an opportunity to attend graduate school at the University of Illinois thanks to Professor Ed Davidson. At that time, I never dreamed I’d be able to do what I did. I barely knew anything about computers. But I was fortunate to be given an opportunity, an opportunity to learn from some amazing faculty at the University of Illinois, who grew my interest in computer architecture and made me realize that was what I wanted to do.
The faculty at the University of Illinois had a significant impact on your early love for computer architecture. Who else inspired and influenced you throughout your career?
When I joined the faculty of the University of Wisconsin-Madison in 1985, I was fortunate to meet James E. Smith, now a Professor Emeritus in the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison and an integral early figure in the field of computer architecture. Through him, I learned about supercomputer architecture and high-performance architecture techniques. He got me excited about those techniques, and I became fascinated with developing new techniques for executing instructions out of order in hardware, transparently to software.
But I was in my mid-twenties, and I had serious doubts about what I was doing. This was where Smith’s mentorship became very important. He understood that the approaches of that time, which relied heavily on doing things in software, were not going to be practical or even effective for a variety of reasons. He saw immense value in the work I was doing, and his encouragement was very reassuring.
How revolutionary was the work you were doing at that time?
In the mid-to-late 1980s, the prevailing thinking was that hardware needed to be simple and the heavy lifting should be left to software. People understood the need for instruction-level parallelism (doing more than one instruction at a time) and the need for speculative execution. But the prevailing view, shared by many of the leaders in computer architecture in academia as well as industry, was that the complexity of doing this should be the responsibility of software and that hardware should remain very simple. Building complex hardware was not considered practical, desirable, or even potentially beneficial.
We developed a model for instruction-level parallelism in hardware, in which instructions were executed out of order and speculatively, yet software was given the illusion that they were being executed sequentially and in order. Software didn’t have to do anything; it just ran faster.
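To make that idea concrete for readers, here is a minimal editorial sketch in Python. It is not Sohi's actual microarchitecture, and the instructions, registers, and latencies are invented for illustration; it simply shows independent instructions beginning execution as soon as their operands are ready (out of program order) while results are retired strictly in program order, preserving the sequential illusion described above.

```python
# Toy model only: issue-when-ready (out of order), retire in program order.
# Instruction names, registers, and latencies below are made up for illustration.

from dataclasses import dataclass

@dataclass
class Instr:
    text: str
    srcs: list      # registers this instruction reads
    dst: str        # register it writes
    latency: int    # cycles to produce its result

def simulate(program):
    ready_at = {}   # register -> cycle its value becomes available
    finish = []     # (instruction, cycle it finishes executing)

    # "Issue": each instruction starts as soon as its source operands are
    # ready, regardless of where it sits in program order.
    for ins in program:
        start = max([ready_at.get(r, 0) for r in ins.srcs], default=0)
        done = start + ins.latency
        ready_at[ins.dst] = done
        finish.append((ins, done))

    # "Retire": results are committed strictly in program order, which is
    # what preserves the illusion of sequential execution for software.
    retired_at = 0
    for ins, done in finish:
        retired_at = max(retired_at, done)
        print(f"{ins.text:<18} executes by cycle {done}, retires at cycle {retired_at}")

simulate([
    Instr("load r1 <- [a]",  srcs=[],     dst="r1", latency=4),
    Instr("add  r2 <- r1+1", srcs=["r1"], dst="r2", latency=1),
    Instr("load r3 <- [b]",  srcs=[],     dst="r3", latency=4),  # independent: overlaps the first load
    Instr("add  r4 <- r3+1", srcs=["r3"], dst="r4", latency=1),
])
```

In this toy run, the second load overlaps the first instead of waiting behind the dependent add, yet the printed retirement order matches program order, which is the point of the hardware approach.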
There were some who understood that having software do all the work for instruction-level parallelism and speculative execution would not lead to a practical, longer-term solution. They started taking an interest in our approach. Available transistor resources had increased, allowing more complexity to be implemented in hardware, and people started to see the commercial disadvantages of the software approach. People in some microprocessor companies took an interest, realizing that this was a way of building processors that would provide increased performance to their customers without the customers having to alter their software. Soon after the initial adopters, other microprocessor companies followed and the approach became ubiquitous. Once hardware-based speculative and out-of-order execution of instructions had become established, we developed techniques for other forms of speculative execution, such as memory dependence prediction, which have also become ubiquitous.
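As a similarly hedged illustration of the memory dependence prediction mentioned above (again a toy sketch, not the actual hardware mechanism): a load is allowed to run speculatively ahead of older stores unless that particular load has previously been caught depending on one, in which case it is handled conservatively.

```python
# Toy memory dependence predictor: speculate that a load is independent of
# older stores unless this particular load has caused a violation before.

class MemoryDependencePredictor:
    def __init__(self):
        self.offending_loads = set()   # PCs of loads that violated a dependence earlier

    def may_speculate(self, load_pc):
        # Predict "independent" unless this load has misbehaved in the past.
        return load_pc not in self.offending_loads

    def record_violation(self, load_pc):
        # Called when an older store turns out to write the load's address:
        # the speculative load is squashed, and the predictor remembers it.
        self.offending_loads.add(load_pc)

predictor = MemoryDependencePredictor()
print(predictor.may_speculate(0x400))   # True: first encounter, run ahead of older stores
predictor.record_violation(0x400)       # the store aliased after all; squash and learn
print(predictor.may_speculate(0x400))   # False: wait for older stores next time
```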
What challenges did you face?
The hard part is producing the ideas and finding people to believe in them. When we did our initial work, I was a young, insecure professor doing something that really no one else had done, and I didn’t have the confidence. I honestly did not think my work would have such an impact, especially listening to the leaders in the field and seeing where industry appeared to be going. In my gut, I felt it could have an impact, but the arguments against it were so strong, and just about everyone was going the other way.
But then some started asking questions, trying to understand the technique. At that point, I felt we had something, and that built more confidence. And slowly but surely we started to realize that this was the way general-purpose microprocessors would likely be architected in the future.
In hindsight, the biggest challenge was overcoming the prevalent mindset: the “keep hardware simple” approach was easy to understand, people were comfortable with it, and most of the leaders were championing it. Why consider something different and new that you haven’t seen before and don’t understand?
What do you see further down the line for computer architecture? What do you think the future holds?
Today, much of the conversation in computing is about AI: the insatiable demand for GPUs and other hardware, the construction of vast numbers of data centers, and the power generation needed to run them. Naturally, most people want to jump on this bandwagon. But let us not forget that there are millions of applications that run on general-purpose processors, in data centers, desktops, laptops, phones, and other devices. Improving general-purpose processors is going to continue to be important, both to provide greater capability to existing applications and to enable new ones. Further, energy-efficient computing, especially for AI, is going to be paramount, and computer architecture is at the center of this.
With your ground-breaking career as a backdrop, what advice do you have for future leaders in the field?
Of course, mastering the current techniques is very important, but to innovate, you need to keep an open mind. Never just accept that this is the way something is done. If you accept that, you naturally fall into an incremental mode of simply trying to improve the way something is currently accomplished. I would urge future leaders to sit back, let their minds run, and think. That is how innovation will likely come about.
The demand for computing is going to continue to increase and many new applications of computing will arise. I challenge future leaders to determine how best to produce computing devices and techniques to keep the computing revolution going. And you can get great satisfaction knowing that the products of your work are going to be used by billions of people every day.
Gurindar (Guri) Sohi is Vilas Research Professor, John P. Morgridge Professor, and E. David Cronon Professor of Computer Sciences, Computer Science Department, at the University of Wisconsin-Madison.