Semiconductor Engineering
Does RISC-V processor verification provide common ground to develop a new verification methodology, and will that naturally lead to new and potentially open tools?
Brian Bailey, Semiconductor Engineering
April 27, 2021
Experts at the Table: Semiconductor Engineering sat down to discuss what open source verification means today and what it should evolve into, with Jean-Marie Brunet, senior director for the Emulation Division at Siemens EDA; Ashish Darbari, CEO of Axiomise; Simon Davidmann, CEO of Imperas Software; Serge Leef, program manager in the Microsystems Technology Office at DARPA; Tao Liu, staff hardware engineer in the Google chip implementation and integration team; and Bipul Talukdar, director of applications engineering for SmartDV. This is an adaptation of a panel session held at DVCon. Part one can be found here.
SE: Open source enables collaboration. Until now, no two companies have used the same verification methodology. RISC-V may be the first time that enough people are working on a common problem to devise a shared solution in the verification space.
Davidmann: If we focus on RISC-V, the challenge is that for the past 50 years, processors were built very secretly. You never shared how you designed them or how you verified them. Suddenly, here we are in the last five years, as everyone gets on the RISC-V bandwagon: everybody has become a computer architect, everybody's got an architecture license, but they don't have the verification methodology or understanding. There are proprietary solutions hidden away in the big processor vendors. We are building methodologies. We're trying to work with people on standard interfaces so that reference models can be used in a standard way with test benches. There is a dramatic change that's come about because of the open nature of RISC-V. It's not just about building the tools and the methodologies. It's about supporting, maintaining, and developing them. What the EDA industry has achieved over the past 30 years is absolutely phenomenal. I don't buy that open source is the solution for verification. What we need is better verification. We need to move forward, and it's great that people are looking at things like Python and other solutions. AI is going to help us a lot in verification. We just don't know it yet. And there's a lot to be done. It's not the cost of the software. The cost is in your people, in your brains, and in doing smart things with them.
Talukdar: I agree. There is really no open-source methodology out there. There is open-source design, but there is no open-source verification methodology. So everybody is doing their own thing, building the nuts and bolts for open-source verification, and they are charging money for that, of course. But there is no methodology in general. Everybody has their own way of verifying open-source designs. That's where the industry needs to come together and build a methodology. For simulation-based verification, for example, people built UVM. Now, for open-source designs, what is unified about verification? I think that's the problem to solve, all together.
Darbari: We should not overlook the fact that we have SAIL. It is an open-source language, adopted by the RISC-V community for formally specifying the instruction set architecture and its behavior. It is certainly a step in the right direction. I don't believe it adequately addresses all of the concerns involved in verifying RTL, but it does provide a specification document that is formally specified and formally investigated. Google has been leading from the front, helping companies like Imperas improve their solutions. In the formal space, we have vendors that are building tools, and then there are companies trying to provide a world of tools and services free from vendor lock-in. I also take offense at the notion that EDA has not innovated. In 1988, you couldn't formally verify a 1.1 billion gate design.
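To make the idea concrete: an executable specification pins down exactly what each instruction must do, so any testbench can query it as a golden reference. SAIL has its own specification language; what follows is only a hedged Python analogue with illustrative names, not SAIL syntax and not the actual RISC-V model.

```python
# Illustrative Python analogue of an executable ISA specification.
# SAIL uses its own language; this sketch only shows the idea of a
# formally precise, executable definition a testbench can check against.

XLEN = 32
MASK = (1 << XLEN) - 1

def sign_extend(value: int, bits: int) -> int:
    """Sign-extend a `bits`-wide immediate to XLEN bits."""
    sign = 1 << (bits - 1)
    return ((value & (sign - 1)) - (value & sign)) & MASK

def execute_addi(regs: list, rd: int, rs1: int, imm12: int) -> None:
    """ADDI semantics: rd = rs1 + sign_extend(imm12); x0 stays hardwired to 0."""
    result = (regs[rs1] + sign_extend(imm12, 12)) & MASK
    if rd != 0:
        regs[rd] = result
```

A step-and-compare testbench can execute the same instruction on the RTL and on a model like this, flagging any divergence in architectural state.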
Leef: I feel compelled to respond. In 1988, it took 30 machine instructions to evaluate a single event and click the clock in an event-driven simulator. Today in 2021, it takes 30 machine instructions to evaluate an event and click the clock. In 1988, early users of Synopsys logic synthesis shifted to RTL. Today in 2021, RTL-based design is still the primary methodology. The abstraction has not moved up. Simulation, which is the topic of this panel, has not measurably improved. There are some open-source technologies, like Verilator and QEMU, that the community finds adequate. Methodology is where the differentiation should lie among different organizations, and, pardon my disrespect for formal, it is not a widely used technology.
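For readers outside the simulator world, the fixed per-event cost Leef cites comes from the kernel bookkeeping every event-driven simulator performs: pop the next event from a time-ordered queue, advance time, run the callback, and schedule its consequences. A minimal sketch of that inner loop, with invented names:

```python
import heapq
from itertools import count

class EventSim:
    """Minimal event-driven simulation kernel. Every signal change is an
    event pulled from a time-ordered queue; this queue management is the
    fixed per-event overhead that has not changed in decades."""

    def __init__(self):
        self.now = 0
        self._queue = []      # entries of (time, seq, callback)
        self._seq = count()   # tie-breaker for events at the same time

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, next(self._seq), callback))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

# A free-running clock: each edge is itself an event that reschedules the next.
def make_clock(sim, period, on_edge):
    def tick():
        on_edge(sim.now)
        sim.schedule(period // 2, tick)
    sim.schedule(0, tick)
```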
Talukdar: How can we build a closed-loop verification methodology for formal? How do we quantify the verification and close the loop? Think about the verification flow when somebody makes modifications to a core. In addition, the open-source core itself is being updated, and with that kind of continuous update we need a quantitative measure to close the loop. How do you do that? Every time you update something and look at the new coverage numbers, you get some sense of what you did not cover and where you need to write more tests. You need the capability to do that kind of thing. Formal is a technology that analyzes the whole design exhaustively and can tell you something in the quickest possible way. Having formal coverage can help us bring in a closed-loop verification methodology.
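The loop Talukdar describes reduces to a simple pattern: after every core update, re-collect coverage, diff it against the previous run, and direct new tests or formal properties at whatever regressed or was never covered. A hedged sketch follows; none of the names correspond to a real tool API, and the coverage database is idealized as a dict of coverage-point names to hit flags.

```python
# Hypothetical closed-loop sketch. `write_test` stands in for whatever
# process adds a directed test or formal property; nothing here is a
# real simulator or formal-tool API.

def coverage_delta(previous: dict, current: dict) -> dict:
    """Compare two coverage runs keyed by coverage-point name."""
    return {
        "regressed": [p for p, hit in current.items()
                      if previous.get(p) and not hit],
        "still_uncovered": [p for p, hit in current.items() if not hit],
        "newly_covered": [p for p, hit in current.items()
                          if hit and not previous.get(p)],
    }

def close_the_loop(previous, current, write_test):
    """After each core update, aim effort at whatever the delta exposes."""
    delta = coverage_delta(previous, current)
    for point in delta["regressed"] + delta["still_uncovered"]:
        write_test(point)
    return delta
```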
Darbari: We have a formal verification test bench that can exhaustively verify pipelined processors within hours. If you are building microprocessors today and you are not using formal, then something is going to go wrong. It would be hard for simulation alone to find all the bugs.
Leef: An objective metric of success for formal verification is business. How much revenue did all the formal verification tools generate last year in the context of a $9 billion EDA market? I assert the fraction is tiny, which to me means it’s not widely adopted.
Brunet: Without EDA you don't have the advanced semiconductors of today. A lot of people were talking about the collapse of Moore's Law, but OPC saved the mask-making process, and we are doing 7nm, 5nm, and 3nm today. You have designs of around 10 billion gates that are verified with emulation. There have been key developments that were done by the EDA community. There is innovation in EDA, and it continues, maybe less visibly than a couple of years ago because there is more maturity within the big three companies, but there is a tremendous amount of innovation.
SE: Going back to open source, why do we think we need open source? What is the main driver for the industry to consider open source?
Brunet: It is not about cost reduction. It's about exchange, it's about interoperability, and it's about building a community to be able to exchange more information in a controlled fashion. It's interesting that we have representation from DARPA in the open-source community. When I think open source, I don't think DARPA.
Davidmann: I have two comments. The first is that DARPA probably has a bigger budget than everybody on this panel. They have a huge budget, and yet you're the one saying you want everything for free. And the second is that you're trying to promote open source. I would have thought the American government, and the DoD, wouldn't want the world to have the latest, greatest technology that American companies are building, or that you are funding. I thought you'd want to be secretive and build proprietary solutions, not open source. The key thing is that it is about freedom. People want the source of something so that they can change it and make it work for what they want to do.
Liu: I want to go back to the methodology alignment opportunity, because this is a very challenging problem. What's the difference between an open-source verification project and your internal project? The difference is that you're going to face people from 100 different companies, with different requests. We should be thinking about building an ecosystem, because unifying the methodology within a single company is already challenging enough. When people come from 100 different companies, it's almost impossible to do that alignment. The motivation should be driving an ecosystem that allows people with different methodologies to adapt your solution. Then we'll create more interest, and less pain for them to integrate. When we were considering making this open source, people asked what we would get out of it. We get a lot of contributions from the community: bug reports, code contributions, feature requests, all of which I consider very valuable contributions. If you are making an ecosystem, you benefit from a very large team where the whole community is giving you feedback. That's the beauty of open source. For open-source hardware development and verification, we do not have a good collaboration culture compared with the software world. We need to build that culture. When everybody can contribute to this work, we can make it better. And you actually get more out of it, compared with what you can achieve with two or three engineers inside a company.
Brunet: Benchmarks for specific vertical markets are considered open source. If you look at machine learning and AI, there is MLPerf. A lot of reference benchmarks are open, and it works well. It’s enabling freedom in the ecosystem to compare. The difference is really the design implementation.
Darbari: When donating code, be it an instruction generator or a full test bench, you can do it if that's not your bread and butter. But if you're making money by actually commercializing your offering, as the EDA guys are, then it's different. We are trying to sell tools, and our livelihood depends on it. We literally cannot donate them for free. We would like to engage with the community and work with them in the spirit of transparency, openness, and collaboration. We are interested in exploring the coverage story collaboratively for formal. But if it's your only business, cost does come into the equation. It's not just collaboration.
Davidmann: There are a lot of companies wanting to get access to our technology. We are a member of several different industry organizations. The OpenHW Group has a mission to build open-source RISC-V processors. They want them to be commercial-quality cores, and the way that's done is by using best-in-class verification solutions. They use Verilog simulators and SystemVerilog/UVM. They use the Google generator, they use our reference model, and they will use any technology they can to build the best-quality open-source cores. It has been said that Verilator and QEMU are adequate. When you're building high quality, adequate is completely unacceptable. You need best-in-class. Nobody builds a state-of-the-art SoC using rubbish tools. They use the absolute best. That's why it's a $9B industry: people know that to get their chip to work, they need the best technology. When you say a tool is adequate and free, universities use adequate tools because they don't have the money to buy all the best tools. But universities aren't building quality products. No one gets a Ph.D. for building a robust ecosystem around a bit of RTL they've written. They are exploring new architectures and extensions. When it comes to verification, you have to use the best that's available. Whatever it costs, you want the best-quality cores, verified to the same level of quality that you'd find in a big commercial company. Open source gives you freedom. That's fundamental. I don't think it saves you money.
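The flow Davidmann outlines, a generator plus a reference model, typically closes with step-and-compare: run the same generated program on the RTL and on the reference model, then diff the retired-instruction traces. A minimal sketch, assuming a shared CSV trace format whose column names (pc, insn, rd, rd_value) are invented for illustration:

```python
import csv

def compare_traces(rtl_trace_csv: str, iss_trace_csv: str) -> list:
    """Diff retired-instruction traces from an RTL simulation and an ISS
    reference model. Column names are illustrative, not any tool's actual
    format. zip() stops at the shorter trace; a real flow would also flag
    a length mismatch as a hang or early termination."""
    mismatches = []
    with open(rtl_trace_csv) as rtl_f, open(iss_trace_csv) as iss_f:
        rows = zip(csv.DictReader(rtl_f), csv.DictReader(iss_f))
        for i, (rtl, iss) in enumerate(rows):
            for field in ("pc", "insn", "rd", "rd_value"):
                if rtl[field] != iss[field]:
                    mismatches.append(
                        f"retire {i}: {field} RTL={rtl[field]} ISS={iss[field]}")
    return mismatches
```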
SE: There is an interplay between tools and methodologies. Are we at the point where developing new methodologies involves extending the existing tools or developing new ones? Do we have the opportunity to develop some new methodologies that naturally would lead to the creation of some new tools?
Leef: When I was in EDA, we had a large semiconductor company as a customer that had tens of thousands of servers running simulation around the clock. After every change they were re-verifying their chips. The way they did this was to slice up the test suites. It was the same design being simulated on 10,000 nodes, but the test suites that were running were different. To support this operation, the company had a massive data center and employed close to 100 engineers to support the methodology. That involved cutting things up, stitching them together, and presenting the results to the design engineers in the morning. This is an obvious opportunity for automation. The cloud has effectively infinite compute and storage, and there's no reason you could not automate this. That is an example of a methodology being turned into a tool in the cloud context. Also, not everybody does the state-of-the-art 10 billion-transistor chip. Within the U.S. defense community, chips like that do not exist. There are research labs doing federally funded research, and none of them is approaching the type of complexity you are referring to. Those represent an underserved market. If you really want to find revenue, you should be looking at how to please those people, as opposed to pleasing the tier-one semiconductor companies.
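The manual "cutting up and stitching together" Leef describes is largely a scheduling problem: split the regression suite so every node finishes at about the same time. A hedged sketch of that core, with dispatch to actual cloud workers left abstract:

```python
# Hedged sketch of regression sharding: distribute tests across N workers,
# balanced by historical runtime, using the classic longest-processing-time
# greedy heuristic. Cloud dispatch and result collection are left abstract.

def shard_tests(tests: dict, num_workers: int) -> list:
    """`tests` maps test name -> expected runtime in seconds.
    Returns one list of test names per worker."""
    shards = [[] for _ in range(num_workers)]
    loads = [0.0] * num_workers
    # Place the longest tests first, always onto the least-loaded worker.
    for name, runtime in sorted(tests.items(), key=lambda kv: -kv[1]):
        lightest = loads.index(min(loads))
        shards[lightest].append(name)
        loads[lightest] += runtime
    return shards

# Example: three workers, each ending up with roughly equal total runtime.
plan = shard_tests({"boot": 3600, "fpu": 1800, "dma": 1750, "irq": 900}, 3)
```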
Darbari: We did exactly what you said in the question. We built a brand-new coverage solution that runs on top of any formal verification tool. It gives you evidence, in the form of scenarios, that instructions are actually being exercised. It is an extension, and it builds on the technology supplied by the formal verification tool vendors. We are certainly trying to make contributions that way.
Talukdar: There can be open tools such as Verilator, but you still need the nuts and bolts for them. You still need verification IP. There are multiple components to the total solution. The ecosystem needs to evolve, and building a methodology for processor verification will have multiple benefits, like opening up tools, building the nuts and bolts for those tools, and finding ways to monetize them. It is a hard problem, but one that needs to be explored as well.
Liu: The opportunity for new methodologies is important. Constrained random has been around for 15 years or more. UVM/SystemVerilog is the standard solution in the simulation world. But things are changing, and when you do open-source work, you often get surprised. I was surprised when somebody used my platform to do machine-learning-based verification. There are people looking at Python-based constrained random. There is Chisel-based design verification. I think we need to be open-minded about new design and verification methodologies and languages. This is territory that traditional commercial EDA tools have not reached, so there is an opportunity there. Looking ahead, we should jump beyond constrained random and look for something else.
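As one hedged illustration of what Python-based constrained random can look like: a weighted opcode table plus a register constraint, expressed directly in the host language rather than in SystemVerilog. The weights and the reserved-register constraint are invented for the example, not taken from any real generator.

```python
import random

# Invented weight table biasing generation toward common arithmetic ops.
OPCODES = {"add": 10, "sub": 5, "and": 3, "or": 3, "xor": 2}

def gen_instruction(reserved=frozenset({0, 2})):
    """Pick a weighted-random opcode; constrain rd away from reserved
    registers (here x0 and the stack pointer, purely as an example)."""
    op = random.choices(list(OPCODES), weights=list(OPCODES.values()))[0]
    rd = random.choice([r for r in range(32) if r not in reserved])
    rs1, rs2 = random.randrange(32), random.randrange(32)
    return f"{op} x{rd}, x{rs1}, x{rs2}"

# A short random program, ready to assemble and run on RTL and a reference model.
program = [gen_instruction() for _ in range(20)]
```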
Davidmann: I recently watched a presentation about the use of Python as a UVM verification solution sitting on top of an HDL simulator. That's making use of UVM being open. Python is open. The big EDA vendors are absolutely looking into this, and they are trying to find the best solutions for their customers. Then there is C++ as a modeling technology. You don't have to buy a C++ compiler. It's the same with SystemC. Big EDA does not have its head in the sand. It is looking at the way the future is changing and trying to find good solutions.
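One open-source embodiment of this idea is cocotb, which drives an HDL simulator from Python coroutines. A minimal sketch in the style of cocotb 1.x, assuming a registered adder DUT whose port names (clk, a, b, sum) are inventions for illustration:

```python
# Sketch of a Python testbench in the style of cocotb; the DUT and its
# ports are assumed for the example, and `sum` is assumed wide enough
# to hold a + b without overflow.

import random
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def adder_random_test(dut):
    # Start a 10ns clock in the background.
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    for _ in range(100):
        a, b = random.randrange(256), random.randrange(256)
        dut.a.value = a
        dut.b.value = b
        await RisingEdge(dut.clk)
        await RisingEdge(dut.clk)  # one extra cycle for the registered output
        assert dut.sum.value == a + b, f"{a}+{b} != {dut.sum.value}"
```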