Dynamic Assertion-Based Verification for SystemC

by

Deian Tabakov

A Thesis Submitted
in Partial Fulfillment of the
Requirements for the Degree

Doctor of Philosophy

Approved, Thesis Committee:

Moshe Y. Vardi, Chair
Karen Ostrum George Professor
in Computational Engineering

Kartik Mohanram
Assistant Professor of
Electrical and Computer Engineering
and Computer Science

Luay Nakhleh
Associate Professor of Computer Science
and Biochemistry and Cell Biology

Houston, Texas
December, 2010

ABSTRACT

Dynamic Assertion-Based Verification for SystemC

by Deian Tabakov

SystemC has emerged as a de facto standard modeling language for hardware and embedded systems. However, the current standard does not provide support for temporal specifications. Specifically, SystemC lacks a mechanism for sampling the state of the model at different temporal resolutions, for observing the internal state of modules, and for integrating monitors efficiently into the model's execution. This work presents a novel framework for specifying and efficiently monitoring temporal assertions of SystemC models that removes these restrictions.
This work introduces new specification language primitives that 1) expose the inner state of the SystemC kernel in a principled way, 2) allow for very fine control over the temporal resolution, and 3) allow sampling at arbitrary locations in the user code. An efficient modular monitoring framework presented here allows the integration of monitors into the execution of the model, while at the same time incurring low overhead and allowing for easy adoption. Instrumentation of the user code is automated using Aspect-Oriented Programming techniques, thereby allowing the integration of user-code-level sample points into the monitoring framework.
While most related approaches optimize the size of the monitors, this work focuses on minimizing the runtime overhead of the monitors. Different encoding configurations are identified and evaluated empirically using monitors synthesized from a large benchmark of random and pattern temporal specifications.
The framework and approaches described in this dissertation allow the adoption of assertion-based verification for SystemC models written at various levels of abstraction, from system level to register-transfer level. An advantage of this work is that many existing specification languages can be adapted to use the specification primitives described here, and the framework can easily be integrated into existing implementations of SystemC.
Acknowledgments

I would like to extend my sincere gratitude to my adviser, Professor Moshe Y. Vardi.
His guidance and support have been invaluable and his belief in me unfaltering. I cannot ever hope to repay the time and effort that he devoted to my growth both academically and as a person; I can only hope that I will have the opportunity to pay this debt forward to a colleague or a student.
I would like to thank Dr. Kartik Mohanram and Dr. Luay K. Nakhleh for serving as members of my dissertation committee. Their questions, comments, and suggestions have helped me distill the arguments and clarify the exposition. I appreciate the time and effort that they put into reading and evaluating my dissertation and presentations.
I also thank Eli Singerman and Gila Kamhi for hosting me during my internship with Intel's Design Technology and Solutions Division. Their insightful comments and our many discussions about this work have helped me understand the specification needs of the actual practitioners of dynamic verification. I would also like to acknowledge that this work was supported in part by a grant from the Intel Corporation.
I owe thanks to all past and present students in the Verification and Algorithms research groups who helped me find my bearings in the department. I have had many stimulating discussions with Seth Fogarty, Sumit Nain, Guoqiang Pan, Ben McMahan, and Kristin Y. Rozier.
I owe special thanks to Ioan Șucan for his help untangling thorny C++ issues and for the countless hours we have spent in discussion and exchange of ideas.
Last but not least, I would like to thank my parents, Evelina and Todor, and my wife Linh, for their support and encouragement.
1.1 Design crisis

One logarithmic graph with four data points on a straight line led to the following observation in a 1965 issue of Electronics magazine:
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year [...] Certainly over the short term this rate can be expected to continue, if not to increase. [Moo65]

The author of these now famous words was Gordon Moore, a chemist and physicist, and co-founder of Intel Corp. Ten years later, Moore updated his prediction, stating that "the new slope might approximate a doubling every two years, rather than every year" [Moo75]. In the decades since, Moore's Law has been widely misunderstood and often misquoted; it has been invoked when discussing the capacities of dynamic RAM modules and hard disk drives, the computational power of microchips, and the number of megapixels in consumer digital cameras. In a 1995 speech, Gordon Moore looked back over the 30 years since he first stated his famous "law" and joked:

The definition of "Moore's Law" has come to refer to almost anything related to the semiconductor industry that, when plotted on semi-log paper, approximates a straight line. [Moo95]

Part of the reason for the popularity of Moore's Law is its uncanny accuracy.
Indeed, if we plot the number of transistors on commodity processors versus time, starting with the early designs in 1960 and going through the latest microprocessor from the Itanium family, the curve is clearly exponential. However, as our ability to put more transistors on a die has increased, so has the complexity of the designs.
Addressing this growing complexity has pushed designs toward ever higher levels of abstraction.
In the 1960s and 1970s it was possible to design circuits at the transistor level. As the circuits grew bigger, designers started thinking in terms of gates. In the 1990s, tools for automatic synthesis of gate-level designs allowed the practitioners to adopt an even higher level of abstraction: Register Transfer Level (RTL). This move was widely acclaimed as a revolution of the design process [Fos08].
However, in the 2000s a new class of devices became commonplace, for example cell phones, network routers, and smartcards, collectively called systems-on-chip (SoC). An SoC often consists of more than one processor, and contains on-chip and external memory, analog and digital signal processors, peripheral devices, and complex on-chip buses. Such systems are often capable of running a modern operating system such as GNU Linux or Windows Mobile. The software and the hardware of such systems are tightly connected, with the software both controlling the hardware and depending on it to run. Designing and debugging software for such systems is a difficult task by itself, usually requiring many months or even years. Shrinking time-to-market has made it all but impossible to wait for the hardware to be designed, taped out, and for the first prototypes to roll off the production floor before starting to design the software. Instead, there is increasing pressure to co-design the hardware and the software together, a task not well suited to RTL.
Another problem with RTL is that a micro-architecture is usually specified by a natural-language document, referred to as the micro-architectural specification (MAS) [Var07]. It is increasingly difficult to design RTL from such informal and often underspecified documents. Moreover, if there are errors in the RTL, it is almost impossible to determine whether the error was in the original design or in the translation of the design from MAS to RTL, as the RTL serves as both the micro-architectural model and its implementation [Var07]. (At the time of writing of this thesis, the latest Intel microprocessor was Tukwila, with more than 2 billion transistors on a single die.)
The divergence between what engineers could put on a chip and what designers could simulate and verify has been widening, leading Gadi Singer, one of Intel's Vice Presidents, to issue a call to action for the electronic design automation (EDA) industry:
We have a very good flow generally to go down from RTL to layout. However, RTL was established two decades ago. Since then, complexity has grown severalfold and the EDA industry has not provided a similar level of support for design and refinement at the next level. This is a level that is an absolute requirement to deal with the growing complexity [Goe05].
The need to move beyond the register transfer level motivated academic and industrial research into languages for "system-level" design. A new language would ideally support specification and design at various levels of abstraction, incorporation of embedded software, and creation of executable specifications. One of the most popular languages intended to answer the needs of the design community is SystemC.
1.2 SystemC
SystemC (IEEE Standard 1666-2005) is both a modeling and a simulation language: designs can be created at various levels of detail and then compiled into executables using standard C++ compilers. (A detailed introduction to SystemC follows in Chapter 2.) The core language consists of macros for modeling fundamental components of hardware designs, such as modules and signals. SystemC also provides hardware-oriented data types such as 4-valued logic and arbitrary-precision integers. The implementation of SystemC distributed by the Open SystemC Initiative (OSCI) also provides a reference simulator that is linked with the user code and drives the simulation. Various vendors supply their own simulators, which provide additional support for debugging and co-simulation with other hardware description languages.
One of the reasons for SystemC's success is that it allows designers to model systems at several abstraction levels, from the most concrete (gate level) to the most abstract (system level) [GLMS02]. Individual components can be replaced freely without affecting the rest of the design, thus allowing the engineers to investigate alternative approaches and new ideas. This also allows the design process to be parallelized, with different teams working on different components simultaneously.
As a simulation framework, SystemC provides an event-driven environment, where important occurrences like writing to a signal, a clock tick, or a token being consumed from a channel are each represented by an event. Synchronization between events and processes is done behind the scenes by SystemC's simulation kernel. The kernel keeps track of events, schedules processes to run, and updates the values of signals and channels in a fashion that mimics concurrent execution, even though in reality the processes are run sequentially. The seamless integration between hardware-like components (modules, signals) and software (processes) makes SystemC a prime choice for prototyping and testing hardware and hybrid systems early in the design process [CA02].
The object-oriented encapsulation of classes in C++ is naturally extended in SystemC to protect each module's internal data members (representing local memory) from other modules and processes, except through explicitly defined interfaces. C++'s inheritance capabilities allow for the creation of modular designs in SystemC, which, in turn, facilitate reuse and make IP transfer possible [BGM04].
Various libraries provide further functionality. For example, a popular library called TLM (short for Transaction-Level Modeling) defines channels, interfaces, and protocols that streamline and standardize the development of high-level models in which complex communication and protocols are reduced to a single "transaction".
These factors have helped propel SystemC into a de facto industry-wide standard modeling language less than a decade after its first release, and with this increase in use came an increasing need to verify designs written in SystemC.
1.3 Design verification
Before we can check whether the behavior of a design is correct, we need a "yardstick" against which the behavior is compared. The behavior of the design over time is often expressed using temporal formulas [Pnu77]; in this dissertation such temporal formulas are called properties of the design. A specification is a set of asserted properties, which describe the intended behavior of the system. Informally, the goal of design verification is to ensure that the actual behavior of the design is consistent with its specification.
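As a standard illustrative example (the specific property is not drawn from this thesis), a temporal formula in linear temporal logic might assert that every request is eventually granted:

```latex
% G = "globally" (at every step of the execution);
% F = "eventually" (at some later step)
\varphi \;=\; \mathbf{G}\,(\mathit{req} \rightarrow \mathbf{F}\,\mathit{grant})
```

A trace in which req holds at some cycle but grant never holds afterwards violates the property, and any finite or infinite execution exhibiting this is evidence that the design is inconsistent with its specification.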
If an inconsistency is discovered, the verification tool can usually produce a witness of the violation: a trace that corresponds to an execution of the model and violates the specification. There are two major directions of research in this field: formal verification and dynamic verification.
1.3.1 Formal verification
One feature that all formal verification methods share is that they produce a mathematical proof that the design can never violate its specification. If the design is not written in a formalism with formal semantics, it must first be translated into one before model checking can be applied. Practical application is limited to small blocks that contain mostly control logic, such as state machines, as opposed to blocks that are used to transform data, such as multipliers [CH07].
One common application of formal verification is equivalence checking: showing that two models have the same behavior. It is usually performed after a refinement step or after some optimizations (e.g., clock-tree synthesis) to ensure that the functionality of the model or the circuit is still correct [Ber03]. In these cases the old model serves as a specification for the new one. Equivalence checking can also be used to verify the correctness of a synthesis tool's output, for example when synthesizing netlists from RTL.