Connecting Enterprise Applications To Metric Driven Verification
Author: Matt Graham, Cadence Design Systems, Inc., Ottawa, Canada, mgraham@cadence.com
Abstract: The DV community has been defining, applying and extending the concept of Metric Driven Verification (MDV) methodology for some time. Its application at the IP/sub-system level is commonplace, and its expansion into the SoC integration level is ongoing. This expansion is thanks in no small part to advancements in EDA tools to manage the considerable amounts of metric (coverage, regression, etc.) data generated by EDA tools (simulation, emulation, formal, etc.). While these tools can help with the metrics that are directly generated by EDA, they often lack any meaningful integration with other Enterprise Applications such as defect tracking, requirements management, and source code change and revision control, to name just a few.
As EDA vendors enhance their MDV applications to take advantage of database structures and multi-user, collaboration-based architectures, as many of the Enterprise Applications mentioned above already have, an opportunity exists for tighter integration between those management systems. If the example of defect tracking is considered, a number of integration flows may be of interest. At the first order level are straightforward flows such as opening a defect report based on a regression failure, or associating sets of regression failures with existing defect reports. Second order flows would include rerunning any regression failures upon resolution of a defect report. A third order of integration would entail marking that a version or release of a design and testbench combination is in debug mode. Such a marking would indicate to the regression flow not to exercise any regressions on that version. This paper will describe these flows in detail, along with their associated technical implementation, with examples using specific industry-standard tools.
Beyond these relatively obvious integrations lies the opportunity for analytics that move beyond transactional-type operations into more abstract data analysis. One example is the analysis of coverage information collected for a specific set of regression failures associated with a specific defect report. This coverage correlation could then be exploited to seed further regressions that focus on the specific coverage areas, in an attempt to flush out more defects in a specific solution space. This paper will explore these types of integrations and discuss the technical requirements and challenges for their implementation.
Keywords: Metric Driven Verification, Functional Verification, EDA, Enterprise Applications, REST, ALM
I. INTRODUCTION
Verification engineers continue to manage ever increasing amounts of data, in the form of regression, coverage, requirements, defect, and release information. Each of these silos of information is of particular use and interest to the verification team, and each is an integral part of the closure decision. To this point, however, none has been particularly well integrated into a single reporting mechanism. Each form of data has a number of options in terms of applications for storing and reporting, but the aggregation of such information remains, for the most part, a manual process.
somewhat mechanical manner. In the worst case, engineers are forced to cut and paste or manually massage the
data for exchange.
When a test marked as being associated with a specific defect starts failing (again), the defect state could be updated to reflect this (e.g., moved from resolved back to open).
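As a minimal sketch of that synchronization step, the TCL fragment below (assuming curl is available, an example Bugzilla server URL, and an API key taken from the environment) moves a defect back to an open state and notes which run re-exposed it:

    # Minimal sketch: move a defect back to an open state when an associated run
    # fails again. The server URL, API-key handling and target status value are
    # assumptions; Bugzilla's REST interface updates a bug via PUT /rest/bug/<id>.
    set bz_url  "http://bugzilla.example.com/rest/bug"
    set api_key $::env(BUGZILLA_API_KEY)

    proc reopen_bug {bug_id failing_run} {
        global bz_url api_key
        # Reopen the defect and record which run re-exposed it.
        set body [format {{"status":"CONFIRMED","comment":{"body":"Run %s is failing again"}}} $failing_run]
        exec curl -s -X PUT -H "Content-Type: application/json" \
            -d $body "$bz_url/$bug_id?api_key=$api_key"
    }

    # Example: the run associated with defect 4711 has started failing again.
    reopen_bug 4711 uart_rx_test_seed_1234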
Shown below is the TCL source required to call the Bugzilla API script (vm_bz_button) and pass data to and from vManager. This TCL code can be used either to interact with Bugzilla through a comma-separated value (CSV) interchange, or to interface using the REST API discussed earlier.
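The sketch that follows illustrates one possible shape of such a wrapper; the vm_bz_button command-line interface, the CSV field names, and the defect-ID hand-back are assumptions made for illustration:

    # Illustrative sketch only: hand a failing run's details to an external
    # vm_bz_button helper via a CSV file and read back the defect ID it creates.
    # The field names and the helper's command-line interface are assumptions,
    # not the actual vManager integration code.
    proc file_bug_for_run {run_name seed failure_msg} {
        # Write the run data to a temporary CSV file for the helper script.
        set csv_file [file join /tmp "vm_bz_[pid].csv"]
        set fh [open $csv_file w]
        puts $fh "run,seed,failure"
        puts $fh [join [list $run_name $seed $failure_msg] ","]
        close $fh

        # vm_bz_button consumes the CSV (or talks to the Bugzilla REST API
        # directly) and prints the new defect ID on stdout.
        set bug_id [exec vm_bz_button --new --csv $csv_file]
        file delete $csv_file
        return $bug_id
    }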
2) Debug-Based Trigger for Bug Tracking Synchronization Flow (Figure 2). For many users, new bugs are not opened until a more advanced debug process has occurred. In this case, the user would not open a new bug ID until the root cause has been determined with additional debug tools. It is commonplace for a specific defect to manifest itself in the failure of a number of runs. In this case, it is not worthwhile to open a defect report for each individual failure, but rather to append the properties of the additional failures to the existing defect report (and to record the existing defect report number in the run information in vManager). To perform this task, the user selects a run from the Runs Analysis window in vManager and clicks the Append Bug button. The user is then prompted for the existing defect number. A lookup in the Bugzilla database is performed, and the ID (if valid), status and summary (title) are populated in the vManager database. Additionally, the name and random seed of the failing run are appended to the defect record. Optionally, users can set a flag in vManager to not execute regressions for a design under test (DUT) which is being fixed, thus optimizing execution resources. Integrations such as this would entail marking that a version or release of a design and testbench combination is in debug mode, and additional integrations into Source Control Management (SCM) [5] would be necessary to manage this more detailed flow. Such a marking could indicate to the regression flow not to exercise any regressions on that version. The diagram below represents this more advanced integration of a bug tracking system, a verification planning and management system, and a debug environment, and the necessary flow between them.
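To complement that flow, a minimal sketch of the lookup-and-append step is given below; it assumes tcllib's http and json packages, an example Bugzilla server, and a hypothetical vm_set_run_attr helper standing in for the actual vManager write-back:

    # Sketch of the "Append Bug" action. tcllib's http and json packages, an
    # example Bugzilla server, and the hypothetical vm_set_run_attr helper
    # (standing in for the actual vManager write-back) are assumptions.
    package require http
    package require json

    proc append_run_to_bug {bug_id run_name seed} {
        set base "http://bugzilla.example.com/rest/bug"

        # Look the defect up: GET /rest/bug/<id> returns JSON containing the
        # bug's status and summary.
        set tok  [http::geturl "$base/$bug_id?api_key=$::env(BUGZILLA_API_KEY)"]
        set info [json::json2dict [http::data $tok]]
        http::cleanup $tok
        set bug     [lindex [dict get $info bugs] 0]
        set status  [dict get $bug status]
        set summary [dict get $bug summary]

        # Hypothetical write-back of the defect attributes into the vManager
        # run record.
        vm_set_run_attr $run_name bug_id      $bug_id
        vm_set_run_attr $run_name bug_status  $status
        vm_set_run_attr $run_name bug_summary $summary

        # Record the additional failing run (name and seed) on the defect itself
        # via POST /rest/bug/<id>/comment.
        exec curl -s -X POST -H "Content-Type: application/json" \
            -d "{\"comment\":\"Also failing: $run_name (seed $seed)\"}" \
            "$base/$bug_id/comment?api_key=$::env(BUGZILLA_API_KEY)"
    }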
3) The next step planned for development is to enable some second and third order integrations with Bugzilla. Some further straightforward scripting should allow the user to extract all of the failures associated with a specific defect ID and present a report upon resolution of the defect. The verification planning and management tool's integration with job distribution tools such as LSF will then allow the user to automatically re-run all failures, with parameters identical to the failing run, to determine whether the defect has been corrected for all failing cases. The POC integration could be further extended to include metrics (coverage) information as well, with the ability to correlate failing tests with a specific section of the verification plan, such that reports on affected features could be generated, or further exploratory tests could be launched.
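A sketch of what that re-run step could look like is shown below; the run_test command and the recorded run list are illustrative placeholders, while the Bugzilla status query and LSF's bsub are the only external interfaces assumed:

    # Sketch of the re-run flow. Only the Bugzilla REST status query and LSF's
    # bsub are assumed to exist; run_test and the recorded run list are
    # illustrative placeholders.
    proc rerun_failures_if_resolved {bug_id run_list} {
        # run_list: list of {run_name seed} pairs recorded when the failures
        # were appended to the defect report.
        set reply [exec curl -s \
            "http://bugzilla.example.com/rest/bug/$bug_id?include_fields=status&api_key=$::env(BUGZILLA_API_KEY)"]
        if {[string first "RESOLVED" $reply] < 0} {
            puts "Bug $bug_id is not resolved yet; skipping re-run"
            return
        }
        foreach pair $run_list {
            lassign $pair run_name seed
            # Re-submit with identical parameters (same test, same seed) via LSF.
            exec bsub -q verif "run_test $run_name -seed $seed"
        }
    }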
OpsHub is a provider of Application Lifecycle Management (ALM) integration and migration solutions for
application development organizations. OpsHub creates a unified ALM ecosystem by seamlessly combining
individual ALM systems, enabling agility at scale. A unified global ALM system allows the cross-functional
teams using disparate tools in an application development organization to effectively communicate and
collaborate with each other, thus increasing overall team agility, productivity and efficiency.
The OpsHub solution enables quick migration and seamless integration between leading ALM systems. The
OpsHub solution provides a comprehensive out-of-the-box integration and migration solution within the ALM
ecosystem. Its span across ALM functionality includes requirements management, source control, bug tracking, test management, release management, and customer support.
The integration of VPM tools into such a bus allows a single adapter to the central bus application to be developed, and enables connectivity to the entire ALM toolset across multiple vendors. This ALM infrastructure is also not limited to Enterprise Tools, but may serve as a connectivity mechanism between different vendors' EDA tools, opening up new possibilities for the EDA world.
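As a conceptual illustration (not an OpsHub interface), the sketch below shows how such a single adapter might normalize verification events into one JSON shape and post them to a hypothetical bus endpoint:

    # Conceptual sketch of a single bus adapter: every verification event is
    # normalized into one JSON shape and posted to one (hypothetical) bus
    # endpoint, leaving tool-specific translation to the bus itself.
    proc publish_to_alm_bus {event_type payload_dict} {
        set fields {}
        dict for {k v} $payload_dict {
            lappend fields "\"$k\":\"$v\""
        }
        set body "{\"type\":\"$event_type\",[join $fields ,]}"
        exec curl -s -X POST -H "Content-Type: application/json" \
            -d $body "http://alm-bus.example.com/events"
    }

    # Example: a regression failure published once and consumed by defect
    # tracking, requirements management and release tooling alike.
    publish_to_alm_bus regression_failure {run uart_rx_test seed 1234 status failed}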
VIII. SUMMARY
The design and verification community within EDA is continuously searching for new ways to improve the productivity of the processes it must manage. Within semiconductor suppliers, Enterprise systems can no longer exist as separate islands of information, especially when those systems are an integral part of the design and verification (DV) process. This paper sets the stage and defines the strategy for how enterprise systems can seamlessly integrate into the DV process, without requiring users to establish a number of one-off and ad-hoc integrations. This paper also provides a proof of concept of such an integration utilizing a standardized API framework, such that users do not need to learn custom APIs and can focus their attention on improving their silicon application.
REFERENCES
[1] H. Carter and S. Hemmady, Metric Driven Verification: An Engineer's and Executive's Guide to First Pass Success, Springer Science+Business Media, LLC, 2007.
[2] http://en.wikipedia.org/wiki/Representational_state_transfer
[3] http://www.cadence.com/cadence/newsroom/features/Pages/vmanager.aspx
[4] http://www.bugzilla.org/
[5] http://en.wikipedia.org/wiki/Source_Control_Management
[6] http://en.wikipedia.org/wiki/Application_lifecycle_management
[7] http://www.opshub.com/main/