VectorCAST/QA
Update packages may be used between editions. The manual printing date changes when a
new edition is printed. The contents and format of this manual are subject to change without
notice.
Rev: 447e070
© Copyright 2024 Vector Informatik GmbH. All rights reserved. No part of the material
protected by this copyright notice may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying, recording, or by any informational storage
and retrieval system, without written permission from the copyright owner.
This computer software and related documentation are provided with Restricted Rights. Use,
duplication or disclosure by the Government is subject to restrictions as set forth in the
governing Rights in Technical Data and Computer Software clause of
Vector Informatik reserves the right to make changes in specifications and other information
contained in this document without prior notice. Contact Vector Informatik to determine
whether such changes have been made.
Third-Party copyright notices are contained in the file: 3rdPartyLicenses.txt, located in the
VectorCAST installation directory.
TABLE OF CONTENTS
Introduction 10
VectorCAST/QA Overview 11
How It Works 11
Initial Setup 14
Preliminary Setup 15
Launch VectorCAST/QA 16
Create a New System Testing Environment 16
Instrument For Coverage 22
Set Coverage Type 22
Instrument for Coverage 22
Append Cover IO 23
Close the Environment 24
Edit Testing Script and Build 25
Edit the systems_tests.py script 25
Build the Instrumented Code 29
Run a Test Interactively 30
Run a Single Test 30
View the Execute Log 31
View Coverage Results 32
Execute All Tests 33
Enabling Component Coverage 47
Building and Executing on Components 48
Viewing Component Coverage Data 50
Change Impact Report 51
Estimated Change Impact Report 51
Full Change Impact Report 52
Instrument for Statement Coverage 99
Instrument for Branch Coverage 100
Instrument for Basis Paths Coverage 101
Instrument for MC/DC Coverage 102
Suppressing MC/DC Analysis 103
Instrument Statement+MC/DC 104
Instrument Statement+Branch 105
Instrument for Function Coverage 105
Instrument for Function + Function Call Coverage 106
Instrument for Probe Point 107
List the Number of Bytes Needed to Instrument Each Unit in an Environment 107
Change the Type of Coverage 107
Un-Instrument Source Files 108
Re-Instrument All Source Files 109
Avoid Instrumenting Sections of Source Code 110
Compiling the Instrumented Source Files 111
Simple Compilation Examples 111
The Cover I/O File 112
To Customize the Cover I/O File 113
To Change the Output Options for Test Results 114
To Create a Test Result File 116
Viewing Coverage Results 116
To Add a Test Result 116
To Update a Test Result 118
To Open the Test Results Viewer 119
The Notes Tab of the Test Results Viewer 120
The Requirements Tab of the Test Results Viewer 121
To Open the Coverage Viewer 123
To Turn on Coverage Results 123
The Coverage Viewer 124
To Customize the Coverage Viewer 130
Map Line Selection to Original Source View 130
To Determine Which Test Result Covers a Line 130
To Delete All Result Files 131
Viewing Coverage Reports 131
To View the Metrics Report 132
To View the Function Call Report 135
To View a Basis Paths Report 136
Understanding Basis Paths 137
To Build Test Cases from Basis Path Analysis 137
MC/DC Coverage 138
To View the Equivalence Matrices Report 139
Understanding MC/DC Analysis 140
To View an MC/DC Report 142
To Print Coverage Reports 142
To Save Coverage Reports to Disk 143
To View Code Coverage Summary 145
Viewing Execution Flow with Animated Coverage 149
To Activate Coverage Animation 149
To Play the Coverage Animation 151
The Animation Toolbar 151
To Set a Breakpoint 152
Importing Coverage Results 152
To Import Coverage Results 152
To Export a Coverage Script 155
To Import a Coverage Script 156
Delete Imported Results 156
Setting Compiler Options 157
Options: C/C++ Tab 159
To Automatically Add Include Path and Defined Variables 165
To Test Your Compiler Settings 168
Setting Coverage Options 186
General Coverage Options 186
Instrumentation Options 192
MC/DC Options 205
Miscellaneous Options 209
Suppressed Coverable Functions Option 214
Result Changes 218
Input / Output Options 218
CLICAST-Only Coverage Options 220
Setting Coverage Viewer Options 221
To Format the Text in the Coverage Viewer 221
To Change the Fonts 221
To Reset the Fonts to Default 222
To Change Text Color 222
To Change Background Color 223
Other Background Colors 224
Cover Environment Options 224
Auto Generate Test Result Names 224
Append to Coverage Data File 225
Starting Unit Number 225
Types File Extension (Ada) 226
Ada File Extensions 226
Setting Report Options 227
Report Content Options 227
Report Format Options 234
Covered By Analysis (CBA) 244
To Add Coverage Analysis 244
Using the Coverage Analysis Editor 245
To Edit an Existing Analysis Result 248
Viewing CBA Coverage in the Coverage Viewer 249
To Change Covered-By-Analysis Display Color 250
Working With Analysis Files 251
To Import Analysis Files 251
To Export Analysis Files 251
To Remove Analysis Files 251
Re-Instrumenting With CBA Data 251
Viewing Analysis Data in Reports 251
Covered By Analysis Report 251
Aggregate Coverage Report 254
Metrics Report 255
Using Coverage Analysis With SFP 255
To Add Coverage Analysis 256
To Add or Remove Coverage Analysis for Multiple Lines 256
To Add or Remove Coverage Analysis for MC/DC 256
To Edit an Existing Result 257
Changes for the Covered By Analysis Report 257
The Aggregate Report 257
Inject Spurious Values 276
Patch Faulty Code 276
Probe Point Listing 277
Probe Point Output Report 278
Using the Probe Point API 279
The Probe Point File 279
Probe ID Number 280
Export a Probe Point 281
Import a Probe Point 282
Enable, Disable and Remove Probe Points 282
Create a Probe Point Report 283
Using PC-lint Plus for Static Code Analysis 312
Integration with PC-lint Plus 312
Using CodeSonar for Static Code Analysis 312
Integration with CodeSonar® 312
Configuring CodeSonar 312
Running CodeSonar Analysis 315
Viewing CodeSonar Results 316
Using Generic Analysis 319
Configuring a Generic Analysis Tool 319
Creating the Executable Script 321
Running Generic Analysis 322
Viewing Generic Analysis Results 323
Customizing Generic Analysis Messages 324
Index 332
Introduction
VectorCAST/QA Overview
VectorCAST/QA is a system testing automation tool with code coverage. VectorCAST/QA provides
the following benefits:
System Testing means testing the entire integrated application to verify user-facing functionality. With
the automation provided by VectorCAST/QA, the complexity of running system tests is handled by the
tool, allowing anyone to run the tests at any time and fostering collaboration.
VectorCAST/QA is designed for use with build automation systems such as Jenkins. This extends the
traditional DevOps infrastructure to include embedded device software and allows for a common set of
processes for all the software in your project. Using a DevOps infrastructure allows you to efficiently
perform system tests and to test common code across multiple configurations.
VectorCAST/QA leverages the VectorCAST Change-Based Testing technology to reduce the testing
cycle time. Change-Based Testing uses advanced heuristic path and call-tree analysis to identify the
subset of tests affected by each source change and then automatically reruns only those tests. This
incremental execution of tests is much faster than executing the entire test suite, which can take hours.
It also means that when the QA team does run the full test suite, they are likely to have a clean test run.
In short, VectorCAST/QA allows team members to collaborate on test activities, shortens test times
and provides up-to-date metrics on release readiness.
How It Works
VectorCAST/QA integrates with your build system and existing test infrastructure to collect key
metrics such as code complexity, test case status, and code coverage data.
No changes to your existing workflow or tools are required. As your normal system testing activities
take place, a data repository is constructed. This repository is queried to provide answers to questions
such as "What tests do I need to run for this set of code changes?"
> Test Collaboration - VectorCAST/QA allows users to easily run all flavors of test without
needing to learn new tools or processes. Connectors are configured once and then leveraged by
the entire team.
> Integrated Code Coverage - VectorCAST/QA automates the capture and maintenance of code
coverage data during testing, allowing users to quickly identify untested portions of the application
and determine resources needed to improve testing thoroughness.
> Change-Based Testing - Using the data gathered from the build system and from monitoring
system test activities, VectorCAST/QA identifies correlations between tests and code. As the
code changes, it automatically computes the minimum set of tests required to provide complete
testing of the change.
> Change Impact Analysis - Change Impact Analysis can be used to identify the impact of a set of
source code changes on the quantity of testing required. This provides developers with the ability
to make better decisions when implementing their changes.
> Test Case Maintenance - Legacy test cases are often poorly documented and seldom evaluated
for improvement as the application matures. VectorCAST/QA provides visibility into the parts of
the application that each test stimulates, allowing you to gauge the value of each test and identify
redundant tests.
> Continuous Testing - VectorCAST/QA integrates easily with Continuous Integration (CI)
Servers such as Jenkins to allow tests to be distributed across a farm of physical or virtual test
machines.
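The Change-Based Testing idea above can be pictured with a small sketch. This is a hypothetical illustration of selecting affected tests, not VectorCAST's actual data model or algorithm; the test and file names are illustrative:

```python
# Hypothetical sketch of change-based test selection: map each test to
# the source files it touches, then rerun only the tests whose files
# changed. Test and file names here are illustrative.
test_to_files = {
    "all.lua": {"lua.c", "lvm.c"},
    "strings.lua": {"lstrlib.c"},
}
changed_files = {"lvm.c"}

# Only tests that touch a changed file need to be rerun.
affected = sorted(t for t, files in test_to_files.items()
                  if files & changed_files)
```

Running the full suite would execute both tests; the change-based subset here contains only the test that touches the modified file.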
Initial Setup
Before you begin:
These example Work Flows are run on a Windows platform and the following assumptions are made:
> You are running on Windows (note that Linux is also supported, but our examples are run on
Windows).
> VectorCAST 2024 is installed.
> You are using the FlexLM license server.
> You are licensed for VectorCAST/QA Enterprise.
> You have source code located under one parent directory (note that multiple directories are
supported, but the Work Flow examples assume only one directory).
> You are able to build the source code on Windows.
> You have test cases that run on Windows.
The Work Flow examples use Lua source code. The Lua application is an open-source interpreter for
the Lua language. It is shipped with VectorCAST, and everything that you need to run these examples
is located in the VectorCAST installation directory under
Examples\cover\lua_demo.
The Lua example already has all the source code under one parent directory, provides us with a way to
build the source code on Windows, and also provides us with test cases that run on Windows.
lua_demo contains three main directories: doc, src, and tests, as well as a Makefile and a
README file.
All source code is located in the src directory. The source is built with the command ‘make generic’.
The make target 'generic' refers to the Lua platform target and works with our examples.
The test cases contained in the tests directory are Lua programs that test different parts of the Lua
interpreter application and are part of the open source project. The tests are run by adding them to the
Lua command line.
Preliminary Setup
1. Open a Windows Command Prompt.
Set VECTORCAST_DIR to your installation directory.
Set VECTOR_LICENSE_FILE to your license file.
set VECTORCAST_DIR=<installation-directory>
set VECTOR_LICENSE_FILE=<license-file>
2. Create an empty directory. The directory path should not contain any blanks. The directory can be
located anywhere, but in our example it is located in C:\ and named qa_example. Open your
newly created directory. In our example we change to qa_example.
cd c:\
mkdir qa_example
cd qa_example
3. For our example, we will use the Lua application source located in the VectorCAST installation
directory under Examples\cover\lua_demo.
Copy the lua_demo directory to the newly created qa_example directory. Note that you must
name the directory ExampleSource, since our script will look for that name.
xcopy /S "%VECTORCAST_DIR%\Examples\cover\lua_demo\*" %CD%\ExampleSource\
We now have an ExampleSource directory containing three subdirectories: doc, src, and
tests. The src directory contains all the source code for the example. The tests directory
contains the test scripts to test Lua. The application is built using the Makefile found in the
ExampleSource directory.
Launch VectorCAST/QA
1. Use the following command to start VectorCAST/QA.
c:\qa_example>%VECTORCAST_DIR%\vcastqt
Confirm that your working directory is your newly created source directory. In our example, our
working directory is C:/qa_example. If it is not correct, set it to the correct directory by
selecting File => Set Working Directory... from the Menu Bar.
2. The Create New Environment dialog opens on the Choose Compiler pane. Select the compiler
VectorCAST MinGW => C using the Choose Compiler drop-down menu. Verify that the C/C++
box is checked for the language of the source files, and that the Ada box is unchecked. Click the
Next button.
3. The Create New Environment dialog opens on the Name the Environment pane. Enter
LuaSystemTesting-QA for the Project Name. Enter LuaSystemTesting for the name of the environment.
4. Locate the source files and set the Base Directory by clicking the file browser button to the right of
the Base Directory field. Use the file browser to navigate to the location of our source directory at
C:\qa_example\ExampleSource\src. Click the Choose button.
5. Highlight the source directory to display the contents (files and folders) of the source directory in
the Directory Contents pane on the right. We will be including our header files in our environment,
so we also must check the Show Header Files box at the lower left corner of the window. All of
the source files are listed in the Directory Contents pane on the right, and all are unchecked.
Check the box by the src directory to include all the files in the directory. By default, all of the files
are checked as included.
Uncheck source files in the Directory Contents pane to exclude them from the Cover
environment. For our example, we will select all 61 files. Click the Next button.
Tip: We recommend that you include source files by checking directories. Doing so makes
it easier to sync your environment with your source repository.
6. The Coverage Options pane opens. Select the Coverage Type to be set for the environment after
building it. We will use Statement coverage, which is the default coverage type. Click the Next
button to view the environment Summary.
7. The Summary page shows the 61 files that we have selected to be added to our environment.
Click the Build button to build the environment.
9. If the Environment View panes are not already open, double-click the Environment node to open
the environment in the Environment View panes.
10. Close the environment by selecting File => Close Environment from the Menu Bar.
2. You will see a message in the Message window as the files are instrumented for Statement
coverage.
Append Cover IO
Some files should have the c_cover_io.c file appended during instrumentation. By doing so we
eliminate the need to add the <cover env>/c_cover_io.c file on the command line when
compiling the instrumented sources. Typically, only one unit per Cover environment is selected in order
to avoid redefinition errors during compilation, and it is usually the one with the main() function. In our
example, we are selecting two source files, lua.c and luac.c, to append.
1. In the Environment view, right click on the source file lua.c and select Append Cover IO from
the context menu.
2. The source file will be immediately reinstrumented. Repeat the step for the source file luac.c
Notice that the status panel now indicates that Statement coverage has been initialized.
1. Right-click on the LuaSystemTesting Environment node in the Project Tree and select System
Testing => Edit Script from the context menu.
> The directory location where we run the make command (self.locationWhereWeRunMake)
> The top level make command (self.topLevelMakeCommand)
> The directory location where we run the tests (self.locationWhereWeRunTests)
> The name of the test executable (self.nameOfTestExecutable)
> The test case names (self.masterListOfTestCases)
> The command used to run the tests (implementation for function commandToRunATest)
> The interpretation of the test results (implementation for the function
interpretTestResults)
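Taken together, these customization points amount to a small configuration class. The skeleton below is an illustration only, under assumed names: the real LuaSystemTesting_system_tests.py is generated by VectorCAST, and its base class and method signatures may differ.

```python
import os

class LuaSystemTestingConfig:
    # Illustrative skeleton only; not the generated script's actual API.
    def __init__(self):
        self.example_dir = os.path.abspath('ExampleSource')
        self.locationWhereWeRunMake = self.example_dir
        self.topLevelMakeCommand = 'mingw32-make generic'  # assumed form
        self.locationWhereWeRunTests = os.path.join(self.example_dir, 'tests')
        self.nameOfTestExecutable = 'lua.exe'
        self.masterListOfTestCases = ['all.lua', 'strings.lua']

    def commandToRunATest(self, testName):
        # Lua tests are run by passing the test script on the
        # interpreter's command line.
        return '%s %s' % (self.nameOfTestExecutable, testName)

    def interpretTestResults(self, testName, output):
        # 2-tuple: (passed expecteds, total expecteds).
        return 1, 1
```

Each of these attributes and functions is covered in the steps that follow.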
The following steps will discuss each modification to the script required for our example.
3. First, for our example, three variables are created to help find the location of the main
ExampleSource directory, the source code directory (src), and the test scripts directory
(tests).
self.example_dir = os.path.abspath('ExampleSource')
self.source_dir = os.path.join(self.example_dir, 'src')
self.tests_dir = os.path.join(self.example_dir, 'tests')
4. Next, the PATH environment variable is updated to point to the compiler used in our example.
self.locationWhereWeRunMake = self.example_dir
self.topLevelMakeCommand = os.path.join(os.environ['VECTORCAST_DIR'],
    'MinGW', 'bin', 'mingw32-make generic')
self.locationWhereWeRunTests = self.tests_dir
self.nameOfTestExecutable = 'lua.exe'
11. Next we define the interpretation of the test results. The function returns a 2-tuple, where the first
element represents the number of passed expecteds and the second element represents the total
expecteds. These values are displayed in the "Expecteds" column in the Test Case Summary
Report. (To access the Test Case Summary Report, right-click on an Environment node in the
Project Tree and select Reporting => Test Case Summary Report from the context menu.)
For our example, the tests do not provide any easily interpreted return values, so the tuple is
simply set to 1,1 (meaning one expected value is passed out of one for all of the tests).
return 1,1
12. Save your changes to the LuaSystemTesting_system_tests.py file by selecting File =>
Save from the Menu Bar. Alternatively, select the Save button on the Toolbar.
Also note that when the build is complete the Build Status column in the Project Tree now shows
green check marks indicating successful build status. Hover over the check marks to see popups
with the build details. Hover over the environment node to see the location of the build directory
and the coverage application strategy.
2. The Run System Tests dialog opens. We will run a single test case, all.lua. Highlight the
all.lua test case located in the Test Name column and select the Run Selected button.
3. The Manage Incremental Rebuild Report is displayed. Note that only our manually selected test
case is executed.
2. The Coverage Viewer opens in the MDI window. For more information on viewing coverage
results, see "The Coverage Viewer" on page 124.
2. The Manage Incremental Build Report is displayed. Note that all our tests are run, but
VectorCAST/QA Enterprise with Change-Based Testing preserves the previous results for tests
that are not affected by any change.
To open and edit the Configuration Script, right-click on a Cover Environment node and select System
Testing => Edit Script from the context menu.
The Python Configuration Script opens in the Script Editor. Modify the script as required and save
changes.
The configuration script opens in the Script Editor. Scroll down to the list of system test case names
and add the new test name to the end of the Python list. In our example, we add the test case "w".
Save the script changes. Right-click on the environment node and select Execute => Full from the
context menu. VectorCAST will execute only the newly added test.
A manual test case is implemented by right-clicking on the environment node and selecting System
Testing => Edit Script from the context menu.
The Python Configuration Script opens in the Script Editor. Scroll down to the list of system test case
names and add the manual test entry to the Python list using the following syntax:
ManualTestCase('<TestName>','[<TestSteps>]','<NameOfTestExecutable>')
where the parameters required for the ManualTestCase class are <TestName> and
<NameOfTestExecutable>. The parameter <TestSteps> is optional.
For example:
ManualTestCase("manualTest", self.getManualTestCaseSteps(), getShell())
On Windows platforms, test case names are case insensitive. To avoid unexpected results, be sure to
give each test case a unique name that does not rely on case to differentiate. For example, use
Save the script changes. To run a manual test case, right-click on the environment node and select
Execute => Interactive from the context menu. The Run System Tests dialog opens.
You can filter the list by entering the Test Name. You can also refine the list by enabling the Test Status
(Not Run, Failed, or Passed). The selection of multiple statuses is supported. By default, the Failed
and Not Run test statuses are enabled.
For our example, we will run only the manualTest test case. Our test case has not been run, so it
appears in the listing because the Not Run Test Status is enabled. To run the test, highlight the Test
Name, manualTest, and select the Run Selected button.
VectorCAST opens the Test Runner dialog for the manual test.
The fields displayed in the Test Runner dialog match the parameters passed to the ManualTestCase
class. Click the Start Test button to run the test executable application.
A DOS command window opens displaying an input prompt. Select the test to be run. In our example,
we enter "N" to run the GetNextParty test.
When the manual test is completed, record the results by clicking the appropriate Pass or Fail button on
the Test Runner dialog. The dialog closes and the test results are added to the Cover environment.
Clicking the Cancel button closes the dialog and does not add the test results to the total.
In our example we selected the Pass button at the conclusion of our manual test. Open the Coverage
Viewer for manager.c and note that coverage is now shown for the Get_Next_Party_To_Be_
Seated function.
> Always - In this mode, instrumented files are always stored in the Source Code Directory.
> During Build - In this mode, the instrumented files are copied during the build process.
> Never - In this mode, the instrumented files are never copied during the build process.
When VectorCAST is commanded to initialize coverage for a coverage type for a file or set of files, it
reads the source code, instruments it for the coverage type requested and then stores the instrumented
files.
The location where the instrumented files are stored is configurable using the Apply Coverage to Source
Tree Option. There are three choices for this option: Always, During Build and Never. The default is
During Build.
To set the option, right-click on the System Testing environment in the Project Tree, and from the
context menu select System Testing => Apply Coverage to Source Tree, and then select the
desired option.
When the instrumentation is complete, you will see that each instrumented source code file has an
associated file with the file extension .vcast.bak. The file with the .vcast.bak extension is the un-
instrumented file that was stored as backup. If VectorCAST is commanded to un-instrument the file,
then the original file is restored from the backup.
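The backup scheme can be pictured with a small sketch. The file contents and the copy/restore mechanics below are illustrative only; VectorCAST's internal handling may differ, but the .vcast.bak naming follows the description above.

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "foo.c")
original = "int main(void) { return 0; }\n"
with open(src, "w") as f:
    f.write(original)

# Instrumenting: the un-instrumented file is saved as <file>.vcast.bak,
# then the source file is overwritten with instrumented code.
shutil.copy(src, src + ".vcast.bak")
with open(src, "w") as f:
    f.write("/* instrumented */ " + original)

# Un-instrumenting: the original file is restored from the backup.
shutil.move(src + ".vcast.bak", src)
with open(src) as f:
    restored = f.read()
```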
Note: When migrating a monitored Cover environment, note that the option cannot be set to
Always. This prevents having two Cover environments pointing to the same instrumented in-
place source files.
For example, if a change is made to the project's source files, performing an incremental build and
execute will perform only the work that needs to be done based on the source change. To do so, right-
click on the environment node in the Project Tree and select Build/Execute => Incremental from the
context menu. For unit environments and migrated Cover environments, VectorCAST will fully build or
rebuild environments and execute all affected or missing tests.
For example, the example report below shows that one test (InitializeWB) was affected due to a source
file change in whitebox.c.
The Manage Incremental Rebuild Report shows that only one file was instrumented and only one test,
InitializeWB, was run as a result of the source code change.
To select tests, right-click on a System Test environment in the Project Tree and select Execute =>
Interactive from the context menu.
The Run System Tests dialog opens. You can filter the list by Test Name and Test Status (Not Run,
Failed, or Passed), allowing you to quickly select a specific set of tests to run. The selection of multiple
statuses is supported. By default, the Failed and Not Run test statuses are enabled.
You can further refine the set of tests to be run by highlighting the Test Name(s) and selecting the Run
Selected button. The button will display the total number of tests being selected to run. By default,
VectorCAST runs all of the tests listed when no selections are made.
In our example, we will only run two tests, AddIncludedDessert and Get_Check_Total.
Select the Run Selected button to execute the list of selected test cases. The Manage Incremental
Rebuild Report is displayed. Note that only our two manually selected test cases are executed.
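The status filtering described above can be sketched as follows. The data structure and the extra test name PlaceOrder are assumptions for illustration; AddIncludedDessert and Get_Check_Total come from the example.

```python
# Hypothetical sketch of the Run System Tests dialog's status filter.
tests = {
    "AddIncludedDessert": "Not Run",
    "Get_Check_Total": "Failed",
    "PlaceOrder": "Passed",  # assumed extra test for illustration
}
# By default, the Failed and Not Run statuses are enabled.
enabled_statuses = {"Not Run", "Failed"}
listed = sorted(name for name, status in tests.items()
                if status in enabled_statuses)
```

With the default statuses, the already-passed test drops out of the listing and only the two tests still needing attention are shown.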
Lists the system tests and their execution status for the project or the
specified --level and environment. (default='')
Executes all system tests for the given environment. To specify a sub-
Component Coverage
VectorCAST/QA provides support for component coverage. Component coverage is often used when
an entire instrumented image is too large to fit on a target. When component coverage is active, the
instrumentation of the application is performed for one component at a time, and the full set of tests is
run for each component. Using component coverage reduces the amount of memory required to run
tests since only a portion of the application is instrumented at any time.
The system_tests.py script opens in the Script Editor. Modify the script as required and save
changes. For our example, we first uncomment the following line of code to enable component
coverage for two components, manager_component and database_component, and save the
change:
Component names must be unique. Note that the system_tests.py script lists the source files that
make up each component.
In the example above, manager_component is defined on line 95. The source files that make up
manager_component are listed. This set of files will be instrumented at the same time. Source files
can be added or removed from a component by editing the configuration script and saving the changes.
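Conceptually, a component is just a uniquely named group of source files that is instrumented together. The sketch below is illustrative only; the file lists are assumptions, and the generated system_tests.py expresses components differently.

```python
# Illustrative component definitions: names must be unique, and each
# component's files are instrumented together while the rest of the
# application image stays un-instrumented.
components = {
    "manager_component": ["manager.c", "manager_driver.c"],
    "database_component": ["database.c"],
}

def files_to_instrument(component_name):
    # Instrument only the files belonging to the selected component.
    return components[component_name]
```

Running the full test suite once per component then yields coverage for the whole application without ever loading a fully instrumented image.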
When component coverage is active, clicking on a Cover environment in the Project Tree and selecting
Build/Execute => Interactive from the context menu opens the Select Tests dialog. Select the tests
to execute based on Component or based on Test Case.
To select by Component, select the Component radio button. A list of available components for the
environment is shown in the left pane. Highlight a component and the associated tests are listed in the
right pane. Check the desired test(s) and select the Run button.
To select by Test Case, select the Test Case radio button. A list of available tests is shown in the left
pane. Highlight a test case and the associated components are listed in the right pane. Check the
desired component(s) and select the Run button.
Properties Dialog
Right-clicking on an environment node in the Project Tree and selecting Properties from the context
menu opens the Properties Dialog for the Workspace Attributes. The currently built component is listed.
The current component can also be quickly identified by hovering over the environment in the Project
Tree and displaying the associated tool-tip.
For example, you may be considering re-factoring a widely-used function. The Change Impact Report
shows that the planned changes will trigger the need to re-run 85% of the existing tests. Knowing this in
advance, you might choose to postpone the planned changes until schedule pressure eases.
To generate a Change Impact Report, select Project => Change Impact Report... from the Menu Bar.
The Change Impact Report Configuration dialog opens.
The dialog provides access to both the estimated Change Impact Report for unit test and Coverage
environments and the Build/Execute Dry Run report for Coverage environments.
Note: All tests that touch a file are considered affected by any change to that source file. This
may differ from the actual action taken by incremental build/execute.
Select the Generate Report button. The title of the report is "Change Impact Report (File-based
Estimate)." When generated in the GUI, the report is saved automatically in the
<project>/build/vcast_data directory.
Note that only those tests that touch a changed function are considered affected. This method is slower
due to preprocessing, but produces output matching the actual actions taken by incremental
build/execute.
Select the Generate Report button. The title of the report is "Change Impact Report." When generated in
the GUI, the report is saved automatically in the <project>/build/vcast_data directory.
The File => Set Working Directory command is used to change the current working directory before
creating a new environment. This command is not available when an environment is open. When you
select this command, a dialog appears enabling you to navigate to the directory of choice or create a
new directory.
If you select a directory which contains spaces or does not have read/write permissions, you will
receive the following error and be prompted to return to the Set Working Directory dialog to set a valid
directory.
The “Remember last working directory” option is located on the Tools => Options dialog, GUI tab.
Whenever this option is enabled, VectorCAST opens in the last-used working directory, regardless of
how the VectorCAST application is invoked.
When you start the VectorCAST application from the command line, the working directory is the
directory from which you invoked the application, unless the “Remember last working directory” option
is set.
When you start VectorCAST from the Windows Start menu, the working directory is the Environments
directory in the VectorCAST installation directory, unless the “Remember last working directory” option
is set. On Windows, you can change the default working directory by modifying the Windows properties
of the VectorCAST shortcut.
Note: The working directory must not contain spaces and must have Read and Write
permissions. If you start VectorCAST from such a directory, the status bar has a red outline. The
Set Working Directory error dialog appears, prompting you to enter a valid directory.
If you install VectorCAST in the C:\Program Files\ directory, then the Environments directory
will have spaces in its path and therefore will be invalid. If this happens, set the working directory
to a different location on your drive.
The current working directory is indicated in the status bar in the main application window.
If you start VectorCAST from such a directory, the status bar has a red outline. The Set Working
Directory dialog appears, prompting you to enter a valid directory.
To create a System Testing environment using the Wizard, select File => New => System Testing
Environment from the Menu Bar.
Alternatively, click the New button on the Toolbar, or click the arrow to the right of the button
and select System Testing Environment.
> Choose a C/C++ compiler template from the Choose Compiler button drop down menu, or enter
the commands for preprocess and compile. See "To Set the Working Directory" on page 55 for
more information (required if C/C++ tab enabled).
> Add paths to the list of Include directories (optional). These directories are searched for header
files needed for preprocessing.
> Add or remove source file extensions on the Language sub-tab if your C/C++ source code uses
anything other than .c for C source files, or .cpp, .c++, .cxx, .cc for C++ files (optional).
> Add or remove source file extensions on the Ada tab if your Ada source code uses anything other
than .ada, .ADA, .adb, .ADB, .a, or .A (optional).
> Add #defines needed during preprocessing of C/C++ source files (optional).
If the C/C++ tab is enabled, you must specify a valid preprocessor command and compile command on
the Preprocessor/Compiler tab, and on the Linker/Debug tab you must specify a valid linker command
for the Next button to be enabled.
However, only the preprocessor command, compile command, and the Parser Flags are used during
instrumentation if the option “Preprocess files before instrumenting” is on.
If an environment with that name exists already, the name is outlined in red and the Next button is dimmed.
> Specify the default system test template (optional)
A template is used to create the system test script for the given environment. For example, the
Google Test template should be chosen to connect with an existing Google Test framework.
> Specify the path to the vcshell database (vcshell.db) (optional)
> Select the command verb used for the source files to be tested in the environment (optional)
Once you have provided the project and environment names, click the Next button to proceed to Step 3:
Locate Source Files.
First, a Base Directory must be specified. The Base Directory List shows the Base Directories, or
source root directories, for the source code repositories that comprise the environment. Click the arrow
on the upper left to expand the Base Directory List.
To add a repository to the list, select the Add Base Directory button, or right-click in the pane and
select Add... from the context menu. Enter the name of the new directory and select the Add button.
Note when adding directories that base directories cannot overlap. Overlapping base directories are
displayed in red in the Base Directory List, and the text box is displayed with a red outline.
This specifies the location where source files reside. This can be set to an
environment variable or relative path.
Base Directories can be renamed by right-clicking on the directory and selecting Rename... from the
context menu. The Rename Base Directory dialog opens. Enter the new name for the directory and
select the Rename button.
To remove a Base Directory from the list, highlight the directory and select the Remove Base
Directory button, or right-click on the directory and select Remove... from the context menu. A
dialog is displayed confirming that you want to remove the selected directory.
After this base directory is removed from the environment, the allowlisted
items within this directory will no longer be updated.
Next, you must enter the path to the Base Directory. The directory path can be an environment variable
(for example, $(ENV_VARIABLE)), a relative path, or an absolute path.
Note: It is recommended that the path to the Base Directory be portable. Although supported,
the use of an absolute path is discouraged.
Use the file browser button located to the right of the Base Directory field to navigate to the location of
the source directory and select the Choose button.
The path to the Base Directory is saved in the environment script (.enc). This means only a single
change is needed if loading the environment script in a different directory.
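The `$(ENV_VARIABLE)` form above can be modeled as a simple environment lookup at load time. The sketch below is illustrative only, not VectorCAST's implementation, and the function name is hypothetical:

```python
import os
import re

def resolve_base_dir(path: str) -> str:
    """Expand $(VAR) references in a base-directory path using the
    process environment, leaving unset variables untouched.
    Illustrative sketch only, not VectorCAST's implementation."""
    return re.sub(
        r"\$\((\w+)\)",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        path,
    )

# If SOURCE points at the repository root, the portable path
# "$(SOURCE)/managers" resolves to an absolute location.
os.environ["SOURCE"] = "/home/work/source1"
print(resolve_base_dir("$(SOURCE)/managers"))  # /home/work/source1/managers
```

Because only the variable's value changes between machines, the same environment script works wherever the repository is checked out.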
Highlight the source directory to view the contents in the Directory Contents pane on the right.
Selecting the checkbox next to the directory includes all the files in the directory. When a directory is
checked, it is added to the allowlist. When a user's source repository changes (for example, a file is
added or removed from the allowlisted directory) and the user updates the environment or runs the
environment script (.enc), the Cover environment automatically identifies any changes that have
occurred in that directory.
Note: To take advantage of the automatic syncing of the files in the Cover environment, it is
recommended that you check the topmost directories in your base directory and then denylist
those files and directories where you do not want to collect code coverage. This allows the
Cover environment to evolve with the source code.
In the Directory Contents pane, use the check boxes next to the source file names to include and
exclude files from the environment. Checked source files are allowlisted and included in the
environment. Unchecked files are denylisted and excluded from the environment.
The topmost checked paths are added to the allowlist. It is recommended to allowlist those parent
directories that contain source code. Subdirectories and files can be unchecked. Any unchecked items
are denylisted from the Cover environment.
The given path will be included in the base directory's allowlist. By adding
paths to the allowlist, the Cover environment can compare its state with the
source repository on update. Denylisted files and directories will be excluded
from the Cover environment on update.
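The checkbox behavior above can be modeled as a longest-match rule: a path belongs to the environment when its most specific matching allow/deny entry is an allowlisted one. A minimal sketch, assuming POSIX-style paths (an illustrative model, not VectorCAST code):

```python
from pathlib import PurePosixPath

def is_included(path, allowlist, denylist):
    """Return True if 'path' is in the Cover environment: the deepest
    allow/deny entry that contains the path wins. Illustrative model
    of the checkbox behavior, not VectorCAST's implementation."""
    p = PurePosixPath(path)

    def depth_of_match(entries):
        # Depth (in path components) of the deepest entry containing p.
        best = -1
        for entry in entries:
            e = PurePosixPath(entry)
            if e == p or e in p.parents:
                best = max(best, len(e.parts))
        return best

    return depth_of_match(allowlist) > depth_of_match(denylist)

# Topmost directory is allowlisted; a subdirectory is denylisted.
allow = ["src"]
deny = ["src/third_party"]
print(is_included("src/managers/manager.c", allow, deny))          # True
print(is_included("src/third_party/zlib/inflate.c", allow, deny))  # False
```

This matches the recommendation in the text: check (allowlist) the topmost source directories, then uncheck (denylist) the subtrees you do not want to cover.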
To include header files in the environment, check the Show Header Files box at the lower left corner of
the window. When checked, header files are listed alongside the source files in the Directory Contents
pane. Checked headers are used for code coverage. Note that checking this also sets the
VCAST_COVERAGE_FOR_HEADERS option.
> Specify the default value of coverage type to set for the environment immediately after building it
(optional). When setting this option, the environment coverage type is set, but no files are
instrumented when the environment is built. Note that coverage type None is not allowed. The
default coverage type is Statement.
> Specify the coverage perspective (optional). The default perspective is Translation Unit.
> Specify the coverage I/O type (optional). The default I/O type is Real Time.
> Specify the instrumentation directory (optional). The default is vc-inst.
> Change instrumentation options or static memory options (optional).
Step 5: Summary
This page of the Cover wizard shows a summary of the files to be added to the Cover environment,
based on the selections made on the Locate Source Files window.
Filtering is available to locate source files of interest. Filter by clicking in the filter row and typing into the
row. In the example below, the list has been filtered to only show source files containing "manager" in
the file name.
To clear the filter, highlight the filter row, right-click and select Clear Filter from the context menu.
Click Build to create the System Testing environment. No coverage instrumentation is performed.
Once the System Testing environment is built, you are able to use VectorCAST/QA as you normally
would to set options, instrument, add or remove source files, etc.
To create a standalone Cover environment using the Wizard, select File => New => Coverage
Environment from the Menu Bar.
Alternatively, click the New button on the Toolbar, or click the arrow to the right of the button
and select Coverage Environment.
> Choose a C/C++ compiler template from the Choose Compiler button drop down menu, or enter
the commands for preprocess and compile. See "To Set the Working Directory" on page 55 for
more information (required if C/C++ tab enabled).
> Add paths to the list of Include directories (optional). These directories are searched for header
files needed for preprocessing.
> Add or remove source file extensions on the Language sub-tab if your C/C++ source code uses
anything other than .c for C source files, or .cpp, .c++, .cxx, .cc for C++ files (optional).
> Add or remove source file extensions on the Ada tab if your Ada source code uses anything other
than .ada, .ADA, .adb, .ADB, .a, or .A (optional).
> Add #defines needed during preprocessing of C/C++ source files (optional).
If the C/C++ tab is enabled, you must specify a valid preprocessor command and compile command on
the Preprocessor/Compiler tab, and on the Linker/Debug tab you must specify a valid linker command
for the Next button to be enabled.
However, only the preprocessor command, compile command, and the Parser Flags are used during
instrumentation if the option “Preprocess files before instrumenting” is on.
Once you have provided the environment name, click the Next button to proceed to Step 3: Locate
Source Files.
First, a Base Directory must be specified. The Base Directory List shows the Base Directories, or
source root directories, for the source code repositories that comprise the environment. Click the arrow
on the upper left to expand the Base Directory List.
To add a repository to the list, select the Add Base Directory button, or right-click in the pane and
select Add... from the context menu. Enter the name of the new directory and select the Add button.
Note when adding directories that base directories cannot overlap. Overlapping base directories are
displayed in red in the Base Directory List, and the text box is displayed with a red outline.
This specifies the location where source files reside. This can be set to an
environment variable or relative path.
Base Directories can be renamed by right-clicking on the directory and selecting Rename... from the
context menu. The Rename Base Directory dialog opens. Enter the new name for the directory and
select the Rename button.
To remove a Base Directory from the list, highlight the directory and select the Remove Base
Directory button, or right-click on the directory and select Remove... from the context menu. A
dialog is displayed confirming that you want to remove the selected directory.
After this base directory is removed from the environment, the allowlisted
items within this directory will no longer be updated.
Next, you must enter the path to the Base Directory. The directory path can be an environment variable
(for example, $(ENV_VARIABLE)), a relative path, or an absolute path.
Note: It is recommended that the path to the Base Directory be portable. Although supported,
the use of an absolute path is discouraged.
Use the file browser button located to the right of the Base Directory field to navigate to the location of
the source directory and select the Choose button.
The path to the Base Directory is saved in the environment script (.enc). This means only a single
change is needed if loading the environment script in a different directory.
Highlight the source directory to view the contents in the Directory Contents pane on the right.
Selecting the checkbox next to the directory includes all the files in the directory. When a directory is
checked, it is added to the allowlist. When a user's source repository changes (for example, a file is
added or removed from the allowlisted directory) and the user updates the environment or runs the
environment script (.enc), the Cover environment automatically identifies any changes that have
occurred in that directory.
Note: To take advantage of the automatic syncing of the files in the Cover environment, it is
recommended that you check the topmost directories in your base directory and then denylist
those files and directories where you do not want to collect code coverage. This allows the
Cover environment to evolve with the source code.
In the Directory Contents pane, use the check boxes next to the source file names to include and
exclude files from the environment. Checked source files are allowlisted and included in the
environment. Unchecked files are denylisted and excluded from the environment.
The topmost checked paths are added to the allowlist. It is recommended to allowlist those parent
directories that contain source code. Subdirectories and files can be unchecked. Any unchecked items
are denylisted from the Cover environment.
The given path will be included in the base directory's allowlist. By adding
paths to the allowlist, the Cover environment can compare its state with the
source repository on update. Denylisted files and directories will be excluded
from the Cover environment on update.
To include header files in the environment, check the Show Header Files box at the lower left corner of
the window. When checked, header files are listed alongside the source files in the Directory Contents
pane. Checked headers are used for code coverage. Note that checking this also sets the
VCAST_COVERAGE_FOR_HEADERS option.
> Specify the default value of coverage type to set for the environment immediately after building it
(optional). When setting this option, the environment coverage type is set, but no files are
instrumented when the environment is built. Note that coverage type None is not allowed. The
default coverage type is Statement.
> Specify the coverage perspective (optional). The default perspective is Translation Unit.
> Specify the coverage I/O type (optional). The default I/O type is Real Time.
> Specify the instrumentation directory (optional). The default is vc-inst.
> Change instrumentation options or static memory options (optional).
Step 5: Summary
This page of the Cover wizard shows a summary of the files to be added to the Cover environment,
based on the selections made on the Locate Source Files window.
Filtering is available to locate source files of interest. Filter by clicking in the filter row and typing into the
row. In the example below, the list has been filtered to only show source files containing "manager" in
the file name.
To clear the filter, highlight the filter row, right-click and select Clear Filter from the context menu.
Any newly added source files within the base directory will be added to the
cover environment. Any source files that no longer exist will be removed from
the cover environment.
Once the Cover environment is built, you are able to use the environment as you normally would to set
options, instrument, add or remove source files, etc.
> environment_name.bat
> environment_name.sh
The command also creates a Cover environment script (.enc) in XML format. The .enc file contains
source file information as well as any Probe Points in the environment. The .enc file is called by the
.bat/.sh file.
If you have imported coverage results present in the environment, an additional file is created:
This command enables you to generate the minimum set of files that allow for the regeneration of the
tool options, test environment, and all source files. The test environment directory does not need to be
maintained when you are not actively testing.
These regression files contain everything you need to recreate the test environment. You simply run the
batch file or shell script and VectorCAST performs the following:
cover_demo.bat (Windows)
del commands.tmp
To Rename an Environment
Choose File => Rename Environment to give the current open environment a new name. A dialog
appears with the current name at the top and the new name is at the bottom. If the new name is not
unique, then it is outlined in red. Type a new name and click OK or Apply.
Once you click OK or Apply, VectorCAST renames the environment directory and the .vce file.
To Update an Environment
From the Update Environment wizard, you can:
Choose Environment => Update Environment... from the Menu Bar to open the Update Environment
Wizard.
Note: Base Directories cannot be added, removed, or modified when the environment is opened
within a VectorCAST project and the Update Environment wizard is requested. However, the
Allow/Deny list can be updated.
For more detailed information on compiler settings, see "Step 1: Choose Compiler" on page 57 in the
"Creating a System Testing Environment" section.
Note: If the Cover environment was created without a base directory but contains source files, a
base directory will automatically be created for the user.
For more detailed information on locating source files, see "Step 3: Locate Source Files" on page 68 in
the "Creating a System Testing Environment" section.
Step 3: Summary
To view how your source code repository differs from the sources within a Cover environment, select
Step 3: Summary. The Summary provides the following information:
A Status column is provided which indicates the status of each file in the environment (Existing,
Removed or Added). Hovering over a status icon will indicate whether a file is newly added, retained or
removed from the environment.
Use the toggle buttons located at the top of the Status column to show/hide the file types. By default,
all toggle buttons are selected. The Status column heading includes the total number of files being
displayed.
To compute status, the set of files on disk is collected by walking over the directories and files given in
the allowlist. Denylisted items are excluded. The status is determined as follows:
> Existing files are those that have been found on disk and that are currently in the Cover
environment.
> Removed files are those that are in the Cover environment, but not on disk.
> Added files are those that are on disk but not in the Cover environment.
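The three statuses above reduce to plain set operations between the files found on disk and the files recorded in the environment. A minimal illustrative model (the function name is hypothetical):

```python
def classify_files(on_disk, in_environment):
    """Partition files into the three statuses shown in the Summary:
    Existing (on disk and in the environment), Removed (in the
    environment but not on disk), and Added (on disk but not in the
    environment). Illustrative sketch, not VectorCAST code."""
    disk, env = set(on_disk), set(in_environment)
    return {
        "Existing": sorted(disk & env),
        "Removed": sorted(env - disk),
        "Added": sorted(disk - env),
    }

status = classify_files(
    on_disk=["src/a.c", "src/b.c", "src/new.c"],
    in_environment=["src/a.c", "src/b.c", "src/old.c"],
)
print(status["Added"])    # ['src/new.c']
print(status["Removed"])  # ['src/old.c']
```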
When all updates are complete, select the Update button to update the environment.
Any newly added source files within the base directory will be added to the
Cover environment. Any source files that no longer exist will be removed from
the Cover environment.
Note: Newly added sources will not be instrumented when the Update button is selected. To
instrument newly added sources for the default coverage type, perform a build within the
VectorCAST project.
The Cover Environment script is not created automatically after an environment build. Use one of the
following methods to create the script:
> Click the Save button in the Create New Cover Environment wizard.
> Create a regression script by selecting Environment => Create Regression Script... from
the Menu Bar. The Cover Environment script is automatically created when creating regression
scripts.
> Load an .enc file from within the Create New Cover Environment wizard. Click the Load button
and select the appropriate .enc file.
Create cover environment from an enc script. Use the 'force' argument to
recreate the environment. Use the 'verbose' argument to display success
messages.
To Delete an Environment
The File => Delete Environment command deletes the current open environment or enables you to
select an environment to delete. Deleting an environment deletes the sub-directory that was created for
that environment and all files contained in the sub-directory. It leaves the .env file that can be used to re-
create the environment later. Note that the test cases are not saved for you. To avoid losing any work,
you should create Regression Scripts before deleting.
Note: The directory created by VectorCAST when building a new environment should only
contain VectorCAST-created files. Do not store any files that are not tool-specific in these
directories. When the File => Delete Environment command is invoked, all files contained in
the specified directory are deleted.
If you wish to delete the current open environment, choose Environment => Delete Environment. You
are asked to confirm deleting the open environment.
If no environment is open when you choose File => Delete Environment, the standard Choose File
dialog appears, providing the opportunity to navigate to find any environment to delete. Locate the .vce
or .vcp file for the environment you want to delete. Click the Open button.
To Open an Environment
There are several ways to open an environment.
> Click the Open button in the toolbar or use File => Open if you need to navigate to find the
environment you want.
> Choose File => Recent Environments to open a recently-opened environment from a list. The
most recently-opened environment is at the top of the list. Coverage environments are shown with
a green icon; Ada environments with an orange icon; and C environments with a blue icon.
> Double-click the environment’s icon (Windows)
To use this method, the VECTORCAST_DIR environment variable must be set at the system level.
The Windows installation program does this for you.
> Use the command line:
To use this method, the VECTORCAST_DIR environment variable must be set in the shell window.
Once you open an environment, the working directory is set to the location of the newly-opened
environment.
Because VectorCAST/QA writes data to the .vcp file on closing the environment, you are restricted
from opening a System Testing environment that is write-protected.
Select Environment => Add Source Files => Add Files... from the Menu Bar to bring up the
standard File Open dialog. This enables you to choose which file(s) in your application are to be
instrumented for code coverage. Alternatively, select the drop-down menu next to the Add Source File
icon on the Toolbar and select Add Files.... You can select more than one file by holding the
Ctrl or Shift key as you highlight filenames with your mouse.
You can add source code files to an environment by dragging-and-dropping the files from a File Explorer
window onto the Source Files pane. When you use drag-and-drop to add source files, VectorCAST only
allows you to add files with a recognized source file extension.
Once source files have been added to a Coverage environment, they are listed in the Source Files pane.
The three columns to the right of the filename column provide coverage status information for
Statement, Branch, and MC/DC Pairs. These columns are currently empty because no coverage type
has been selected yet.
The name and path to each source file is saved in the envname.vcp file, along with the location of its
instrumented version. Each unit is assigned a unit number, starting with 1 by default.
Source files with the same name from different locations may be added to a Cover environment.
Select Environment => Add Source Files => Add Source Files Recursively... => <language>
from the Menu Bar to bring up the standard File Open dialog. This enables you to add all source files
found in the starting directory and all sub-directories. You choose either C and C++ source files, or Ada
source files for <language>. Alternatively, select the drop-down menu next to the Add Source File icon on the Toolbar.
Choosing C/C++ or Ada brings up the standard File Open dialog. From there, choose which directory
you want to start in, and click OK. VectorCAST adds all source files with a recognized source file
extension for that language to the Coverage environment.
If you are using Base Directories, the directory given will be allowlisted. For more information on using
Base Directories, see "Step 3: Locate Source Files" on page 68.
Source files having the same name in different locations may be added to a Cover environment.
By default, VectorCAST looks through 100 directories for source files and asks if you want to continue.
When this limit is reached, a message appears:
If you chose the wrong starting directory and you want to abort the operation and not add any
directories, click Cancel. If you want to continue searching the next 100 directories for source and
header files, click Yes. If you want to stop the operation but retain the source files already added, click
No.
This limit, called “Maximum directories added recursively,” is configurable. It is located on the Tools =>
Options dialog, GUI tab.
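The search-and-confirm behavior above can be sketched as a directory walk that pauses each time the configured limit is reached. In this sketch the `ask` callback stands in for the Yes/No/Cancel dialog; the code is an illustrative model, not VectorCAST's implementation:

```python
import os

def find_sources(start, extensions, limit=100, ask=lambda: "yes"):
    """Walk 'start' collecting files with the given extensions,
    pausing after every 'limit' directories. 'ask' stands in for the
    dialog described above: 'yes' continues, 'no' keeps the files
    found so far, 'cancel' discards everything. Illustrative sketch."""
    found, visited = [], 0
    for dirpath, _dirs, files in os.walk(start):
        visited += 1
        # Pause for confirmation after each block of 'limit' directories.
        if visited > limit and (visited - 1) % limit == 0:
            answer = ask()
            if answer == "cancel":
                return []          # abort, add nothing
            if answer == "no":
                return found       # stop, keep files already found
        found.extend(
            os.path.join(dirpath, f)
            for f in files
            if os.path.splitext(f)[1] in extensions
        )
    return found
```

For example, `find_sources("src", {".c", ".cpp"})` would mirror a recursive add of C/C++ sources with the default limit of 100 directories.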
To include header files when creating a new environment, go to Step 3 Locate Source Files in the
Create New Environment wizard. Check the Show Header Files box at the lower left corner of the
window. When checked, header files are listed alongside the source files in the Directory Contents
pane. Checked headers are used for code coverage. Note that checking this also sets the
VCAST_COVERAGE_FOR_HEADERS option.
To include header files when updating an existing environment, go to Step 2 Locate Source Files in the
Update Environment wizard. Check the Show Header Files box at the lower left corner of the window.
When checked, header files are listed alongside the source files in the Directory Contents pane.
Checked headers are used for code coverage. Note that checking this also sets the
VCAST_COVERAGE_FOR_HEADERS option.
After instrumentation, the Coverage Viewer of a source unit expands lines where the header file was
#included to show the full text of the header file. The header files are not listed in the Source Files Pane
as source code units.
Only header files located in the same directory as a source unit, or located in Search directories, are
expanded to achieve code coverage. (Header files in Library Include directories or
Type-handle-only directories are not expanded.)
To “unflatten” the display, right-click on any source file in the Source Files pane, and uncheck Flatten.
When you uncheck “Flatten,” the Source Files pane changes to a hierarchical representation of the
location of the source files in the file system.
Note: The source code can be modified by first clicking the “Writable” checkbox at the bottom of
the Text Editor. Once modified and saved, the source file must be re-instrumented and re-
compiled for the changes to take effect.
The following sections discuss common scenarios associated with working with base directories.
For information on suppressing test code coverage, see "Suppressing Test Code Coverage" on page
216.
In cases where a source code repository is identified as missing, VectorCAST still allows you to open
the environment and generate reports, but instrumenting and other features are not
available.
Note: When in Source File perspective, Aggregate Coverage reports and the Coverage Viewer
are not able to display the source code when the repository is missing.
However, when the source code repository is not missing but just moved to a new location (on a
machine of the same platform on which the environment was created), VectorCAST supports updating
the Base Directory path.
Note: Moving or copying base directories for Cover environments that are part of a VectorCAST
System Test Project is not supported.
The following sections discuss three common scenarios associated with moving base directories.
clicast -e <cover env> Cover Base_dir Add <name of base directory> <full path
to repository>
clicast -e <cover env> Cover Base_dir SEt_path <name of the base directory>
<path to new location>
Alternatively, when opening a Cover environment using the VectorCAST GUI after the source code
repository was moved, the Update Environment Wizard automatically displays, prompting the user to
adjust the Base Directory path in the Wizard.
The path should be set to the root of the original Base Directory so that the directory structure is
retained. This permits the allowlist, denylist, and source files to be loaded into the Update Environment
wizard.
Note: Any missing Include directories located on the Choose Compiler page will also need to be
updated to point to their new locations.
Navigate to the Summary page to verify that the source files in the original Base Directory are
preserved.
After clicking the Update button, the user can proceed with normal functionality such as generating
reports, opening the Coverage Viewer, and instrumenting sources.
clicast -e <cover env> Cover Base_dir Add <name of base directory> '$(ENV_VAR)'
where ENV_VAR is the name of the environment variable that has been set to
the root location of the source repository.
For example, suppose the environment variable SOURCE has been set to '/home/work/source1'
and a Cover environment created with a Base Directory added like this:
clicast -e <cover env> Cover Base_dir Add SourceRoot '$(SOURCE)'
Later, if the repository has been moved to the directory '/home/work/source2', or simply the code
in the directory 'source2' is now to be used in the Cover environment, then the environment variable
is updated to reflect the new location. In this case, opening the Cover environment is seamless and no
further updates need to be made.
Note: The VectorCAST GUI must be closed when the environment variable is set to a new path.
For example, if the Cover environment is in a directory named 'work', and the source code repository
in 'work/source', then a Base directory named SourceRoot can be added like this:
clicast -e <cover env> Cover Base_dir Add SourceRoot source
When both the environment and source code repository are moved together, the environment can be
used without needing to update the Base Directory path.
Setting Coverage
To Set Coverage Type
Any source file added to an environment is set by default to the environment coverage type. Once a
source file has been added to an environment, you may then indicate which type of coverage you want,
and you may assign a coverage type different from the environment coverage type. Different source
files can be set with different types of coverage, if appropriate. For example, code with few branches
may be best suited for Statement coverage, while code with many branches may be better suited to
Branch coverage.
You can choose from several types of coverage. The coverage type choices are dependent upon the
current Industry Mode. The Default industry mode offers the following coverage types:
The Default Industry coverage types map to industry types as shown in the following table:
Statement Level A Unit Level ASIL A SIL 1/2 SIL 1/2 Class A
Level B Unit Level ASIL B/C SIL 3 SIL 3 Class B
Level C Unit Level ASIL D SIL 4 SIL 4 Class C
The default coverage type is set at the environment level. To set the default coverage type for the
environment, select Coverage => Set Coverage Type => <coverage_type> from the Menu Bar. The
current default coverage type is indicated with a checkmark on the Coverage Type menu.
To set the coverage type for an individual source file, right-click a source file in the Source Code Files
window and select the coverage type from the context menu.
Note: Changing the Industry Mode may affect the coverage type displayed in the coverage type
column of the Environment View. In some instances when Industry Mode is changed, the new
mode may not have a coverage type corresponding to the current environment coverage type. In
such a case, no coverage type in the list will appear with a checkmark against it.
Set the coverage type for a cover environment or source file. Use --clear to
clear the coverage type for a source file or for all source files within the
environment.
For example, to set the coverage type for the entire environment, enter the following:
This allows the user to change the environment coverage type and consequently the coverage type of
all the files within the environment that are set to the default coverage type.
To set the coverage type for a specific file, enter the following:
To clear the coverage type for a specific file, enter the following:
This resets the source coverage type to be the default coverage type.
To clear the coverage type for all files, enter the following:
This resets the source coverage type of ALL files to be the default coverage type.
Instrumenting Files
Note: When changing coverage types, instrumentation does not occur. You must select
Instrument from the right-click menu to instrument an individual source file, or re-instrument all
files by selecting Environment => Re-Instrument All Source from the Menu Bar.
After instrumenting a file, its coverage status column contains a status bar for the selected coverage.
The status bars show the percentage of code coverage achieved for each file. For example, with a file
that has 50% coverage, half of the status bar will be green and half will be white. This allows you to
quickly see a dashboard of code coverage percentages.
Additionally, you may hover over a file name to see the exact coverage percentages.
If the Source Files pane is displaying the files in a hierarchy, instrumentation applies to the source files
in the selected directory.
Instrument for code coverage. Specify a coverage type to set the Environment
Coverage Type before instrumentation. Specify a unit and coverage type to set
the Source Coverage Type before instrumentation. Set coverage type as None to
uninstrument. Note that uninstrument will not change the Environment or
Source Coverage Type.
If another unit with the same base name is already present in the
environment, then a full path or a relative path (to working directory) must
be specified.
1 5 * switch (Order.Entree)
{
case NO_ENTREE :
1 6 break;
case STEAK :
1 7 * Table_Data.Check_Total = Table_Data.Check_Total + 14.0;
1 8 * break;
case CHICKEN :
1 9 * Table_Data.Check_Total = Table_Data.Check_Total + 10.0;
1 10 * break;
...
To instrument a source file for Statement coverage, first check to see that the environment coverage
type is set to Statement. (If not, set the coverage type at either the environment level or at the individual
file level.) Then select the source file, right-click and choose Instrument.
Statement coverage maps to the Industry Modes as shown in the following table:
Statement   Level A   Unit Level ASIL A     SIL 1/2   SIL 1/2   Class A
            Level B   Unit Level ASIL B/C   SIL 3     SIL 3     Class B
            Level C   Unit Level ASIL D     SIL 4     SIL 4     Class C
To instrument Statement coverage when using Industry Modes, choose Coverage => Set Coverage
Type => Statement based on the current Industry Mode. Then select Environment => Reinstrument
All Source from the Menu Bar.
As VectorCAST instruments the test harness, a process dialog appears and a message is displayed in
the Message window:
Before you can inspect coverage results, you need to execute a test case.
To instrument a source file for Branch coverage, first check to see that the environment coverage type
is set to Branch. (If not, set the coverage type at either the environment level or at the individual file
level.) Then select the source file, right-click and choose Instrument.
Branch coverage maps to the Industry Modes as shown in the following table:
To instrument Branch coverage when using Industry Modes, choose Coverage => Set Coverage
Type => Branch based on the current Industry Mode. Then select Environment => Reinstrument
All Source from the Menu Bar.
Each branch point will have space for either one or two condition values. Boolean decision points (i.e., if
statements) are displayed with two place holders, because the condition can be either true or false, as
in the following example:
1 0 (T) Add_Included_Dessert
1 1 (T) (F) if((Order->Entree == STEAK &&
Order->Salad == CAESAR &&
Order->Beverage == MIXED_DRINK))
{
Order->Dessert = PIE;
}
else
1 3 (T) (F) if((Order->Entree == LOBSTER &&
Order->Salad == GREEN &&
Order->Beverage == WINE))
{
Order->Dessert = CAKE;
}
}
Switch statements are handled by treating each “case” as a single decision branch point, as in the
following example:
1 1 switch (Order.Entree)
{
1 2 ( ) case NO_ENTREE :
break;
1 4 (T) case STEAK :
Table_Data.Check_Total = Table_Data.Check_Total + 14.0;
break;
1 6 (T) case CHICKEN :
Table_Data.Check_Total = Table_Data.Check_Total + 10.0;
break;
...
To instrument a source file for Basis Paths coverage, first check to see that the environment coverage
type is set to Basis Paths. (If not, set the coverage type at either the environment level or at the
individual file level.) Then select the source file, right-click and choose Instrument.
Instrument for Basis Paths coverage. Specify a coverage type to set the
Environment Coverage Type before instrumentation. Specify a unit and coverage
type to set the Source Coverage Type before instrumentation. Set coverage
type as None to uninstrument. Note that uninstrument will not change the
Environment or Source Coverage Type.
To instrument a source file for MC/DC coverage, first check to see that the environment coverage type
is set to MC/DC. (If not, set the coverage type at either the environment level or at the individual file
level.) Then select the source file, right-click and choose Instrument.
Note: VectorCAST displays a progress bar when initializing MC/DC or Level A coverage on a
source file having an expression with more than 10 sub-expressions.
MC/DC coverage maps to the Industry Modes as shown in the following table:
To instrument MC/DC coverage when using Industry Modes, choose Coverage => Set Coverage
Type => MC/DC based on the current Industry Mode. Then select Environment => Reinstrument
All Source from the Menu Bar.
Pairs Status
When MC/DC coverage is instrumented, an additional “Pairs” metric is available in the coverage
Metrics Report (Environment => View => Metrics Report) and in the Test Results Management
Report (Environment => View => Management Report).
Pairs Status consists of the total number of MC/DC pairs satisfied compared to the total number of
MC/DC pairs in the unit.
As VectorCAST instruments the test harness, a process dialog appears and a message is displayed in
the Message window:
Before you can inspect coverage results, you need to execute a test case.
Instrument for MC/DC coverage. Specify a coverage type to set the Environment
Coverage Type before instrumentation. Specify a unit and coverage type to set
the Source Coverage Type before instrumentation. Set coverage type as None to
uninstrument. Note that uninstrument will not change the Environment or
Source Coverage Type.
The comments must exist on lines with no other text, immediately preceding the source code line for
which you want to modify the coverage. The text can be upper- or lowercase.
Note: For C and C++ source code, the comments should be inside the function’s curly brackets;
for Ada, the comments should be inside the begin block. There can be no spaces after the
comment characters.
For example:
//VCAST_DONT_DO_MCDC (C example)
or
/*VCAST_DONT_DO_MCDC*/ (C example)
or
--VCAST_DONT_DO_MCDC (Ada example)
Specific statement to which you do not want MC/DC coverage applied
//VCAST_DO_MCDC (C example)
or
/*VCAST_DO_MCDC*/ (C example)
or
--VCAST_DO_MCDC (Ada example)
Specific statement to which you DO want MC/DC coverage applied
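A minimal sketch of the single-line form in C (the function and parameter names below are hypothetical, not from the manual): the comment sits on its own line, inside the function body, immediately above the decision it affects.

```c
#include <assert.h>

/* Hypothetical helper: names are illustrative. The VCAST comment on its
   own line directs the instrumenter to skip MC/DC analysis for the
   decision immediately following it. */
int add_dessert(int steak, int caesar, int mixed_drink)
{
    /*VCAST_DONT_DO_MCDC*/
    if (steak && caesar && mixed_drink)  /* no MC/DC pairs generated here */
        return 1;
    return 0;
}
```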
MC/DC instrumentation can also be suppressed on multiple lines of code by placing a pair of comments
around the code in the source file where suppression is desired.
/*VCAST_DONT_DO_MCDC_START*/
/*VCAST_DONT_DO_MCDC_END*/
In the following example, the comments direct VectorCAST to avoid applying MC/DC instrumentation
on a && b when MC/DC or Statement+MC/DC coverage is used. MC/DC tests will also omit test
generation for a && b.
/*VCAST_DONT_DO_MCDC_START*/
if (a && b)
{
}
/*VCAST_DONT_DO_MCDC_END*/
Instrument Statement+MC/DC
Statement+MC/DC coverage maps to the Industry Modes as shown in the following table:
To instrument a source file for Statement+MC/DC coverage, first check to see that the environment
coverage type is set to Statement+MC/DC. (If not, set the coverage type at either the environment
level or at the individual file level.) Then select the source file, right-click and choose Instrument.
As VectorCAST instruments the test harness, a process dialog appears and a message is displayed in
the Message window:
Before you can inspect coverage results, you need to execute a test case.
Instrument for Statement and MC/DC coverage. Specify a coverage type to set
the Environment Coverage Type before instrumentation. Specify a unit and
coverage type to set the Source Coverage Type before instrumentation. Set
coverage type as None to uninstrument. Note that uninstrument will not change
the Environment or Source Coverage Type.
Instrument Statement+Branch
To instrument a source file for Statement+Branch coverage, first check to see that the environment
coverage type is set to Statement+Branch. (If not, set the coverage type at either the environment
level or at the individual file level.) Then select the source file, right-click and choose Instrument.
Statement+Branch coverage maps to the Industry Modes as shown in the following table:
To instrument Statement+Branch coverage when using Industry Modes, choose Coverage => Set
Coverage Type => Statement+Branch based on the current Industry Mode. Then select
Environment => Reinstrument All Source from the Menu Bar.
As VectorCAST instruments the test harness, a process dialog appears and a message is displayed in
the Message window:
Before you can inspect coverage results, you need to execute a test case.
Instrument for Statement and Branch coverage. Specify a coverage type to set
the Environment Coverage Type before instrumentation. Specify a unit and
coverage type to set the Source Coverage Type before instrumentation. Set
coverage type as None to uninstrument. Note that uninstrument will not change
the Environment or Source Coverage Type.
Function coverage maps to the Industry Modes as shown in the following table:
To instrument Function coverage when using Industry Modes, choose Coverage => Set Coverage
Type => Function based on the current Industry Mode. Then select Environment => Reinstrument
All Source from the Menu Bar.
As VectorCAST instruments the test harness, a process dialog appears and a message is displayed in
the Message window:
Before you can inspect coverage results, you need to execute a test case.
Function + Function Call coverage maps to the Industry Modes as shown in the following table:
To instrument Function + Function Call coverage when using Industry Modes, choose Coverage =>
Set Coverage Type => Function+Function Call based on the current Industry Mode. Then select
Environment => Reinstrument All Source from the Menu Bar.
As VectorCAST instruments the test harness, a process dialog appears and a message is displayed in
the Message window:
Before you can inspect coverage results, you need to execute a test case.
Instrument for Function and Function Call coverage. Specify a coverage type
to set the Environment Coverage Type before instrumentation. Specify a unit
and coverage type to set the Source Coverage Type before instrumentation. Set
coverage type as None to uninstrument. Note that uninstrument will not change
the Environment or Source Coverage Type.
To instrument a source file for Probe Point coverage, first check to see that the environment coverage
type is set to Probe Point. (If not, set the coverage type at either the environment level or at the
individual file level.) Then select the source file, right-click and choose Instrument.
Instrument for Probe Point coverage. Specify a coverage type to set the
Environment Coverage Type before instrumentation. Specify a unit and coverage
type to set the Source Coverage Type before instrumentation. Set coverage
type as None to uninstrument. Note that uninstrument will not change the
Environment or Source Coverage Type.
Create a text report in csv format detailing the number of bytes used to
instrument all units for 16-bit and 32-bit architectures. The report is saved
to <ENV_NAME>_ram_usage.csv unless <filename> is provided.
An example of the report obtained on a Cover environment using the C tutorial files instrumented for
MCDC coverage is:
The RAM usage for "Mcdc Overhead" is affected by the value of the option "Maximum MC/DC
expressions" (VCAST_MAX_MCDC_STATEMENTS).
A Confirm Result Changes dialog may open, alerting the user to a change in instrumentation type for
the source file. Note that the coverage data in the result files will be removed. To retain any Notes and
Requirements, select the checkbox in the bottom pane. Select the Make Change button to confirm. A
process dialog appears as the file is re-instrumented.
To solve this problem, either delete the unit from the Cover environment or replace the missing unit in
the original location.
After reinstrumenting a source file, the test results may be invalid. A Confirm Result Changes dialog
opens, identifying the files that have changed. The coverage data in the result files will be removed. To
retain any Notes and Requirements, select the checkbox in the bottom pane. Select the Make Change
button to confirm. A process dialog appears as the file is re-instrumented.
Re-Instrument all source files, retaining the same coverage type but using
the current Coverage I/O type and options.
/*VCAST_DONT_INSTRUMENT_START*/
C or C++ code that you do not want to be instrumented
/*VCAST_DONT_INSTRUMENT_END*/
//VCAST_DONT_INSTRUMENT_START
C++ code that you do not want to be instrumented
//VCAST_DONT_INSTRUMENT_END
The text can be upper- or lowercase, but there can be no spaces between the comment characters and
the text of the delimiter. Care must be used to ensure that the start and end delimiters are within the
same scope. That is, if you have a case statement, you do not insert the delimiters such that half of the
case statement is not instrumented and half is instrumented for code coverage.
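As a sketch of correct placement (function and case values are hypothetical), the START/END pair encloses the entire switch statement, so both delimiters sit in the same scope:

```c
#include <assert.h>

/* Illustrative only: the whole switch is inside the delimiter pair,
   never just some of its cases. */
int entree_price(int entree)
{
    int price = 0;
    /*VCAST_DONT_INSTRUMENT_START*/
    switch (entree) {
        case 1: price = 14; break;   /* STEAK */
        case 2: price = 10; break;   /* CHICKEN */
        default: break;
    }
    /*VCAST_DONT_INSTRUMENT_END*/
    return price;
}
```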
For C and C++, you must ensure that your compiler’s pre-processor leaves comments in the
preprocessor output. For most compilers, this option is 'dash capital C' (-C). To turn on this option, go to
the Tools => Options dialog, C/C++ tab, and add -C to the Preprocessor command.
vcast_dont_instrument_start
vcast_dont_instrument_end
vcast_dont_do_mcdc_start
vcast_dont_do_mcdc_end
vcast_dont_do_mcdc
vcast_do_mcdc
For example:
#pragma vcast_dont_do_mcdc_start
int a = b && c;
#pragma vcast_dont_do_mcdc_end
Example in C
Cover env with source files instrumented, in the original source file location. The cover I/O files are located
in the environment directory. We include (-I) that directory to get them.
Example in Ada
Cover env with source files instrumented, in the original source file location. The cover I/O files are located
in the environment directory. We include (-I) that directory to get them.
Depending on the language of your source files, you would compile and link one of them with your
instrumented source files.
IMPORTANT: Note that you may need to customize this I/O file to support the particular I/O mode and
I/O API that may exist on your target. See the section below "To Customize the Cover I/O File" on page
113 that describes the changes you may need to make to c_cover_io.c before you include this in your
project and build with the instrumented source files.
Note: If you are using GNAT, you must run gnatchop -w on the instrumented source files
before compiling them into the executable program.
Note that the file <cover env>/coupling/couplingData.c (which implements the coupling
functions) must be compiled and linked when compiling C or C++ source files. This is done in the same
manner as compiling and linking the c_cover_io.c file. See "Build the Instrumented Application" on
page 290 for more information on coupling instrumentation.
In this situation, users can direct VectorCAST to generate the c_cover_io.c/.cpp file with fopen_
s() instead. This is accomplished by adding the defined variable VCAST_USE_FOPEN_S to the C_
DEFINE_LIST in the CCAST.CFG file or by adding it as a defined variable to the compile command for
the instrumented source files.
To use this new file, first copy it from the VectorCAST installation directory, IO subdirectory. (If using
GNAT, you will need to gnatchop it.) Then compile your instrumented source files into an executable.
Note: Coverage types MC/DC and DO-178B Level A are not supported when using this
coverage I/O file. If you instrument one of these types, you will get coverage data that is missing
the MC/DC branch data. That is, MC/DC data will be completely empty, and Level A data will
only include Statement data.
cl -DVCAST_CUSTOM_STD_OUTPUT -c c_cover_io.c
After compiling your instrumented source code with your custom c_cover_io.c file, you can run the
resultant executable to produce the coverage data out the serial port. Be sure that your serial port cable
is properly hooked up to your test PC and that you have connected via the HyperTerminal application.
cl -DVCAST_CUSTOM_FILE_OUTPUT -c c_cover_io.c
This compile command defines VCAST_CUSTOM_FILE_OUTPUT, which causes the user-supplied
code to be included instead of the standard default VectorCAST file I/O code. If you forget to insert
user code in these functions, you may get any of the following error messages when you compile
your project (where Ennnn is a compiler-specific error code):
Filename(line_number): Error!
Ennnn: You need to insert code in vCAST_CREATE_INST_FILE()
Filename(line_number): Error!
Ennnn: You need to insert code in VCAST_CLOSE_FILE()
Filename(line_number): Error!
Ennnn: You need to insert code in VCAST_WRITE_TO_INST_FILE_COVERAGE()
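A minimal sketch of what user-supplied bodies for these three hooks might look like on a hosted target with standard file I/O. The function names come from the error messages above, but the exact signatures in the shipped c_cover_io.c may differ on your target, so treat these as assumptions and verify them against that file before use:

```c
#include <stdio.h>

/* Hypothetical sketch: open, write, and close the coverage results file.
   Signatures are assumed, not taken from c_cover_io.c. */
static FILE *vcast_fp = NULL;

void VCAST_CREATE_INST_FILE(void)
{
    vcast_fp = fopen("TESTINSS.DAT", "w");   /* open the results file */
}

void VCAST_WRITE_TO_INST_FILE_COVERAGE(const char *data)
{
    if (vcast_fp)
        fputs(data, vcast_fp);               /* stream one coverage record */
}

void VCAST_CLOSE_FILE(void)
{
    if (vcast_fp) {
        fclose(vcast_fp);
        vcast_fp = NULL;
    }
}
```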
cl -DVCAST_STD_IO -c c_cover_io.c
This compile command defines VCAST_STD_IO, which causes the data to be streamed to the
standard out.
Use the following defined variables when you compile the c_cover_io.c or c_cover_io.cpp file to set
options for the test output:
Add the results file TESTINSS.DAT to the VectorCAST Cover environment whenever you need to
see the coverage results of your actions.
You can also access this command by clicking on the icon in the toolbar.
Add the result file <result-file> to the Cover environment and name it
<results-name>. If the optional <results-name> is not specified, VectorCAST
uses the file name as the results name.
The optional <notes text> or <notes file> provides the contents of the
result’s Notes tab, and the optional <requirements text> or <requirements
file> provides the contents of the result’s Requirements tab.
If you choose more than one .DAT file, most likely you have renamed them something other than
TESTINSS.DAT. VectorCAST Cover provides the opportunity to retain those names. When more than
one result file is selected, a dialog appears, asking if you want to name the test result in VectorCAST
Cover the same name as the result file itself. In the example below, the result file is named p.DAT.
Clicking Yes names the result the same name as the file, and repeats for the other files you selected.
Clicking Yes to All adds all of the result files and auto-names them. If you choose No, another dialog
box will prompt you to enter a name for the test result.
To update a test result, right-click it in the Test Results pane and choose Update. Doing so brings up a
dialog box enabling you to navigate to the directory where the TESTINSS.DAT file has been stored.
In this Open File dialog, you are restricted to choosing one .DAT file. Once you click Open, the data in
that file replaces the data in the test result. Any Notes or Requirements already present in the Test
Result are not removed.
Update the test result named <result-name> with <new-results file> and/or
Notes and/or Requirements, if provided.
The notes follow the test result when exported to a coverage script and when imported to another
environment. When exported to a coverage script, they are notated as:
RESULT.NOTES
This test result was created using the defined variable VCAST_APPEND_TO_TESTINSS, which
causes the coverage data from all invocations of the application to be appended to the
same TESTINSS.DAT file.
12/20/2017 4:04:50 PM
RESULT.END_NOTES
To illustrate, the Requirements Gateway will be used to import the requirements.csv file, located in the
%VECTORCAST_DIR%\Tutorial directory.
Select Tools=> Requirements Gateway => Requirements Gateway. When the Requirements
Gateway opens, click the Options tab, and then select CSV as the Subsystem profile and set the CSV
file path to c:\VCAST\Tutorial\requirements.csv.
Click Get Fields and then select the Import tab. In the Attributes to import panel set Key attribute to
Key, ID attribute to ID, Title attribute to Title and the Description attribute to Description. Finally, click
Import.
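A requirements.csv file with those four attributes might look like the following sketch. The header row matches the attributes named above; the row contents are placeholders, not the actual tutorial data:

```csv
Key,ID,Title,Description
FR14,FR14,Example title for FR14,Example description for FR14
FR15,FR15,Example title for FR15,Example description for FR15
```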
After the import completes, recompile the source files, execute the tests and import the test results.
When the test result is opened, the Requirements tab contains all requirements on the left. You can
double-click particular requirements to move them to the right, which indicates that they are associated
with this test result.
The requirements follow the test result when exported to a coverage script and when imported to
another environment (which has the same repository setting). When exported to a coverage script, they
are notated as:
RESULT.REQUIREMENTS
FR14
FR15
FR16
FR17
RESULT.END_REQUIREMENTS
By checking more than one results checkbox, you can see the aggregate coverage achieved by those
test results. In the upper right corner of the Coverage Viewer, a summary bar of the coverage is
displayed. For Statement coverage, the display reads “Statements.” Hover over the display to see the
tool tip showing how many statements are covered out of the total, and a percentage. For branch and
MC/DC coverage, the display reads “Branches,” and the tool tip shows “n of m branches covered (%).”
For MC/DC coverage, an additional display reads "Pairs" and the tool tip shows "n of m pairs covered
(%)".
Statement Coverage
When Statement coverage is active, the Coverage Viewer looks like the following example:
The numbers along the left-hand margin indicate the subprogram number and the statement number. An
asterisk (*) indicates that the statement was covered (along with the color-coding).
Branch Coverage
When Branch coverage is active, the Coverage Viewer looks like the following example:
The left-most number indicates the subprogram number, and the second number indicates the decision
number. For Boolean decision points, two sets of parentheses are visible and are filled in with a T or F
when the True or False outcomes have been tested. For switch statements, one set of parentheses is
visible and a (T) is filled in when that branch outcome has been tested.
MC/DC Coverage
When MC/DC coverage is active, the Coverage Viewer looks like the following example:
The left-most number indicates the subprogram number, and the second number indicates the decision
number. For Boolean decision points, two sets of parentheses are visible and are filled in with a T or F
when the True or False outcomes have been tested. For decision points that have multiple components
(e.g., a || b), each sub-condition is displayed on a separate line. The sub-conditions are numbered
.1, .2 to .n, where n is the total number of sub-conditions.
For MC/DC or STATEMENT+MC/DC coverage, the left margin includes a red arrow that, when
clicked, opens the Equivalence Matrices below the associated subprogram in the Coverage Viewer.
Function Coverage
When Function coverage is active, the Coverage Viewer looks like the following example:
The numbers along the left-hand margin indicate the subprogram number. An asterisk (*) indicates that
the function was covered (along with the color-coding). A subprogram is considered covered 100% in
Function coverage if it is entered during test execution, and 0% if it is not.
The numbers along the left-hand margin indicate the subprogram number. An asterisk (*) indicates that
the function was covered (along with the color-coding). A subprogram is considered covered 100% in
Function coverage if it is entered during test execution, and 0% if it is not. Function Call coverage
identifies all of the calls that each function makes to other functions and identifies whether those calls
have been tested.
When the arrow buttons are clicked, the coverage source window scrolls to show the next covered or
uncovered line. Partially-covered lines are considered both covered and uncovered. Contiguous blocks
of lines are skipped over, until a line of a different block type is found or a line off-screen is found.
If there are no lines found, the message “Search reached bottom” or “Search reached top” appears,
depending on which direction you were searching.
In addition, pressing the Home or End key brings the cursor to the beginning or end of the line,
respectively; pressing Ctrl+Home or Ctrl+End brings the cursor to the beginning or end of the file,
respectively.
Use the Expand and Collapse menu items as “filters” or “layers” to get the exact display you want in the
Coverage Viewer.
You can also expand and collapse individual subprograms by clicking the (collapse) and (expand)
icons located on the left of the viewer.
Collapsing and expanding the subprograms in the Coverage Viewer does not affect any of the Coverage
reports.
By default, selecting a line in the Coverage Viewer, Original Source text editor or CBA Editor will
automatically scroll to the corresponding line in any other open editor. To change the behavior, right-
click within the Coverage Viewer or CBA Editor and uncheck the Map Selection to Original Source
View option on the context menu.
Alternatively, in the Original Source text editor, right-click and uncheck the Map Current Line to
Coverage View option on the context menu.
Delete the result file named <result-name> from the Cover environment, or all
results, if “ALL” is specified.
VectorCAST provides the ability to control what coverage data appears in reports containing the
Aggregate Coverage Report or Metrics Report. To set the coverage data used, select Environment =>
View => Coverage data used in reports from the Menu Bar. Then, select one of the sub-menu
options:
> All results (default) - Use coverage data from all test results in the environment, even if
unchecked (unselected) when generating reports.
> Only checked results - Use coverage data only from the checked (selected) test results when
generating reports.
The setting is saved in the Cover environment's project file (.vcp), so it applies to that specific
environment, and carries over to any other environment created in that instance of VectorCAST.
Note: This option does not apply to reports generated via clicast, which uses all results.
The report is sensitive to what level is selected in the Source Files pane. If you select the top-level
node, the Metrics table includes coverage achieved by all test results. You can narrow the content
down by selecting particular source files in the Source Files pane for which you want coverage
achieved. Once you have made a selection, choose Environment => View => Metrics Report.
The report contains coverage data from all test results within the selection, regardless of whether they
are toggled on to show coverage or not.
If coverage data is not available, then only the Unit, Subprogram, and Complexity columns are present
in the Metrics Report.
Extract the Metrics report for all units or the specified unit to standard
output or the specified output file. If VCAST_CUSTOM_REPORT_FORMAT is HTML
and <outputfile> is not specified, the file is saved to metrics_report.html.
If the report format is HTML, the Metrics Report looks similar to the following:
Pairs Status
When MC/DC or STATEMENT+MC/DC coverage is initialized, an additional “Pairs Status” column is
added to the Metrics Report section. MC/DC Pairs consists of the total number of MC/DC pairs
satisfied compared to the total number of MC/DC pairs in the unit.
If the report format is HTML, the report looks similar to the following:
The first section of the report displays by function, showing where the corresponding calls to the
function are made from, if each call is covered, and the percentage of covered calls.
The second section of the report displays function calls in which the caller function cannot be
determined. For example, C++ Virtual functions are shown in the Unsupported Function Calls section
of the report for this reason.
To access the Function Call Report, right-click on the source file in the Source Files pane and select
Environment => View => Metrics Report. Note that this option is only available when function call
coverage is present.
environment.
If this menu item is dimmed, choose Tools => Basis Path => Generate Tests.
The number of paths identified equals the code complexity, also referred to as V(g). The code
complexity corresponds to the number of test cases that must be developed to exercise the unique
paths in a subprogram.
This information can also be viewed by accessing the Basis Paths tab of the Coverage window.
Extract the basis path analysis for all units or the specified units to
standard output or the specified output file.
Test Path 1
(1) if ( value < min ) ==> FALSE
(3) if ( value > max ) ==> FALSE
Test Path 2
(1) if ( value < min ) ==> FALSE
(3) if ( value > max ) ==> TRUE
Test Path 3
(1) if ( value < min ) ==> TRUE
Complexity: 3
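A hypothetical range-check function that yields exactly these three basis paths (the name and return values are illustrative, not from the manual). With complexity 3, three test cases exercise the unique paths:

```c
#include <assert.h>

/* Illustrative only: two sequential decisions give V(g) = 3,
   matching Test Paths 1-3 above. */
int validate(int value, int min, int max)
{
    if (value < min)     /* branch (1): TRUE is Test Path 3 */
        return -1;
    if (value > max)     /* branch (3): TRUE is Test Path 2 */
        return 1;
    return 0;            /* both FALSE: Test Path 1 */
}
```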
In most instances, the value of the branch point condition will make use of a subprogram call, or some
intermediate data computation. In these cases, more work is required to find the value or values
required to force a condition to the desired value.
Basis paths are determined by parsing the code and applying the basis path algorithm to the list of
branch point nodes that result. As a result, it may be impossible to satisfy a particular branch point
when building test cases. The following examples illustrate this point:
Example 1:
const int road_speed_limit = 55;
const int truck_speed_limit = 50;
int index;
Because truck_speed_limit is a constant, and always less than road_speed_limit, the “else” condition
can never be reached.
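Filling in the truncated listing with a hypothetical comparison shows why the branch is infeasible. The if statement and function below are assumptions added for illustration:

```c
#include <assert.h>

/* Both limits are compile-time constants, so the comparison always
   takes the same branch; the else path can never be covered. */
static const int road_speed_limit  = 55;
static const int truck_speed_limit = 50;

int under_limit(void)
{
    if (truck_speed_limit < road_speed_limit)
        return 1;    /* always taken */
    else
        return 0;    /* unreachable basis path */
}
```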
Example 2:
for (index=1; index<10; index++)
keep_going();
Because the loop boundaries are constant, the loop will always get executed and thus the basis path
calling for the loop to not be entered cannot be satisfied.
MC/DC Coverage
When you have the Coverage Viewer open, and MC/DC or STATEMENT+MC/DC coverage is
initialized, if you hover your cursor over any fully or partially covered line of code, a tool tip displays the
test case(s) which cover the line.
For MC/DC or STATEMENT+MC/DC coverage, the left margin includes a red arrow that, when
clicked, opens the Equivalence Matrix below the associated subprogram in the Coverage Viewer.
Clicking the arrow again closes the panel. The matrices can also be viewed using the Environment =>
View => MC/DC Equivalence Report and viewing the MC/DC Condition Tables Report.
Extract MC/DC Equivalence matrices for all units or the specified unit to
standard output or the specified output file. If VCAST_CUSTOM_REPORT_FORMAT
is HTML and <outputfile> is not specified, the file is saved to mcdc_tables_
report.html.
If the report format is HTML, the report looks similar to the following:
The Source line # refers to the source code line in the Coverage Viewer that corresponds to the
expression being evaluated in the matrix. After noting the line number, right-click the unit in the Source
Files pane, and choose Open Coverage Viewer. Then, in the Coverage Viewer, type Ctrl+F (or
choose Edit => Find from the Menu Bar), and type in the line number.
For each boolean conditional there is a corresponding matrix. Each row in the matrix shows a unique
combination of values for the components of the complex conditional, as well as the result of the
condition for each combination. The equivalence pairs are pairs of rows which demonstrate that the
result of the conditional changes from True to False as one component changes, while all other
components are held constant.
The Equivalence Pair Matrix shows both the static pair analysis and the pair coverage achieved. The
Pair A through Pair n columns (where n is the number of sub-conditions) show candidate row pairs for
proving condition components. An asterisk next to a row number indicates that that row has been
covered. When an equivalence pair has been covered, the Pa through Pn summary lines reflect that
information.
The following example shows a single complex conditional with the corresponding equivalence pair
matrix.
4 0 (T) MCDC_Example
4 1 (T)(F) if ((
4 1.1 (T)(F) a &&
4 1.2 (T)( ) b ) ) {
local = 1;
} }
===== Unit: manager
===== Subprogram: MCDC_Example
===== Condition number 1
Source line: 63
Actual Expression is: ( a && b )
Condition "a" (Ca) is: a
Condition "b" (Cb) is: b
Simplified Expression is: ( a && b )
|-----+-----+-----+-----+-----+-----|
|Row |Ca |Cb |Rslt |Pa |Pb |
|-----+-----+-----+-----+-----+-----|
|*1 | T | T | T |3 |2 |
|-----+-----+-----+-----+-----+-----|
| 2 | T | F | F | |1 |
|-----+-----+-----+-----+-----+-----|
|*3 | F | T | F |1 | |
|-----+-----+-----+-----+-----+-----|
|*4 | F | F | F | | |
|-----+-----+-----+-----+-----+-----|
Pa = 1/3 => a pair was satisfied
Pb = 1/2 => no pair was satisfied
This source listing shows that the “if” condition has been executed with A=True, B=True (Row 1);
A=False, B=True (Row 3); and A=False, B=False (Row 4). The MC/DC branch coverage is 5/6. Five
out of a total of six outcomes have been satisfied, and the pair coverage is 1 of 2 pairs satisfied.
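The listing and matrix above correspond to a function along the following lines. This is a reconstruction for illustration only; the parameter and return types are assumptions, and only the condition `( a && b )` comes from the report itself.

```c
#include <assert.h>

/* Hypothetical reconstruction of the MCDC_Example subprogram shown in
 * the listing above; int parameters and return value are assumed. */
int MCDC_Example(int a, int b)
{
    int local = 0;
    if ((a && b)) {      /* condition 1: Ca = a, Cb = b */
        local = 1;
    }
    return local;
}
```

Rows 1 and 3 of the matrix differ only in the value of `a` while the result flips, which is why those two rows form the satisfied pair Pa.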
The Source line # refers to the source code line in the Coverage Viewer line that corresponds to the
expression being evaluated in the matrix. After noting the line number, right-click the referenced unit,
and choose Open in a New Window. Then type Ctrl+G (or choose Edit => Goto Line), and type the
line number.
This information can also be viewed by accessing the MC/DC tab of the Coverage window.
Extract the MC/DC path analysis for all units or the specified units to
standard output or the specified file. If VCAST_CUSTOM_REPORT_FORMAT is HTML
and <outputfile> is not specified, the file is saved to
<env>_basis_path_report.html.
Choose a source file or files first, then specify which reports you want to print in the Aggregate Report.
An example of the first page of an Aggregate Report that includes Metrics, Coverage, and Basis Paths
is shown below.
The reports are saved in HTML or TEXT format, depending on the setting of the Report Format option.
Note that saving a combined report is only available when the report format is TEXT.
You can choose which sections you want included in each unit’s report by checking the check boxes.
The MCDC option is dimmed unless one of the selected units is instrumented for MC/DC or DO-178B
Level A coverage type. Coverage and Metrics sections are included by default.
When you have selected the content, enter or browse to a directory in which to save the reports. If the
directory is invalid, it is outlined in red.
When you click OK, one report for each unit is saved in the currently selected format, HTML or TEXT.
The report names combine the environment name and the unit name: ENV_unit_report.html (or .txt).
When a report filename is used more than once, "Save Coverage Reports" inserts the unit number into
the report filename.
Note: If the selected source files have not yet been instrumented for coverage, Save Coverage
Reports will show as dimmed on the right-click menu.
Alternatively, from the Menu Bar, select Coverage => Code Coverage Summary => Code
Coverage Summary.
To override tracking of selected units in the Source Files pane, open the drop-down menu for the Data
Summary Report and select Options => Track Current Selection. Remove the check next to the
option. The tracking icon on the Summary table will change to gray to indicate that the summary is no
longer tracking the current selection.
The Code Coverage Summary table is dynamic. When the Track Current Selection option is enabled,
as units are selected and deselected in the Source Files pane, the Code Coverage Summary table
updates in real time reflecting the selections.
The data displayed in the Code Coverage Summary includes the Unit name, the Subprogram name, the
Cyclomatic Complexity (Vg), the Results Count (showing the number of coverage results that touch
each subprogram), and the achieved coverage for each coverage type.
Note: When I/O type is set to "Animation" (e.g. when using Basis Path coverage), the Test
Cases Count column indicates the number of times a function is entered. This total can also
include multiple slot iterations of a compound test.
The Totals row at the top of the table displays the totals for each data column. In the example above,
note that in the cover_demo environment we have a total of 80 statements, of which 47 statements are
covered and 33 are uncovered.
The Summary table updates whenever coverage data is updated. For example, the table refreshes
when coverage is initialized, or following reinstrumentation.
Double-clicking on a line in the Code Coverage Summary opens the corresponding UUT in the
Coverage Viewer.
Coverage data can be accessed all the way down to the individual subprogram level by filtering. Access
the filter by typing into the top row of any column. Clear the filter by right-clicking in the top row and
selecting Clear Filter from the context menu. In the example below, the table has been filtered to only
show data for test cases with Cyclomatic Complexity greater than 2.
Filtering supports the following symbols: <, >, =. For example, the summary can be filtered to show
only subprograms with a Cyclomatic Complexity greater than 2. Other examples of filtering inputs are:
10 - lists subprograms matching the specific value of "10" in the selected column
>50 - lists subprograms greater than the value of "50" in the selected column
<90 - lists subprograms less than the value of "90" in the selected column
=100 - lists subprograms matching the specific value of "100" in the selected column
< - lists subprograms with empty values in the selected column
> - lists subprograms with non-empty values in the selected column
= - lists subprograms with non-empty values in the selected column
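The filter inputs listed above can be sketched as a small predicate. This is an illustration of the documented behavior, not VectorCAST code; an empty cell is modeled as an empty string.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the column-filter rules listed above. */
static bool matches_filter(const char *filter, const char *cell)
{
    bool empty = (cell[0] == '\0');
    if (strcmp(filter, "<") == 0) return empty;   /* bare '<': empty values     */
    if (strcmp(filter, ">") == 0) return !empty;  /* bare '>': non-empty values */
    if (strcmp(filter, "=") == 0) return !empty;  /* bare '=': non-empty values */
    if (empty) return false;
    long value = strtol(cell, NULL, 10);
    if (filter[0] == '>') return value > strtol(filter + 1, NULL, 10);
    if (filter[0] == '<') return value < strtol(filter + 1, NULL, 10);
    if (filter[0] == '=') return value == strtol(filter + 1, NULL, 10);
    return value == strtol(filter, NULL, 10);     /* bare number: exact match   */
}
```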
Coverage Animation displays the flow of control from one unit to another, using the test’s coverage
data. Calls to stubbed units are not included in Animation. Not-stubbed units are included if they have
coverage initialized using Coverage => Initialize Custom.
Coverage can be animated for only one test case at a time. The item currently activated for
Animation has a check in the box to the left of its name in the Test Case Tree. If you uncheck that
item, the Animation toolbar dims until you activate another test result for animation.
Note: If the Animation toolbar is dim, activate a test case for animation.
After setting the Coverage I/O type to Animation, instrument for a type of coverage. If you want to see
each statement being executed, you would instrument for Statement coverage; if you want to see the
choice made at each branch point, you would instrument for Branch coverage. To see each entry point,
decision point, and statement covered, choose MC/DC, STATEMENT+MC/DC, or
STATEMENT+BRANCH.
Next, execute a test case. Right-click the test case in the Test Case Tree, and then choose Activate
Coverage Animation from the popup menu:
The Coverage Viewer for the unit containing the first covered line in the test result opens, with the
current line on the first covered line. The coverage checkbox next to the test case is checked.
If not already open, the Coverage Animation toolbar (pictured below) appears in the main toolbar.
Click the Play button to begin the animation. In the Coverage Viewer, the current line progresses
steadily from the first covered line to the last, advancing one step per second. The source code in
the Coverage Viewer scrolls up, keeping the current line in the middle. If a subprogram in another unit is
called and that unit has coverage initialized, then its tab comes to the front of the Coverage Viewer and
animation continues in that tab.
> the blue arrow indicates the current line in the animation of control flow
> Fast Forward moves the current-line indicator to the last line executed.
To Set a Breakpoint
Setting a breakpoint causes the animated display of the execution flow to pause on the line marked with
the breakpoint. To set a breakpoint, select the line to make it current, then click the Set Breakpoint
button.
> jump to the next breakpoint (or the end, if there is none)
> The same files must be used in both environments. VectorCAST compares the filenames and file
checksums to ensure that the files are the same or copies of each other.
> Only the units that are present in both environments will have their coverage imported.
> The same type of coverage must be used in both environments.
Select the results files from that environment to import. You can choose to import the other
environment’s test execution results, imported results, and Covered By Analysis (CBA) results.
Test execution results are listed as results/<result name>. Imported results in the other
environment are listed as results/IMPORTED_RESULTS/<result name>. CBA results are listed
as results/COVERED_BY_ANALYSIS/<result name>. These names correspond to the locations
where they are stored in the environment directory.
A filter is provided at the top of the pane making it easier to filter out some results when working with a
large set of test results. By entering text or a regular expression in the <<filter>> field, the user can filter
by name.
When you are ready to import the coverage results, click the Import button. To exit the dialog box
without importing, click the Cancel button.
After the import is complete, a Coverage Import/Export log is displayed showing a series of status (S)
or error (E) messages. Some of the messages you may see in the log file:
Not an error. The coverage results that were imported refer to a unit that is not present in the
environment to which the results are being imported. This situation is typical if <file> is stubbed in
the environment to which the results are being imported, but not-stubbed in the other environment.
Therefore, the coverage data for <file> are just being ignored.
> (E) Coverage not on for source file
The source file exists in the environment to which results are being imported, but coverage is not
initialized for that unit. An example is when there are non-stubbed dependent units in the
environment but only the UUT was initialized for coverage. Use Initialize Custom to enable
coverage for non-stubbed dependent units.
> (E) No match found for source file
Although the file names may be the same, the checksums of the source files differ.
> (E) Coverage types differ
The type of coverage in the importing environment is different than the type of coverage in the
environment to which results are being imported. They must match for a successful import.
> (E) No translatable data for result
A result was found in the importing environment, but the data did not match any source file in the
current environment.
> (S) Coverage data was loaded
A result file was successfully imported.
The Import Log file is named IMPORT.LOG and resides in the environment directory.
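A tool post-processing IMPORT.LOG could rely on the (S)/(E) prefixes described above. The following classifier is my illustration of that convention, not part of VectorCAST.

```c
#include <assert.h>
#include <string.h>

/* Classify one IMPORT.LOG line by its documented prefix:
 * "(S)" = status, "(E)" = error, anything else = unknown. */
static char classify_log_line(const char *line)
{
    if (strncmp(line, "(S)", 3) == 0) return 'S';
    if (strncmp(line, "(E)", 3) == 0) return 'E';
    return '?';
}
```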
The Coverage Results pane is a filterable table, making it easier to filter out some results when working
with a large set of test results. By entering text in the <<filter>> field, the list can be filtered by name.
Partial matches are supported.
By default, all results are displayed in the Coverage Results pane. Filter buttons are provided at the top
of the Source column. For environments with test results and imported results, a filter button is
displayed and the options are: All, Imported, and Not Imported. For environments with test results and
CBA results another filter button is displayed, and the filter options are: All, CBA results, and Test
results. To filter by source, click the appropriate drop-down menu in the Source column and select the
desired filter.
There are several options. Choose Coverage => Export Script => Imported Results to create a
script containing only results that were imported to the environment. VectorCAST looks in the
<env>/results/IMPORTED_RESULTS directory for the data to export. Choose Coverage =>
Export Script => Testcase Results to create a script containing only results that are "native" to the
environment. For Cover environments, that refers to test results added after executing the instrumented
application. VectorCAST looks in the <env>/results directory for the data to export.
This script can be imported to VectorCAST Cover or into another VectorCAST environment.
VectorCAST automatically generates a coverage script when regression test scripts are created if
coverage results have been imported.
After the script file is exported, a Coverage Import/Export log is displayed showing a series of status (S)
or error (E) messages. Some of the messages you may see in the log file:
Create a coverage script containing test execution results that are present
in <env>, including imported results. Coverage scripts have the extension
.cvr.
Create a coverage script containing only imported coverage results that are
present in <env>.
Create a coverage script containing only test execution results that are
present in <env>.
Any errors that occur during importing of a coverage script are logged in a file named Import.log. To
view this file, choose Coverage => View Import Log.
Clicking No cancels the operation without removing the imported coverage results. Clicking Yes
removes the imported coverage results from the environment and the IMPORTED_RESULTS directory
from the environment directory.
Remove specified result file, Imported results (not CBA results), or ALL
result files (including imported results and CBA results).
The following table lists the options useful to VectorCAST Cover environments, and which file they are
stored in. For more information, click on the Option Name.
Coverage tab, Options sub-tab (CCAST_.CFG):
> Save data in ASCII format in memory
> Maximum size for ASCII buffer
> Dump buffered coverage data on exit
> Use static memory allocation on the target
Coverage tab, Options sub-tab, Instrumentation Options sub-tab (CCAST_.CFG):
> Show True and False in 'for' loops
> Instrument for function call coverage
> Instrument for function and function call coverage
> Instrument blocks for statement coverage
> Instrument implicit default case for switch-case blocks
> Consider case fallthrough for branch coverage
> Treat initialization of data couple as write
> Treat C++ catch blocks as branches
> Show inlines as covered in all units
> Generate basis paths for constant branch conditions
> Avoid using comma operator in declarations
Coverage tab, Options sub-tab, MC/DC sub-tab (CCAST_.CFG):
> Abort instrumentation when absolute MC/DC subconditions limit exceeded
> Set absolute limit for MC/DC subconditions to 31
> Maximum subconditions for MC/DC table pre-calculation
> Maximum subconditions for MC/DC table row display
Coverage tab, Options sub-tab, Misc sub-tab (CCAST_.CFG):
> Maximum coverage database cache (MB)
> Asm functions behave as inlines (C/C++ only)
Coverage tab, Options sub-tab, Result Changes sub-tab (.vcast_qt.ini):
> Keep Notes and Requirements - Imported Data
> Keep Notes and Requirements - Test Data
Coverage tab, Options sub-tab, Input/Output sub-tab (CCAST_.CFG):
> Store Coverage Results in Directory
> Append to Coverage Data File
> Execution results go to stdout (C/C++ only)
Coverage tab, Cover Environment sub-tab:
> Auto generate test result names (env.vcp)
> Append to coverage data file (env.vcp and CCAST_.CFG)
With an environment open, choose Tools => Options. Alternatively, click the button on the
toolbar and click the C/C++ tab.
Tip: Selecting a Compiler Template loads in settings for other edit boxes automatically. You can
modify these settings if necessary.
Compiler Template
Choosing a compiler template results in all options being set to default values for that compiler. There
are two ways to specify a compiler template:
> Click the Choose Compiler button to display a cascading menu of compilers supported by
VectorCAST.
> Choose from the drop-down list of compiler templates to the right of the Choose Compiler
button.
When using C++ language source files, you must choose a template that has “C++” following the
name. For example, when using C source files and the GNU 3.3 compiler, choose Choose Compiler
=> GNU Native => 3.3 => C. When using C++ files, choose the template Choose Compiler =>
GNU Native => 3.3 => C++, as shown in the figure below.
Set the compiler to <template_name>, and set all related options as well. To
get a list of legal templates, type clicast -lc TEMPLATE all.
Preprocessor/Compiler Tab
The Preprocessor/Compiler sub-tab on the C/C++ tab contains settings pertinent to the compiler and
preprocessor used to compile and link C and C++ source files. Most settings are specified by the
compiler template, but you can add your own defined variables to the compiler command. In addition,
the default list of Search, Library Include, and Type-handled directories can be specified here, to be
used each time a new Create New Environment wizard is started.
Preprocessor Command
Choose Tools => Options, and click the C/C++ tab. Then click the Preprocessor/Compiler tab.
<command> is the command used to preprocess C/C++ source files. Its default
value is set by the compiler template.
Include Flag
Choose Tools => Options, and click the C/C++ tab. Then click the Preprocessor/Compiler tab.
The command line option used to specify include paths to the compiler or preprocessor.
<flag> is the command line option used to specify include directories when
compiling C/C++ files, such as “-I”, “/I”. Its default value is set by the
compiler template.
Define Flag
With an environment open, choose Tools => Options, and click the C/C++ tab. Then click the
Preprocessor/Compiler tab.
The command line option used to specify defined variables to the compiler or preprocessor.
In CLICAST and in the CCAST_.CFG file, this option is C_DEFINE_FLAG. Its default value is set by
the compiler template.
Compile Command
Choose Tools => Options, and click the C/C++ tab. Then click the Preprocessor/Compiler tab.
<command> is used to compile C/C++ harness files. Its default value is set by
the compiler template.
Command line option for the compiler to create an object file. This option is used with the Visual Studio
compiler when using the Stub None option.
Preprocessor File
Choose Tools => Options, and click the C/C++ tab. Then click the Preprocessor/Compiler tab.
Template for the name of the file created by the preprocessor (applicable to certain compilers). Consists
of preprocessor output file name with "?" in place of source file name. For example, if the compiler
convention is to preprocess the file 'manager.c' into 'manager.I', the value for this option would be '?.I'.
This option is only needed if your compiler does not send preprocessor output to stdout by default.
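The '?' substitution can be illustrated with a short sketch. This is an assumption about the mechanics, matching the 'manager.c' to 'manager.I' example above; it is not VectorCAST code.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Expand a preprocessor-file template such as "?.I" for a source file:
 * '?' stands for the source file name without its extension. */
static void expand_template(const char *tmpl, const char *source,
                            char *out, size_t outsz)
{
    char base[256];
    snprintf(base, sizeof base, "%s", source);
    char *dot = strrchr(base, '.');
    if (dot) *dot = '\0';                 /* "manager.c" -> "manager" */

    const char *q = strchr(tmpl, '?');
    if (q)                                /* splice base name over '?' */
        snprintf(out, outsz, "%.*s%s%s", (int)(q - tmpl), tmpl, base, q + 1);
    else
        snprintf(out, outsz, "%s", tmpl);
}
```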
Include Directories
With an environment open, choose Tools => Options, and click the C/C++ tab. Then click the
Preprocessor/Compiler tab.
The search, library include, and type-handled directories are useful when you want to pre-process
C/C++ source code units before instrumenting. A compiler template must be specified, as well as
source directories to enable VectorCAST to find the header files it needs to complete the
preprocessing.
To add a directory to the list, click the Add button or Add Recursively button. The Add Source
Directory dialog appears; browse to the directory you want. To delete a path, first select the item you
want to delete, then click the Delete button.
Once a directory is added, you can change the type. Right-click and choose:
> Set as Search directory, to make the path a default search directory for testable source
units. It has an ‘S’ icon.
> Set as Library include directory, to make the path a default library include directory. It has
an ‘I’ icon.
> Set as Type-handled directory, to make the path a default directory in which VectorCAST
searches for types only, if needed. It has a ‘T’ icon.
The VectorCAST parser searches the directories in order, from top to bottom. You can adjust the order
that the directories are listed by selecting a directory and pressing Ctrl+Up-Arrow or Ctrl+Down-Arrow.
Note: On Windows, directories with spaces in the path name are not supported. If you enter a
path with a space, VectorCAST converts the paths to a DOS-safe path.
Directory containing source code files that you would like to test or stub.
You can have more than one instance of this command in the CCAST_.CFG file,
each specifying a different directory.
Directory containing source code files that you would like to parse for type
information. Any entities residing here will not be defined by VectorCAST,
and therefore must be linked in through a library. <type-handled source
directory> becomes a default type-handled directory in the Create New
Environment wizard. You can have more than one instance of this command in
the CCAST_.CFG file, each specifying a different directory.
Defined Variables
Choose Tools => Options, and click the C/C++ tab. Then click the Preprocessor/Compiler tab.
This option provides a list of preprocessor variables and definitions that are used when compiling the
test harness. To add a variable to the list, click the Add button. A dialog appears, with an edit box
for you to type the variable name and value. To enter a defined variable name that contains spaces,
enclose the name in quotes, as in “one two”. When you are done, click OK.
To delete a defined variable, first select the item you want to delete, then click the Delete button.
To edit an entry after it has been added, double-click it. Change the text and press Enter.
VectorCAST provides a customization feature which enables a user-defined function to be called at the
very end of test harness processing, right before the exit() function is called. To use this feature, add
the macro VCAST_CUSTOM_END to the defined variables list, for example VCAST_CUSTOM_END=myEnd,
where myEnd is a function with C linkage and with the signature: 'void myEnd(void);'.
Alternatively, a #define directive in User Code can be used to define the macro, or VCAST_
CUSTOM_END can be added to the Compile command as a command line macro.
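A conforming end function might look like the following sketch. Only the name myEnd and the signature come from the text above; the body and the flag are illustrative.

```c
#include <assert.h>
#include <stdio.h>

static int cleanup_ran = 0;   /* illustrative flag, not part of the feature */

/* End-of-harness hook with C linkage and the required signature
 * 'void myEnd(void);'. It would be registered by adding
 * VCAST_CUSTOM_END=myEnd to the defined variables list. */
void myEnd(void)
{
    cleanup_ran = 1;          /* e.g. flush logs, release hardware */
    fputs("test harness finished\n", stderr);
}
```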
By default, the VectorCAST test harness #defines exit to VCAST_exit. If your source code #includes
the C standard library (which #undefines exit), you may see a compile or link error while building the
environment such as “undefined symbol VCAST_exit” or “undeclared ‘exit’.” Add the defined variable
VCAST_DONT_RENAME_EXIT to the list of defined variables, and then either recompile or relink the
test harness (Environment => Recompile => Automatic or Environment => Relink).
The Parse Command Line button is convenient when you have a complex set of compiler options for
the source code under test. This feature enables you to paste or type your compile command line into
VectorCAST, and then have VectorCAST parse out the include paths and defined variables for you.
Once that is accomplished, you can then add more or delete some, as needed to complete the
configuration.
For example, you may be able to open the log file from the execution of your make and copy the compile
line. Then, in VectorCAST, click the Parse Command Line button, and paste the text into the upper text
edit area of the dialog that is displayed.
After pasting the command line, click the Parse button to have VectorCAST process the command.
The Includes tab and Defines tab are populated with the information extracted from the command line.
For example, if you are using the gcc compiler, and your normal compile command is:
gcc
-I/home/Qt/qt-latest/include
-DBUFFER_SIZE=1024
-DUNIX
-DLINUX
-I/home/TOOLS/libxml2/libxml2-2.6.10-1/include/libxml2/libxml
Paste it into the command line text edit area, as shown below.
Click the Parse button. Using the Include Flag and Define Flag specified on the C/C++ tab of the
Options dialog, VectorCAST parses the command line. The parsing process finds any include paths
and places them, one per line, in the Include tab in the lower part of the dialog, and finds any defined
variables and places them in the Defines tab. The result of our example command line is shown below,
for both the Includes tab and the Defines tab.
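The classification the dialog performs can be sketched per token. This is my illustration only; VectorCAST's actual parser uses the configured Include Flag and Define Flag, which are hard-coded here as -I and -D to match the gcc example above.

```c
#include <assert.h>
#include <string.h>

/* Report whether one compile-command token is an include path,
 * a defined variable, or something else. */
static const char *classify_token(const char *token)
{
    if (strncmp(token, "-I", 2) == 0) return "include";
    if (strncmp(token, "-D", 2) == 0) return "define";
    return "other";
}
```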
By default, each item is selected. When you click OK, items that are selected on the Includes tab are
saved as Search directories to the “Default source directories for Wizard” option, on the C/C++ tab.
Items that are selected on the Defines tab are saved to the “Defined variables” option on the C/C++
tab.
To eliminate an item, unselect it. When you have selected the items you want from the Includes tab
and the Defines tab, click OK.
The Parse Command Line dialog closes, and the Include paths and Defined variables are now present
in their respective edit boxes.
The Test Settings dialog enables you to “try out” the compiler settings on a source file. Once you have
chosen a compiler, filled in the Library Include directories and defined variables etc., use the Test
Settings button to determine if these compiler settings will work when you later attempt to build an
environment. Some reasons that the compiler settings may not work are:
The Test Settings dialog is used to test the settings for the preprocessor, compiler, and the
QuickParser. Choose an action from the Functionality to Test drop-down list.
When Preprocessor is selected, the preprocess command, with any defines or includes, is displayed
in the “Preprocess Command” box. When Compiler is selected, the compile command is displayed in
the “Compile Command” box. When Parser is selected, the Parser Flags are displayed in the “Parser
Flags” box.
Tip: The command box is editable; if an action fails, you can try out different settings until you
get a successful preprocess, compile, or QuickParse. When you are done, you must transfer any
changes to the C/C++ tab manually.
Some defined variables may be added by VectorCAST based on options set in the Builder tab. If you
have an environment open, then all paths in the Source directories are listed here as -I<search dir>.
When Parser is selected, the Parser flags are displayed in the “Parser Flags” box.
Clicking the browse file icon opens a dialog enabling you to choose a source file. The path to
the file is displayed in the “File to Process:” edit box. Alternatively, you may type a full path to a source
code file to preprocess or compile. Spaces are not allowed in the path. You may use a relative path to a
file, relative to the current working directory, whose path is visible in the status bar of VectorCAST’s
main window.
Click the Test button. VectorCAST uses the displayed command to preprocess or compile the file. A
message dialog appears indicating whether the command was successful or failed.
After acknowledging the status, use the Stdout and Stderr tabs to diagnose any problems. In the
example, below, a non-existing file (“bogus_header.h”) was #included in the source file unit. The
preprocess failed, since the preprocessor could not find that header file. The error is listed in the Stderr
tab, as shown below.
From the Stdout and Stderr tabs, you can select the text, save the output to a file, or open the source
file. Right-click and choose Select All or Open <filename>.
When you are done testing the compiler settings, click the Dismiss button. You return to the Tools =>
Options dialog, C/C++ tab. If you made any changes to the command being tested, then you must
transfer these changes to the C/C++ tab manually.
This functionality is also available in Help => Diagnostics => Test Compiler Settings.
Linker/Debug Tab
The Linker/Debug tab on the C/C++ tab provides additional options for the compiler’s linker and for
using the compiler’s debugger during test case execution. You can specify the link files for the two test
harness executables here (UUT_INTE and UUT_INST), or a Green Hills integrate file and Green Hills
intex utility command.
The command line flag to set the name of the linked executable.
File extension used by C/C++ compiler to specify object files, such as “.o”,
“.obj”. Its default value is set by the compiler template.
Linker Command
Choose Tools => Options, and click the C/C++ tab. Then click the Linker/Debug tab.
<command> is used to link object files into an executable C/C++ harness. Its
default value is set by the compiler template.
Linker Options
Choose Tools => Options, and click the C/C++ tab. Then click the Linker/Debug tab.
Any miscellaneous options to pass to the linker, such as –lc to link in the standard C library.
Additional link options used when linking C/C++ harness. Its default value is
set by the compiler template.
Green Hills intex Utility Command and Green Hills Integrate File
Choose Tools => Options, and click the C/C++ tab. Then click the Linker/Debug tab.
VectorCAST invokes the Green Hills intex utility with this command immediately after linking the test
harness. For example:
where vcast.int is a custom integrate file. Your custom integrate file must contain the line
Filename VCAST_FILE to specify the address space where the VectorCAST test harness runs. The
default integrate file vcast.int, located in the $VECTORCAST_DIR/DATA directory contains the
following text:
Kernel
    Filename DynamicDownload
EndKernel
AddressSpace vcast
    Filename VCAST_FILE
    MemoryPoolSize 0x30000
    Language C++
    Task Initial
        StackLength 0x8000
        StartIt false
    EndTask
EndAddressSpace
See the option “VCAST_GH_INT_FILE” for information on specifying a custom integrate file.
VectorCAST invokes the Green Hills intex utility with <intex command>
immediately after linking the test harness. <intex command> does not have a
default value.
This is the custom integrate file passed to the Green Hills 'intex' command.
This file should follow the general format of the default file found in the
VectorCAST installation directory, which means it should contain a 'Filename'
line with the text VCAST_FILE (to be replaced with the VectorCAST executable
name) and a 'StartIt' line with the value 'true'.
Debugger Command
Choose Tools => Options, and click the C/C++ tab. Then click the Linker/Debug tab.
The name of your debugger. This command will be executed when you choose “Execute with Debug.”
Command used to invoke C/C++ debugger. Its default value is set by the
compiler template.
This option causes VectorCAST to bring up a shell window to run the debugger.
Its default value is false except for the GNU Native and SCORE compilers.
Language Tab
The Language tab on the C/C++ tab provides options to specify the language mode, the file extensions
for header files, C source files, C++ source files, and assembly files, as well as the compiler’s parser
flags. The options here are set by the compiler template, but can be modified.
Language Mode
Choose Tools => Options, and click the C/C++ tab. Then click the Language tab.
The language mode option puts the VectorCAST quick-parser into C or C++ mode, and determines the
file extension of the generated test harness source files. If the compiler template has “(C++)” after the
name, then the language mode is set to C++. Otherwise, it is set to “C”.
Header Extensions
Choose Tools => Options, and click the C/C++ tab. Then click the Language tab.
Any file with any of the listed extensions will be treated as a source code header file. Typical header file
extensions are supported by default. The default values are: .h, .H, .hdl, .hpp, .hxx,
.Hxx, .HXX, .inc. Additional extensions can be added to the list by clicking the Add button,
allowing you to specify an atypical header file extension.
VectorCAST provides the ability to specify and instrument header files without a file extension. This is
accomplished by clicking the Add button and selecting <<NO_EXTENSION>> from the extension
drop-down menu.
When entering a new extension, VectorCAST adds a “.” character if one is not provided. There must be
at least one Header extension in the list.
List of file extensions indicating C/C++ header files. Typical extensions are
supported by default; this option is only needed when header files do not
follow normal coding conventions. Its default values are: .h, .H, .hdl, .hpp,
.hxx, .Hxx, .HXX, .inc. To include headers with no extensions, the value
<<NO_EXTENSION>> may be used.
C Extensions
Choose Tools => Options, and click the C/C++ tab. Then click the Language tab.
Any file with any of the listed extensions will be treated as a source code file. Additional extensions can
be added to the list by clicking the Add button, allowing you to indicate that files with a given extension
are to be treated as source code files.
When entering a new extension, VectorCAST adds a “.” character if one is not provided.
If the Language Mode is C, then there must be at least one C extension in the list. Additionally, the C++
extensions list is dimmed.
Use this option to specify the C file extensions. Its default value is: c.
C++ Extensions
Choose Tools => Options, and click the C/C++ tab. Then click the Language tab.
Any file with any of the listed extensions will be treated as a source code file. Additional extensions can
be added to the list by clicking the Add button, allowing you to indicate that files with a given extension
are to be treated as source code files.
When entering a new extension, VectorCAST adds a “.” character if one is not provided.
If the Language Mode is C++, then there must be at least one C++ extension in the list. The C file
extensions list is enabled in order to identify C source files in a mixed C and C++ environment.
Use this option to specify the C++ file extensions. Its default values are:
cpp CPP c++ C++ C cxx CXX cc CC. (On Windows, the default values are: cpp c++
cxx cc.)
Assembly Extensions
Choose Tools => Options, and click the C/C++ tab. Then click the Language tab.
List of file extensions indicating assembly code. These extensions are used to determine whether any
startup files should be assembled. If a startup file has a file extension specified on this list, it is
assembled before being used to start up the target device. Additional extensions can be added to the list
by clicking the Add button, allowing you to indicate that files with a given extension are to be treated as
assembly code files.
When entering a new extension, VectorCAST adds a “.” character if one is not provided.
Use this option to specify the file extensions for assembly files,
particularly paths to assembly files specified by VCAST_STARTUP_FILE. Any
startup file that is detected to be an assembly file is assembled before
being used to start up the target device. Its default values are set by the
compiler, typically s or asm.
If your compiler uses a different extension than the default extensions provided, additional extensions
can be added using the “Add” button. When adding source files recursively, any file matching one of
these extensions is recognized as a source file.
Parser Flags
Choose Tools => Options, and click the C/C++ tab. Then click the Language tab.
Flags to pass to the EDG parser. This option should only be modified under direction of VectorCAST
Technical Support.
This option processes all function templates during parsing. Set this option to allow coverage
instrumentation for template-based functions. Enabling this option will cause uninstantiated templates
to be processed and possibly treated as instantiated. Such processing may reveal source code errors
not reported by a compiler, in which case this option should be disabled or the source code should be
modified.
Misc Tab
The Misc tab on the C/C++ tab provides additional options for the compiler and preprocessor. Each
option here is set by the compiler template.
This option removes the extraneous preprocessor comments that some compilers (preprocessors) put
at the beginning and/or end of a preprocessed file. For example, GCC 3.2 puts the comment # 1
“<built-in>” at the beginning of each preprocessed file. Turning this option on strips these
extraneous comments. By default, this option is enabled for all compilers.
This option is set by the compiler template. Its default value is true.
This option is needed for compilers whose preprocessor does not adequately mark the start and end of
header files in the translation unit. VectorCAST needs reliable line directives to correctly determine
dependency information. Note that this option does not replace the use of an external preprocessor.
Additional processing is performed both before and after the normal preprocess to produce a translation
unit with line directives that accurately mark the start and end of each file section.
This option is set by the compiler template. Its default value is false
except for some target compilers.
Enable this option if your compiler’s preprocessor does not escape backslashes in line directives in
preprocessed files.
This option is set by the compiler template. Its default value is false,
except for some targets such as NEC, Keil, and TriMedia.
The “new generation” preprocessor replaces the VectorCAST preprocessor. This option is needed for
compilers whose preprocessor does not retain comments or removes whitespace in the translation unit.
This option is set by the compiler template. Its default value is false
except for some target compilers.
This option allows you to specify a file that will be #included to source files during preprocessing.
The default value is none, with the exception of Code Composer Studio C6xxx
compilers.
Post-Preprocess Command
Choose Tools => Options, and click the C/C++ tab. Then click the Misc tab.
This option allows you to specify a command to be executed after preprocessing a file. VectorCAST
will pass 3 arguments to this command: the original file path, the preprocessor output file path, and the
path to an output file that does not yet exist. If the external command creates the new output file, it will
be used in place of the original preprocessor output, such as for parsing. It will also become the basis
for any instrumentation, such as for code coverage. This option can be used to specify a custom
external script which transforms non-standard code constructs into code suitable for the VectorCAST
parser. When possible, modify original source files directly rather than use this option.
During environment build, whenever VectorCAST modifies a unit for stub-by-function, Visual Studio
whitebox, or code coverage, it may first preprocess the unit to expand macros and header files. This
option tells VectorCAST which expanded header files should be collapsed (replaced with the original
#includes) before compiling.
The default setting for this option is set by the compiler template.
Choose None if your code contains macros defined in search directory files, but which affect
compilation of code in non-search directory headers. Setting the option to None avoids collapsing any
files. None is selected for Microsoft compilers.
Choose either System Headers or Non-search directory headers if the headers contain code that
cannot be compiled unless physically located in its original file location. These settings are selected by
most compiler templates and collapse files not in Library Include or Type-Handled directories.
Note that changing the value for VCAST_COLLAPSE_STD_HEADERS could also impact test
harness compilation.
Note: Contact VectorCAST Technical Support before changing the value of this option.
Environment Files
Choose Tools => Options, and click the C/C++ tab. Then click the Misc tab. This option specifies the
files that need to be copied into the environment directory during environment build. The default value is
set by the compiler template.
To add a path, click the Add Path button. Browse to the location of the environment file, and click
OK. To modify a path, double-click it to make it editable. You can include environment variables in the
format $(ENV_VAR) by editing the CCAST_.CFG file. To delete a path, select it and click the Remove
Path button.
<path> is the full path to a file that needs to be copied into the
environment directory during environment build. Multiple paths are separated
by a comma. Its default value is set by the compiler template.
The settings on the Mixed C/C++ tab are used when compiling and linking C source files in a C++
environment. Each option here is set by the C++ compiler template.
C Preprocessor Command
Choose Tools => Options, and click the C/C++ tab. Then click the Mixed C/C++ tab. If it is not
enabled, change the compiler to a C++ version, one with “(C++)” at the end of the name.
The “C preprocessor command” option specifies the command to be used to preprocess C source files
in a C++ environment.
C Compile Command
Choose Tools => Options, and click the C/C++ tab. Then click the Mixed C/C++ tab. If it is not
enabled, change the compiler to a C++ version, one with “(C++)” at the end of the name.
The “C compile command” option specifies the command to be used to compile C source files in a C++
environment.
Command used to compile C files in C++ environments. Its default value is set
by the C++ compiler template.
C Parser Flags
Choose Tools => Options, and click the C/C++ tab. Then click the Mixed C/C++ tab. If it is not
enabled, change the compiler to a C++ version, one with “(C++)” at the end of the name.
The “C parser flags” option specifies the flags to pass to the QuickParser when parsing C source files in
a C++ environment.
Flags to pass to the EDG parser when parsing a C file in a C++ environment.
Its default value is set by the C++ compiler template.
The compile flag (such as -c) is used to detect a compilation command when parsing build output in the
Compiler Integration wizard.
Arguments matching these flags will not be captured by the Compiler Integration Wizard. For example,
/OUT:. If a specified flag ends with **, then the ** is removed and the flag is considered to take an
argument, which will also not be captured.
<flag> is the text used to identify a compilation command flag that you do
not want to have captured when parsing build output. Its default value is set
by the compiler template.
Instructions for debugging the test harness. Click in the field to enter edit mode and enter the name or
the path to the help file. The information is stored in the CCAST_.CFG file.
Instructions for executing the test harness. Click in the field to enter edit mode and enter the name or
the path to the help file. The information is stored in the CCAST_.CFG file.
Configurator Arguments
Choose Tools => Options, and click the C/C++ tab. Then click the Compiler Integration tab and the
Compiler Integration Wizard options sub-tab.
Enter compiler arguments for the Python configurator. Click in the field to enter edit mode and enter the
arguments. The information is stored in the CCAST_.CFG file.
Indicates the compiler family being used. When using the VectorCAST compiler configuration utility,
this field is automatically generated. The information is stored in the CCAST_.CFG file.
Source of default compiler options. Use the pull-down menu to select the option. Options are
BUILT_IN_TAG, PY_CONFIGURATOR, CONFIG_FILE_62, and CONFIG_FILE_63. The information is
stored in the CCAST_.CFG file.
Assembler Command
Choose Tools => Options, and click the C/C++ tab. Then click the Compiler Integration tab and the
Target Compiler Options sub tab.
The command and options to call the assembler with the startup file.
<command> is the command to call the assembler. Its default value is set by
the compiler template.
Precompile Command
Choose Tools => Options, and click the C/C++ tab. Then click the Compiler Integration tab and the
Target Compiler Options sub tab.
The command called before compiling the C/C++ test harness files. This command is only used if your
compiler has a two-stage compilation process. After the precompile command is run, a file with the pre-
compile extension is produced, and then the compile command is run on that file.
<command> is called before compiling the C/C++ test harness files. Its
default value is set by the compiler template.
Precompile Extension
Choose Tools => Options, and click the C/C++ tab. Then click the Compiler Integration tab and the
Target Compiler Options sub tab.
The files resulting from calling the precompile command have a file extension
<ext>. Its default value is set by the compiler template.
Startup File
Choose Tools => Options, and click the C/C++ tab. Then click the Compiler Integration tab and the
Target Compiler Options sub tab.
This option specifies the file(s) containing startup code for your target. The default value is set by the
compiler template. For some compilers, this option’s value includes several files.
To add a path, click the Add Path button. Browse to the location of the startup file, and click OK.
To modify a path, double-click it to make it editable. You can include environment variables in the
format $(ENV_VAR) by editing the CCAST_.CFG file. To delete a path, select it and click the Remove
Path button.
<file> is the path to a startup file, which may have spaces in its path. More
than one occurrence of this option may appear in the CCAST_.CFG file, one
line for each startup file.
The Coverage tab, Options sub-tab is used to set coverage options for unit test environments and
VectorCAST Cover environments. When used for Cover environments, these options are frequently
stored in the CCAST_.CFG file. Some options require reinstrumentation to take effect; others require
recompiling the test harness.
Coverage Perspective
Choose Tools => Options and click the Coverage tab, then click the Options sub-tab.
This option allows the user to view code coverage from either the Translation unit perspective or the
Source file perspective. The default is Translation unit perspective.
> Translation unit: Default. Selecting this option computes and displays coverage based on the
translation unit, as a result of preprocessing. Changing this option does not require
reinstrumentation.
> Source file: Selecting this option computes and displays coverage based on the Source file.
Base directories are required to enter Source file perspective mode. If this option is disabled,
choose Environment => Update Environment and then click Update to configure the Cover
environment for Base directories. Changing this option does not require reinstrumentation.
When True, this option directs VectorCAST to compute and display coverage
based on the source file. When False, coverage is computed and displayed
based on the translation unit, which is a result of preprocessing. The
default is False.
Three coverage I/O types are supported: Real-time, Buffered, and Animation. Changes to the Coverage
I/O type take effect upon coverage instrumentation.
> Real-time: Default. Coverage data is gathered as the test runs, and output to the results file
immediately. This I/O type is optimized.
> Buffered: Coverage data is stored and then output when the test finishes executing. This method
is useful if your target platform is slow. This I/O type is optimized. Buffered I/O gives better
performance, but requires more memory than Real-time coverage I/O. The buffered I/O type
requires that you add a call to flush the stored coverage data to the results file. At the end of your
source code, call VCAST_DUMP_COVERAGE_DATA(), or set the option 'Dump
buffered coverage data on exit.' If using Buffered I/O with Coupling, see "Run Test Cases and
Import Trace Data" on page 291.
> Animation: This I/O type is not optimized. Instead, coverage data is gathered in the order
encountered, in preparation to animate the test result’s coverage in the Coverage Viewer. The
animated coverage results reveal the flow of control.
Type of coverage I/O that the instrumented harness will perform. The default
value is Real_Time.
The option “Save data in ASCII format in memory” enables VectorCAST to store coverage data in an in-
memory data array (inside the instrumented executable), instead of writing the data to disk. When this
option is set, VectorCAST calculates the size of the array needed, which depends on the type of
coverage and the complexity of any MC/DC expressions that are being tested. The size of the in-
memory array is configurable.
This option can be used with any Coverage I/O type, but the most benefit occurs when using buffered
Coverage I/O. By default, VectorCAST stores the coverage data in a set of data structures (not in
ASCII format) and then writes the data to disk when the application exits. With this option on, the data
are stored in ASCII (readable) format and then written to the in-memory data array by a call to VCAST_
DUMP_COVERAGE_DATA() in the source code of a Cover environment. In addition, you will need a
method to read the in-memory buffer after the instrumented application has exited. (You can do this by
printing the buffer’s data to the TESTINSS.DAT file.) This option is written to the CCAST_.CFG file.
It is not recommended to use this option when “Dump buffered coverage data on exit” is on.
When opening a VectorCAST environment prior to version 4.1j, a message appears instructing you to
re-instrument all source before initializing coverage.
If "Save data in ASCII format in memory" is True, VectorCAST must allocate a specific amount of
space to store coverage data in. The default value is calculated during coverage processing, and
usually is large enough to contain the data. However, the default value of the buffer may be too small if
your code contains large MC/DC expressions and you have tests that cover a majority of the sub-
expressions. In this case, the buffer may overflow and you should increase this option. See the file
INVALID_COVERAGE_LINES.LOG, in the environment directory, for more information.
To find the default size of the memory buffer, look in the file vcast_c_options.h, located in the
environment directory after instrumenting. The information is similar to:
#define VCAST_USE_BUFFERED_ASCII_DATA 1
#define VCAST_MAX_CAPTURED_ASCII_DATA 662
To increase the size of the buffer, set the new value for this option to some number greater than 662 (in
this example).
The number <bytes> tells VectorCAST how much memory should be set aside for
buffered ASCII data. This option should be modified only when using MC/DC or
DO-178B Level A coverage types.
This option is enabled only when the Coverage I/O Type is Buffered. Setting this option forces the
coverage data to be dumped to the TESTINSS.DAT file when the test harness or instrumented
application terminates (it essentially calls vcast_dump_coverage_data() for you). Because the
mechanism used is the system atexit() callback, destructor coverage data for global objects is included
in the dump. This option does not cause Coupling data to be dumped. See "Run Test Cases and Import
Trace Data" on page 291.
This option is enabled only when the Coverage I/O Type is Buffered. Setting this enables the vcast_
clear_coverage() function in the harness or instrumented application. You can call this function from
your source code while the instrumented application is running to clear the currently collected coverage
data from memory. This function can be combined with vcast_dump_coverage_data() in such a way
that you can dump the coverage data from memory to the TESTINSS.DAT file at any time during
application execution, then clear the coverage data, and repeat the process until you've collected all the
necessary coverage data.
When the “Use Static Memory Allocation on the Target” option is checked, VectorCAST uses a static
memory model when allocating memory for coverage data. By default, the instrumentation routines
allocate memory as needed (dynamic). If you choose the static memory model, you can specify limits
for the maximum number of subprograms and MC/DC expressions for which memory should be pre-
allocated.
This option tells VectorCAST that only static data allocation can be used on
the target platform. This is used when collecting coverage data. If this is
set to True, then use the options VCAST_MAX_MCDC_STATEMENTS and VCAST_MAX_
COVERED_SUBPROGRAMS to configure the static allocation.
To set the “Maximum covered subprograms” option, you must check the box next to “Use static
memory allocation” and use the Buffered Coverage I/O type.
The “Maximum covered subprograms” option tells VectorCAST how many unique subprogram calls it
should allocate memory for. If a test case or compound test case execution encounters 500 different
subprograms calls, and the option is set to 499 or lower, then a text error message is added to the
coverage data indicating that the limit was reached. This text error message is used to generate popup
messages and CLICAST output messages. This option is written to the CCAST_.CFG file.
To set the “Maximum MC/DC expressions” option, you must check the box next to “Use static memory
allocation.” You can use any type of Coverage I/O with this option.
The “Maximum MC/DC expressions” option tells VectorCAST how many unique MC/DC expressions it
should allocate memory for. If a test case or compound test case execution encounters 500 MC/DC
expressions with unique values for its parameters, and the option is set to 499 or lower, then a text error
message is added to the coverage data indicating that the limit was reached. This text error message is
used to generate popup messages and CLICAST output messages. This option is written to the
CCAST_.CFG file.
The “Uncovered line indicator” option allows the selection of a character to be used to mark "uncovered"
lines in the Coverage Viewer and coverage reports. It allows easy identification of uncovered lines in
the VectorCAST reports outside of the VectorCAST GUI.
The feature can be used with Statement, DO-178B Level A, B, and C coverage types.
Valid character choices are: '#', '$', '%', '&', or '+'. The default is "no character".
The uncovered line indicator. The default value is a space, that is, none.
Reinstrumentation is necessary after changing this option: choose Coverage => Initialize, and select
the coverage type.
The “Animation play speed” option enables you to change the speed of coverage animation progress.
By default, coverage animation proceeds at the rate of one line per second. To go faster, increase the
value. To go slower, decrease the value. The ratio 1.0 represents 1 line / 1 second.
The speed ratio for coverage animation. The default value is 1.0, which is 1
line per 1000 msec.
Instrumentation Options
Choose Tools => Options and click the Coverage tab. Then click the Options tab and the
Instrumentation Options sub-tab if it is not selected.
When this option is True, VectorCAST instruments the 'for' loop in Ada to allow both True and False
outcomes. Coverage for Ada 'for' statements will show both True if the loop condition has ever
evaluated as True, and False if it has ever evaluated as False.
The True condition is met if the loop is entered. The False condition is met if the loop is not entered, or if
the loop is exited prematurely due to an exit statement.
When this option is not set, only the True condition is checked.
The default value is True, meaning Ada "for" loops are instrumented with both
True and False possible outcomes.
When this option is True, the instrumenter will instrument all code that it processes. The default value is
True.
When this option is False, the instrumenter will skip instrumenting code that it determines to be
unreachable. This is often due to constants in branch decisions, such as an if(0) or while(0), or
code following a return statement.
For example, in the source code below on line 3, the expression if(0) is constant and evaluates to
false. That makes the following code unreachable and therefore that code is not instrumented when the
code is processed.
The default value is True, meaning the instrumenter will instrument all code
that it processes. When this option is False, the instrumenter will skip
instrumenting code that it determines to be unreachable due to constants in
branch decisions, such as if(0) or while(0), or code following a return
statement.
Enabling this option will add additional Function Call coverage in the metrics reports. Note that only the
following coverage types with Statement coverage are supported:
Note: Coverage must be re-instrumented after enabling this option to view the combined
coverage.
For more information on combining Function Call coverage and other coverage types, see the online
KnowledgeBase article "Combine Function Call Coverage With Other Coverage Types".
Enabling this option will add additional Function and Function Call coverage instrumentation. Note that
only the following coverage types with Statement coverage are supported:
Note: Coverage must be re-instrumented after enabling this option to view the combined
coverage.
Enabling this option will change coverage instrumentation to instrument blocks for statement coverage.
This reduces the amount of program memory used. This option is applicable for C and C++ source files
only.
This feature is intended for users who have limited program memory for their application. The option
applies when instrumenting for Statement coverage and any aggregate coverage type that includes
Statement (such as DO-178B Level B, which uses both Statement and Branch instrumentation).
When set, this option results in a smaller foot-print for coverage instrumentation because only the last
statement of a contiguous block of statements is instrumented when determining Statement coverage.
The intermediate statements are considered covered by inference, but data is not collected for them. As
a result, the TESTINSS.DAT files and the instrumented files are smaller.
When importing coverage from one VectorCAST to another, both environments must have the same
setting for the "Instrument blocks for statement coverage" option when the source files were
instrumented. If not, an error message is displayed and the results are not imported.
This option provides coverage reporting for the implicit "default" case in a switch statement.
By default, this option is off, resulting in no reporting unless the "default" case is explicitly provided in a
switch-case block. When set, this option causes an explicit "default" case to be added to the switch
statement in the Coverage Viewer and Aggregate coverage reports, which then report on coverage of this case.
Setting this option will cause the implicit default case not provided at the
end of a switch-case block to be covered when instrumenting branch coverage.
The default value is False.
Setting this option to True causes a case statement reached by fallthrough to be treated as a covered
branch.
switch (foo) {
    case FIRST:
        i++;
    case SECOND:
        j++;
}
Branch Coverage:
If the option is true, as in previous versions of VectorCAST, then when foo is FIRST, both case FIRST
and case SECOND are covered branches.
Basis Paths:
With the example above, if foo is FIRST, previous versions of VectorCAST would show zero covered
basis paths. With the option set to False, basis path coverage now recognizes paths that include the
fallthrough, so VectorCAST shows one covered path.
When this option is enabled, the instrumentation storage footprint is reduced for statement and branch
coverage (C/C++). The statement footprint is reduced by up to a factor of 8 and the branch footprint is
reduced by up to a factor of 4. This is accomplished by packing statement and branch points as bits
instead of bytes.
When this option is disabled, a byte array is used instead, offering thread-safe instrumentation for
Statement and Branch coverage and it may improve test execution performance, due to fewer
operations to store the coverage data.
Note that when this option is False, more memory is used during instrumentation or the size of the
instrumented executable may be increased, which may impact users executing on a Target.
This option reduces memory overhead for embedded targets with limited memory by using a global
buffer instead of a buffer per unit.
Enabling or disabling this option requires all source code to be re-instrumented. When this option is
enabled, instrumenting a single source file may require you to recompile not just the instrumented
source file but all the others that contain higher instrumentation IDs.
> Instrumentation storage is reduced for each coverage kind via global
buffers.
> Buffered Coverage I/O no longer uses linked lists for coverage buffers.
This option decreases the size of the coverage data generated by the instrumented source code by
using a binary format.
A coverage TESTINSS.DAT file created using binary mode can only be imported into VectorCAST when
binary coverage is enabled; otherwise, a coverage import error is returned.
> From VectorCAST Options, Coverage Tab, set Coverage I/O type to Buffered.
> Enable the option Dump buffered coverage data on exit.
> From the Instrumentation Options sub-tab, enable Reduce memory overhead using global
buffers and Dump coverage in binary format.
When buffered coverage I/O and global buffers are in use, enabling this
option will produce a raw binary dump of the coverage to the "TESTINSS.DAT"
file, eliminating the code and processing overhead of the ASCII data
conversion. In addition, in most cases this file will be much smaller in size
than the ASCII equivalent. The default value is False.
When set, catch blocks are considered to be branches for coverage purposes.
Enabling this option will provide code coverage for C/C++ blocks as if they
are branches. The default value is True.
This option pertains to C/C++ units in Cover and unit test environments. When this option is false,
any function defined in a header file is covered in only one UUT, assuming the option “Provide
code coverage in headers” is on, and provided that the function is not inlined by the compiler or called by
multiple UUTs.
Setting this option causes VectorCAST to identify (inline) functions that appear in multiple units and to
show the same coverage for each such function in all the units in which it appears. The replication of
coverage data across units occurs dynamically during the processing of the coverage data. The
replication is not explicitly recorded in coverage scripts, but this option can be utilized when
importing results into a different environment, as long as the destination environment contains the unit for
which the original coverage data was recorded and the units in the destination environment were
instrumented with the option enabled.
Enabling this option will cause inline functions that appear in multiple
units to show the same coverage in each. The default value is False.
When this option is checked, VectorCAST causes coverage instrumentation to be performed for
initialized variable declarations in functions in C units. This option can be used with all C compilers,
including those that do not permit mixing executable statements with variable declarations. However,
setting the option to True causes VectorCAST to instrument only those variable declarations that are
initialized; those that are not initialized are left uncoverable. Therefore, if you are using this option, you
may see a reduction in the total number of coverable statements.
For C++ units, all declarations are instrumented regardless of the option's setting.
This option requires that you specify a preprocessor on the Tools => Options dialog, C/C++ tab.
By default, the instrumenter uses the functions defined in the c_cover_io.c file for each instrumentation
point in the source file. When this option is set, the instrumenter instead uses the macros defined in the
local version of the c_cover.h file, located in the environment directory. Changing the value of this
option takes effect upon instrumentation.
When this option is True, VectorCAST generates basis path tests treating constant if, while, do-
while, and for conditions, such as "if (0)", the same as non-constant branches. When the option
is False, then constant conditions for those statements do not add to the cyclomatic complexity.
VectorCAST will generate basis path tests treating constant if, while, do-
while, and for conditions, such as "if (0)", the same as non-constant
branches. If this option is not selected, then constant conditions for those
statements do not add to the cyclomatic complexity. The default value is
True.
When this option is enabled, global data initialization expressions will be treated as a write. An
initialization expression in a component is enough to create a data couple with other components.
When this option is enabled, VectorCAST ignores data couple access errors for a read following an
address-of operation.
When this option is enabled, data couple access errors for reads following an
address-of operation will be ignored. The default value is False.
Note that for templates in headers, you must include the header files which define the templates and
any source files which instantiate them in the allowlist when configuring your Base Directory.
When this option is enabled, the Aggregate Coverage Report uses the aggregate coverage from all
instantiations and displays a color strip to indicate the level of coverage (green = covered, yellow =
partial coverage, red = no coverage). Coverage can be viewed on the template functions themselves,
which are comprised of all instantiations. Coverage can also be viewed on the template instantiations
by expanding the template functions and clicking on the +/- symbol located to the left of the Line number
column.
Enabling this option allows the Coverage viewer to display a Templates tab within the Line Details
pane, which lists the template instantiations for the selected line and allows the user to identify which of
these instantiations are covered. In the example below, 4 possible coverable statements are shown:
one each for types char, double and int, and one for type T.
A limitation exists when template functions are instantiated using local types. A local type is a user-
defined type within function scope. In these cases, the coverage for these instantiations cannot be
differentiated from each other. VectorCAST combines these instantiations into a single group that is
used to show the merged coverage. The group is labeled using the template function name.
Groups representing instantiations from local types for a given template function are identified with an
asterisk in the Combined column. Hovering over a combined template instantiation lists each
instantiation comprising the group.
The best way to confirm if you have code containing combined instantiations is to generate either the
Metrics Report or the Aggregate Report and verify that the report contains a Combined Template
Instantiations table.
The Combined Template Instantiations table lists template instantiations that cannot be tracked
individually for code coverage. The table lists instantiations using their parameterized name. The
example below shows the results for the instantiations in the array.h file.
Note that most cases do not have combined instantiations. The Combined column is only displayed
when combined instantiations exist. In those instances without combined instantiations, the table and
Combined column are omitted.
MC/DC Options
Choose Tools => Options and click the Coverage tab. Then click the Options tab and the MC/DC
sub-tab if it is not selected.
When this option is selected, while instrumenting with MC/DC, if a conditional statement contains more
conditions than it is possible to instrument, VectorCAST aborts instrumenting that file. For C++, the
absolute maximum is 52, unless VCAST_HAS_LONGLONG is false or
VCAST_UNSIGNED_LONG_MCDC_STORAGE is true, which makes the maximum 31. For Ada, the maximum is 26.
This option primarily affects customers in the Medical Device industry. To more closely follow the
requirements of IEC-62304, a Coverage option named "Simplified condition coverage" is implemented.
When this option is True, entry points, decision outcomes, and all condition values are reported, but
equivalence pairs are not reported.
The option automatically takes effect when the environment is instrumented for IEC-62304 (Medical)
Class C coverage type.
The option can also be set by selecting Simplified Condition Coverage on the MC/DC sub-tab and
then instrumenting the environment for MC/DC or Statement+MC/DC coverage type.
When the environment is instrumented with this option True, the user is not permitted to perform the
MC/DC Test Case Analysis, as this action requires the MC/DC Equivalence Pairs data to be available.
The default value is FALSE except when in the IEC-62304 (Medical) industry mode, in which case it is
TRUE.
This option performs MC/DC analysis and equivalence pair reporting using MC/DC Masking as defined
in the FAA CAST Position Paper-6. This option is of interest to those projects performing DO-178B and
DO-178C software certification.
Use masking MC/DC to determine pairs for C and C++ files instead of unique
cause MC/DC. The default value is False.
For C++, the absolute maximum number of MC/DC sub-conditions supported is 52 by default. Setting
this option reduces the maximum number to 31. The config file value for this option is
VCAST_UNSIGNED_LONG_MCDC_STORAGE.
The “Maximum subconditions for MC/DC table pre-calculation” option enables you to limit the
application of MC/DC analysis to conditions that contain a particular number of sub-conditions. The
default value for this option is 8. Any statement containing more than 8 conditions will not be analyzed.
The maximum value for this option is 26.
This option is written to the Cover environment's .vcp file as MAX_MCDC_CONDITIONS. Its default
value is 8.
When VectorCAST generates equivalence matrices for an expression, if the expression has more than
this many subconditions, no table information will be displayed. Pair information will still be calculated.
The default value for this option is 20. The maximum value for this option is 52.
This option allows smaller conditions to use much less run-time memory. For conditions with a sub-
condition count under a configured threshold, a bit-packed array will be used to store run-time data.
Conditions exceeding the configured threshold will continue to use the legacy approach of a balanced
binary tree to store run-time data. The bit arrays are allocated to allow 100% run-time coverage without
the possibility of exhausting the MC/DC statement pool.
This option is enabled by default, but may be disabled to revert to using the previous approach of
balanced binary trees for all MC/DC conditions.
Enabling or disabling this option or changing the maximum conditions to use this method requires all
source to be re-instrumented.
See "Optimized MC/DC Storage Threshold" on page 209 for more information on defining which storage
technique is used for each expression instrumented.
The option "Optimized MC/DC storage threshold" defines which storage technique is used for each
expression instrumented. For conditions with a number of sub-conditions fewer than this value, the
optimizing bit-array is used to store run-time data. For conditions with a number of sub-conditions
greater than this value, the legacy approach of a balanced binary tree is used to store run-time data.
The option may be set in the range of 0 (condition with a single constant) to 8, with 8 being the default.
The value of this option controls the underlying storage technique for MC/DC
coverage data. For conditions with a number of sub-conditions greater than
this value, VectorCAST uses a self-balancing tree for storage. For conditions
with a number of sub-conditions equal to or smaller than this value,
VectorCAST uses a bit array. For most architectures, the "break even" point
for memory usage is 8 sub-conditions, which means for a condition with 9 sub-
conditions, it's possible to use less memory using the balanced tree
approach. The default value is 8.
Miscellaneous Options
Choose Tools => Options and click the Coverage tab. Then click the Options tab and the Misc sub-
tab if it is not selected.
Set this option to prevent VectorCAST from casting conditional expressions to boolean when
instrumenting for Branch or MC/DC coverage.
The “Coverage field width” option specifies the width, in characters, of the left column of the Coverage
Viewer. Increase this number if you have a large number of subprograms in the unit or a subprogram has
a large number of statements or branches. The default value is 8 characters.
The width (in characters) of the left margin of the coverage viewer for C/C++
files instrumented prior to VectorCAST 2020 and all Ada files. Increase this
number if you have a large number of subprograms or a subprogram with a large
number of statements or branches. Its default value is 8.
This option allows you to adjust the coverage database cache size. Increasing the value may increase
performance, but care should be taken not to set the value greater than available memory. Changes to
this option take effect when re-opening an environment.
Increasing the maximum coverage database cache size may increase performance,
but take care not to set it to a value greater than the amount of available
system memory. For a system with 2GB, a 1000 MB maximum cache is probably
sufficient. Changes to this option take effect when reopening an environment.
When this option is set to True, VectorCAST tells the compiler to treat asm functions like inline
functions so it can skip over them when adding defined functions to the coupling information. The option
is True for DIAB and Green Hills compilers, and otherwise False.
This option should be set to True if the compiler treats asm functions like
inlines. If a definition of the asm function may be in a header #included
into multiple source files in the same executable, then this option should be
set to true. This option is used when determining couples for control
coupling. The default value is True for DIAB and Green Hills compilers, and
otherwise False.
This option is set to True by default, meaning keep the intermediate files.
Users needing to save disk space can set the option to False, which directs VectorCAST to delete the
preprocessed output, usually in files named <unit>.1.tu.preproc, which can get quite large. For the
option to take effect, all sources must be reinstrumented.
This option must be True to enable full support for Change-Based Testing (CBT), Covered
By Analysis (CBA), and probe points.
The value for this option is False by default. When this option is set to True, VectorCAST retains the
various debug files that are produced during instrumentation, including, for example:
<unit>.c.1.vcinst.in and <unit>.c.1.vcinst.out.
Set this option to preserve debug files produced during instrumentation. The
default value is False.
The “Post-process instrumented files command” option provides a means to execute a batch file
(Windows) or shell script (Linux) once after each unit in the environment is instrumented. This feature is
useful for performing post-processing on the unit’s instrumented source code. Specify the full or relative
(to the environment directory) path to the batch file or shell script in the option. The full path to the unit is
passed as an argument to the script.
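A minimal sketch of such a script for Linux follows; the function name and the report.txt output file are illustrative, not part of VectorCAST. It appends the unit's path and the instrumented file's contents to report.txt in the current (environment) directory:

```shell
# Hypothetical post-process script (Linux shell version). VectorCAST
# passes the full path to the instrumented unit as the first argument.
append_unit_to_report() {
  unit_path="$1"
  echo "$unit_path" >> report.txt   # record the unit's full path
  cat "$unit_path" >> report.txt    # append the instrumented source
}

# When installed as the post-process command, the script body would be:
#   append_unit_to_report "$1"
```

A Windows batch file performing the equivalent `echo` and `type` commands could be used instead on that platform.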
Once the path to the example script is specified in the “Post-process instrumented files command”
option and the environment is instrumented for coverage, this example script causes the file path to the
unit and the instrumented file itself to be appended to the file report.txt, located in the environment
directory. The first few lines of report.txt are shown below. These lines are repeated once for each
unit in the environment.
C:\vcast_tutorial\CPP\Tutorial_cover\manager.cpp
/* VectorCAST/Cover */
#ifndef VCAST_CONDITION_TYP
#define VCAST_CONDITION_TYP int
#endif
#ifdef __cplusplus
extern "C" {
#endif
/*
---------------------------------------
-- Copyright 2019 Vector Informatik, GmbH --
-- East Greenwich, Rhode Island USA --
---------------------------------------
*/
...
If you have an external text editor defined on the GUI tab, clicking the Edit File button ( ) opens the
script in the specified external text editor.
Similar to the "Post-process instrumented files command" option, this option allows customization of
the vcast_c_options.h file after VectorCAST instruments or un-instruments the source files and
updates the vcast_c_options.h file. This option applies to Unit Test and Cover environments with
C/C++ source files.
If you have an external text editor defined on the GUI tab, clicking the Edit File button ( ) opens the
script in the specified external text editor.
This option allows you to specify that coverage instrumentation for specific functions or whole files be
suppressed in C/C++ source files.
Click the button to add a file or function to be suppressed during instrumentation, using the
following syntax:
For example:
Instrumentation for two source files in the Cover environment having the same name but residing in
different locations is suppressed in both files.
The following examples show which functions are denylisted based on a given rule.
To remove a file or function from the list, highlight the item and click the button.
The following sections discuss several ways to suppress test code coverage.
If you have a directory that only contains test code, the option
VCAST_SUPPRESS_COVERABLE_FUNCTIONS should be used to suppress coverage for all of the
coverable functions defined in that directory.
This path could also be made portable with the following rule, instead: */repo/progs/*:*
Note: This rule would match any path that contains repo/progs (e.g.
/home/repo/progs and /home/repo/repo/progs).
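The note above can be illustrated with shell glob matching, used here purely as a stand-in for VectorCAST's own rule matcher (the function name is hypothetical, not part of VectorCAST):

```shell
# Stand-in illustration: check whether a path matches the file part of
# the rule '*/repo/progs/*:*' using shell pattern matching.
matches_repo_progs() {
  case "$1" in
    */repo/progs/*) return 0 ;;  # matched by the rule
    *)              return 1 ;;  # not matched
  esac
}
```

Both /home/repo/progs/a.c and /home/repo/repo/progs/b.c match, while paths outside repo/progs do not.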
If an environment variable is used for the Cover environment base directory, it is recommended to use
the corresponding rule instead; for example, if the environment variable REPO is set to /home/repo,
use the rule $(REPO)/progs/*:*.
If your test code files do not reside in a specific directory, code coverage can be suppressed for
individual files.
For example, the rule $(REPO)/src/unit_1_tests.cpp should be used to suppress test code
coverage for the file unit_1_tests.cpp in the repository's src directory. If test code files follow the
same naming convention, the rule $(REPO)/src/*_tests.cpp can be used to suppress code
coverage for all files ending in _tests.cpp in the src directory.
There are two ways coverage can be suppressed for inlined test code in user source files.
First, if the test function my_unit_test_1 is defined in repo/src/source.cpp, the rule $(REPO)
/src/source.cpp:my_unit_test_1 can be added to the Suppressed Coverable Functions list to
suppress coverage for that test function. If there are several test functions in that file that all start with
the prefix my_unit_test_, the rule $(REPO)/src/source.cpp:my_unit_test_* should be
used to suppress coverage for all of them. If all unit test functions in the repository start with the prefix
my_unit_test_, the rule $(REPO)/*:my_unit_test_* should be used.
Second, if inlined test code functions do not adhere to a naming convention or the user does not wish to
modify the Cover environment configuration, the vcast_dont_instrument comments can be used
to suppress coverage for the test code defined within them.
In the example below, the user added the vcast_dont_instrument comments to their source file to
suppress the functions one_test and test_another:
//vcast_dont_instrument_start
void one_test()
{
MY_ASSERT( fn_1() );
}
void test_another()
{
MY_ASSERT( fn_2() );
}
//vcast_dont_instrument_end
In order to collect coverage for header files, all source files that include them must be added to the
Cover environment. If you wish to collect coverage for a library that is composed only of header files,
the option VCAST_SUPPRESS_COVERABLE_FUNCTIONS should be used to suppress coverage
for all coverable functions in the source file directory(ies).
For example, if your repository contains the directories progs, src, and inc, and the inc directory
contains your header library, the rules $(REPO)/progs/*:* and $(REPO)/src/*:* should be
added to the Suppressed Coverable Functions list. This will only collect coverage for the header library
in inc.
Result Changes
Choose Tools => Options and click the Coverage tab. Then click the Options tab and the Result
Changes sub-tab if it is not selected.
By default, when coverage data is removed from data sets, the empty data sets are removed from
VectorCAST. To override this behavior, select the checkbox next to the data sets you wish to
retain. There are three data set types:
This option specifies a custom directory for the coverage TESTINSS.DAT files generated by the
instrumented executable. This option provides the ability to organize and maintain the coverage result
files created during test execution. The specified directory can be absolute or relative and can include
environment variable substitution syntax, e.g. $(ENV)/directory.
All environment variable substitution is done within the instrumented executable. This allows changing
the coverage results directory per invocation of the same instrumented executable.
If a relative path is specified, the coverage result files are saved relative to the current working directory
of the execution program.
If a directory is not specified, the coverage result files are saved to the current working directory of the
executing program.
Save coverage result files to the specified directory. This option provides
the ability to organize and maintain the coverage result files created during
test execution. The specified directory can be absolute or relative and can
include environment variable substitution syntax, e.g. $(ENV)/directory.
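A rough shell emulation of the substitution idea (the variable name RESULTS_ROOT and the paths are made up for illustration; the real expansion happens inside the instrumented executable at run time):

```shell
# The same instrumented binary can write its results to a different
# directory on each run, because the $(VAR) reference in the configured
# directory is expanded when the executable runs. Shell emulation:
RESULTS_ROOT=/tmp/cov_run1
template='$(RESULTS_ROOT)/results'
expanded=$(printf '%s' "$template" | sed "s|\$(RESULTS_ROOT)|$RESULTS_ROOT|")
# expanded is now /tmp/cov_run1/results; re-running with a different
# RESULTS_ROOT value yields a different results directory.
```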
This option tells VectorCAST to open the coverage data file for appending, rather than always creating
the data file.
By default, coverage data generated by the instrumented executable is stored in a file named
TESTINSS.DAT. To avoid overwriting this file during test execution, the filename can be made unique
by selecting one of the following options from the Coverage result filename drop-down menu to
append to the default file name:
> Default
> Append Epoch
> Append PID
> Append PID and Epoch
When this option is set to True, the content that would otherwise go to the TESTINSS.DAT file is sent
to Standard Output after executing the instrumented application. This output can then be redirected to a
results filename, such as MY_RESULT.DAT.
The option takes effect after instrumentation and is available for Cover environments with C/C++
source files only.
When this option is True, the content that would otherwise go into the
TESTINSS.DAT file is sent to standard output after executing the instrumented
application. This option can be set for Cover environments with C/C++ source
files only, and it takes effect after instrumentation. The default value is
False.
When this option is true, the empty statement, ';', is considered coverable.
This option will cause separate source files which are included into the
original source unit to be instrumented when initializing code coverage for
C/C++ files. The default value is True.
Setting this option will cause the implicit default case not provided at the
end of a switch-case block to be covered when instrumenting branch coverage.
The default value is False.
The Coverage Viewer tab enables you to control the fonts and colors used in the Coverage Viewer. By
default, covered lines are shown in green, uncovered lines are shown in red, and non-executable
statements are shown in black. It is recommended that you use a monospaced font for the Coverage
Viewer (e.g. Courier) as that allows the tables and report to be aligned properly.
Partially Covered Lines – lines that have multiple outcomes where some but not all outcomes have
been tested.
Uncovered Lines – lines or branches that have not yet been executed.
The Font... button for each line type and the Change All Fonts... button enable you to select the font,
font style and size for one or all line types in the Coverage Viewer. The Default Fonts button enables
you to revert to the default font settings. Default font settings are: Courier font, Normal Font style, 10 pt
Size.
Click the Default Fonts button. This option returns the font and color for each line type to the default
settings: Courier font, Normal Font style, 10 pt Size.
The Text Color... buttons enable you to change the text color for each of the five line types.
The Background Color... buttons enable you to change the background color for the five line types.
The Selection... button enables you to change the background color for selection highlights. This
setting can be reset to the default color by selecting View => Default Layout from the Menu Bar.
The Find... button enables you to change the background color for search highlights. This setting can
be reset to the default color by selecting View => Default Layout from the Menu Bar.
The MC/DC Table... button enables you to change the background color of MC/DC Tables in the
Coverage Viewer. This setting can be reset to the default color by selecting View => Default Layout
from the Menu Bar.
By default, adding a single test result file to a Cover environment asks for a name to be entered. When
multiple test results are added at once, VectorCAST gives you a choice to automatically name them
according to their filenames. This option causes VectorCAST to automatically name a single test result
using the filename just as is done for multiple test result files. For example, if the test result file
TESTINSS.DAT file is added and this option is on, the name of the test result is TESTINSS.
When this option is set, VectorCAST will automatically generate the result
name for this result file. Its default value is No (0).
Note: For Ada, you must uncomment the appropriate block of code in the OPEN_FILE routine
of 'vcast_cover_io.adb' before compiling the instrumented source files.
By default, this option is false, which means that the coverage results gathered during your application
execution are written to a new TESTINSS.DAT file each time you invoke the application. If you do not
rename the TESTINSS.DAT file in between application invocations, then data is overwritten. Setting
the option to true causes VectorCAST to append new coverage results to the same TESTINSS.DAT
file across consecutive invocations of your application.
When set to 1 (True), it adds the line #define VCAST_APPEND_TO_TESTINSS 1 to the file vcast_c_options.h
in the environment directory, which is #included by the file c_cover_io.c/cpp. Alternatively, you can
use the compile define VCAST_APPEND_TO_TESTINSS when compiling instrumented source files
into the instrumented application, or you can add this define as a compilation argument for a source file
(Environment => Edit Source Options).
After changing this option, you must recompile the instrumented executable.
This option tells VectorCAST to open the coverage data file for appending,
rather than always creating the data file. The default value is False.
This option allows you to set the unit number applied to the first source file added to a Coverage
environment. This option is only necessary if you intend to combine coverage data files
(TESTINSS.DAT) from several Cover environments into one Cover environment, because the unit
number is used in the TESTINSS.DAT file and collisions occur when several units are all
numbered 1. Before adding any source files, specify that the first unit added to a particular environment
should start with <num>, with the maximum being 100,000.
Setting this option is not necessary if you are importing from another environment (Coverage => Import
Results from Environment) or importing a coverage script (Coverage => Import from Script) because
VectorCAST does the unit-number translation for you.
This sets the starting unit number to be used for units added to the coverage
project. Note, this can only be changed prior to adding units to the project.
This option causes VectorCAST to put the coverage utilities in a file other than the instrumented unit.
Set this option to the file extension you want to use (such as .ads). The filename generated is
vcast_types_<unit number>.<file ext>, one for each unit instrumented.
Specify that the coverage utilities be placed in a unit of type <ada file
ext> rather than the instrumented unit.
Add/Remove/Default Extensions
A source file is processed based solely on the source file extension – if a source file extension does not
match any extension in the lists, the file will not be processed. The Add and Remove buttons enable
the user to manipulate the lists. The Default buttons reset the lists to the standard language file
extension as shown above.
If your compiler uses a different extension than the default extensions provided, additional extensions
can be added using the Add button. When adding source files recursively, any file matching one of
these extensions is recognized as a source file.
The Content sub-tab is used to change the way VectorCAST displays test execution reports and
coverage reports on an environment-wide scale. Pass your cursor over any of the options to see an
explanation of that option in a tool-tip.
Set to true if you want the Metrics report to be sorted by search directory,
rather than by unit alphabetically (default). The default value is False.
When this option is True, the annotated source code for all units is included in the Aggregate
Coverage Report, even if a unit has no coverage data.
In the Aggregate Coverage Report, include the annotated source code for all
units in the environment, even if a unit does not have coverage data. If
false, “No coverage data exists” is displayed for those units instead. The
default value is False.
The report contains the following information which the user may use to refactor the expression so that
VectorCAST is able to instrument it completely:
Note: Uninstrumented expressions may be due to the user having turned off MC/DC
instrumentation by setting options such as "Instrument logical expressions in assignment
statements" to False. Contact VectorCAST Technical Support for clarification on the effects of
instrumentation options on the content of the Uninstrumented Expressions report.
The default value is False, except for Industry Mode "DO-178 B/C (Avionics)",
for which it is True.
When True, this option adds a section to the Metrics report that identifies
the unit, subprogram, and line number of any branch or MC/DC expressions that
have constant values, such as "if (1)". The default value is False.
The other choices for filtering reduce the amount of detail in the report. Choosing "Include only units"
(Subprogram_Detail) causes the Metrics report to show only the unit names and their totals for the
number of subprograms, complexity, and coverage.
Choosing "Show only Grand Totals" (Unit_Detail) causes the Metrics report to show only the very last
line, the GRAND TOTALS for each column.
Note: If the option "Sort Metrics report by directory" is also set, then the "Show only Grand
Totals" setting causes the Metrics report to have a single GRAND TOTAL line for each
directory.
Specify which results to filter out of the Metrics Report. The default value
is No_Filtering.
Copies the contents of the 'Test Notes' box in the Test Case Editor to the
Test Case Management Report. Its default value is FALSE.
Show the VectorCAST version in the configuration section of the reports. The
default value is False.
For example, setting this option to ls -l (on Linux) causes the following information to be added to the
two reports:
The full path to each UUT is derived from the Source directories used in the environment.
Command executed for each source file in the test environment, and a section
added to the Test Case Management and Full reports listing the name of each
unit in the test environment and the information generated by this command.
If this command is empty, then the File Version section of the report will
not be created. <command> is a quoted string. It has no default value.
To use the template in test case Notes, first specify the template file. If the Notes tab of a test case is
empty and a template file is specified, then the template is automatically loaded when the test case is
opened. You can also right-click in the Notes tab and choose Import template. The text from the
template file appears in the Notes tab at the cursor location.
This option allows you to enter a path to a text file that contains a
template for the Notes section of the test case.
Report Header
This option prepends a text string to the title or adds a new section to the Full Report, just below the
title, to include the contents of a file in TEXT or HTML format. Input can be a text string, or a file
containing several lines.
If an HTML file is provided, the contents should provide a <div> section with a table. For example:
<div class='report-block'>
<h2>Additional Data</h2>
<table class='table table-small'>
...
</table>
</div>
Prepend a text string to the title or add a new section to the Full Report,
just below the title, to include the contents of a file in TEXT or HTML
format. Input can be a text string, or a file containing several lines. If an
HTML file is provided, the contents should provide a <div> section with a
table.
For example, use the text "N/A" for blank cells in reports to indicate that no Input Value or Expected
Value is intended in this position.
Text to show in blank cells in the Execution and Test Case Data report
sections. The default value is unset.
The Format sub-tab is used to change the formatting options for HTML and text reports. Pass your
cursor over any of the options to see an explanation of that option in a tool-tip.
Background color used for passing data cells in tables in the UI. The default
color is #ccffcc, a light green.
Background color used for failing data cells in tables in the UI. The default
color is #ffcccc, a light pink.
Background color used for partially passing data cells in tables in the UI.
The default color is #ffffcc, a light yellow.
Report Format
Choose Tools => Options and click the Report tab, then click the Format sub-tab.
The Report Format option determines if reports are displayed in HTML or Text. The default setting is
HTML. If you choose HTML, you can view the reports within VectorCAST or in an external browser. If
you choose text, you can view them within VectorCAST or in an external text editor.
Note that a test does not need to be re-executed in order to see the Execution report in a different
format.
Output format for VectorCAST reports: HTML or TEXT. The default value is
HTML.
To apply a custom CSS file to use in the HTML reports, enter the path to the custom CSS file. Your
custom style sheet is appended when reports are generated: its contents are embedded in the
<style> section following the default styles.
If the option is not set or the file cannot be found, the default CSS is used. The default CSS is located at
$VECTORCAST_DIR/python/vector/apps/ReportBuilder/css.
The Text sub-tab has options that are honored when the Report Format is Text.
The default Unit column width is 19 characters. To change the width, specify another size in the spin
box.
Width (in characters) of unit column in text reports. Its default value is
19.
The default Subprogram column width is 21 characters. To change the width, specify another size in the
spin box.
The default Testcase column width is 24 characters. To change the width, specify another size in the
spin box.
Width (in characters) of testcase column in text reports. Its default value
is 24.
The default Date column width is 11 characters. To change the width, specify another size in the spin
box.
Width (in characters) of date column in text reports. Its default value is
11.
The default Results column width is 13 characters. To change the width, specify another size in the
spin box.
Width (in characters) of result columns in text reports. Its default value is
13.
The default Complexity column width is 10 characters. To change the width, specify another size in the
spin box.
Width (in characters) of complexity column in text reports. Its default value
is 10.
The default Coverage Results column width is 18 characters. To change the width, specify another size
in the spin box.
Width (in characters) of coverage result columns in text reports. Its default
value is 18.
The default Notes column width is 30 characters. To change the width, specify another size in the spin
box.
Width (in characters) of notes columns in text reports. Its default value is
30.
Delimiter
The character separator for the Cover report created by the CLICAST report commands: cover
report csv_metrics and reports alternate. Enter the character in quotes, as in “@”. To enter
a tab, use “\t”. Valid delimiters are: ? , ' ; | { } [ ] @ ~ # $ _ \t \n
Character separator used in the two CLICAST report commands cover report
csv_metrics and reports alternate. The default value is comma “,”.
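As an illustration, a consumer of the delimiter-separated output might split each line in place on the configured character. This is a sketch only: the field layout used below is hypothetical, not the exact csv_metrics schema.

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch: split one line of a delimiter-separated metrics
   report in place. Returns the number of fields found. */
int split_fields(char *line, char delim, char *fields[], int max_fields) {
    int n = 0;
    char *p = line;
    while (n < max_fields) {
        fields[n++] = p;              /* record start of this field */
        char *sep = strchr(p, delim);
        if (sep == NULL)
            break;                    /* last field reached */
        *sep = '\0';                  /* terminate field in place */
        p = sep + 1;
    }
    return n;
}
```

The same routine works for any valid delimiter, including "\t", since it operates on a single character.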
This option will cause the text in the notes section to wrap at the first
space before 80 characters. The default value is FALSE.
Change-Based Testing
Change-Based Testing (CBT) automatically identifies the minimum tests that must be run for each
code change. VectorCAST scans the code for changes since the last test run, identifies the sub-set of
tests that are affected by the change and automatically re-runs only those tests.
Change-Based Testing results in greatly reduced test time, allowing for more frequent testing, and
ensuring that bugs are fixed when they are introduced, instead of during a full test cycle at a later date.
To perform an incremental build, right-click on a node in the Project Tree and select Build/Execute =>
Incremental from the context menu. Upon completion, the Manage Incremental Rebuild Report is
produced and displayed in the MDI window.
After an incremental rebuild the Manage Incremental Rebuild Report is produced to summarize the
results. The report provides the status from the Incremental Rebuild, the number of tests preserved for
that environment, and the number of tests that required re-execution.
VectorCAST CBA provides the ability to do code analysis directly within VectorCAST using the
Coverage Analysis Editor and combines the test and analysis coverage metrics in a single report.
VectorCAST CBA can also import analysis files, including those generated by third party tools.
These examples are not intended to be step-by-step tutorials, but rather an overview of the tool features
that are applicable to each workflow. All examples were run using Windows.
Note: For some time, we have been updating VectorCAST to use an alternate coverage viewer
mode called Source File Perspective (SFP). Normally coverage is viewed using the pre-
processed file or translation unit. As an alternative to Translation Unit (TU) viewing, you can
choose to view the coverage from the original Source File Perspective. There are still more
functions in VectorCAST to be updated for this viewing mode. But if you find this view valuable,
please let us know as we continue to update other features of VectorCAST to utilize this new
viewing mode.
CBA has been updated to use SFP as an option. A section, "Using Coverage Analysis With
SFP" on page 255, has been added to end of this chapter to show the workflow using this new
perspective.
To add Coverage Analysis for a selected source file, select the CBA button on the Toolbar. Clicking
the CBA button requires a source file to be selected. Alternatively, right-click a source filename in the
Source Files pane and select Add Coverage Analysis from the context menu.
The Coverage Analysis Editor opens in the MDI Window and a Covered By Analysis node displays in
the Results pane. A CBA data file is created for the selected source file and this Analysis result
displays beneath the Covered By Analysis node. When an Analysis result is first created, it is empty
and contains no data. Empty Analysis results display the icon, even if they contain notes or
requirements. Each CBA data file corresponds to a single source file, but you can create multiple
Analysis results per source file.
In the Coverage Analysis Editor, the Notes tab on the right is a free-form text editor allowing you to
annotate the associated analysis. Use the Requirements pane to trace the Project Requirements and
Test Case Requirements associated with the selected code. The Requirements tab is populated by
VectorCAST Requirements Gateway. Use the Save button to save inputs.
Statement Coverage
With Statement coverage instrumented, the Coverage Analysis Editor displays boxes on the left for
statement outcomes that are uncovered. To mark a statement or condition as "considered covered",
select the check box. Lines covered by analysis are displayed in blue.
In the Coverage Analysis Editor, a check box does not appear next to a line if it is already covered by a
test result. Regular test results, if they are present in the environment, take precedence over CBA
results.
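The precedence rule can be modeled as a small decision function. This is an illustrative sketch of the behavior described above, not VectorCAST internals; the enum names are assumptions.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the display precedence described above: execution results
   take precedence over CBA annotations for a given line. */
typedef enum { LINE_UNCOVERED, LINE_CBA_ONLY, LINE_EXECUTED } line_status;

line_status line_display(bool covered_by_test, bool covered_by_cba) {
    if (covered_by_test)
        return LINE_EXECUTED;    /* no CBA check box is offered */
    if (covered_by_cba)
        return LINE_CBA_ONLY;    /* line shown in blue */
    return LINE_UNCOVERED;       /* check box available for analysis */
}
```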
Branch Coverage
When Branch coverage is instrumented in the environment, the Coverage Analysis Editor displays
each subprogram with a True branch (T) for the entry result, and True and False branches (T) (F) for
each expression.
To mark a condition as having the True branch covered, select the check box in the (T) column. The "(T)
" is displayed in blue. The branch is now partially covered because the True branch is Covered By
Analysis, and the False branch is not covered.
To mark an expression as having the False branch covered, select the check box in the (F) column in
the Coverage Analysis Editor. The "(F)" is displayed in blue.
As with Statement coverage, if either or both of the True and False branches are already covered by
regular test results, then the check box is not available in the Coverage Analysis Editor, and the
expression shows yellow if partially covered, or green if already fully covered.
MC/DC Coverage
To add coverage analysis for a condition with multiple sub-conditions when MC/DC Coverage is
instrumented, you annotate that one or more rows of the Equivalence Pairs table are covered.
To access the equivalence pair table, click the arrow to the left of the condition. The truth table
opens and a check box is displayed for each row. When a checkbox is selected, the associated sub-
condition is considered "covered" and displayed in blue.
Double-click an existing Analysis result in the Test Results pane to open the Coverage Analysis Editor.
Alternatively, right-click on the Analysis result and select Edit Requirements/Notes from the pop-up
menu. The contextual menu allows you to Remove, Rename, Update and view the associated
properties of the Analysis result. Removing an Analysis result deletes its data, notes, and requirements
from the environment.
Lines covered only by CBA results are displayed in green (indicating the line is covered) with a blue "A"
(indicating the line is covered only by CBA).
Lines covered by both regular execution results and CBA results are displayed in green (indicating the
line is covered) and with a blue asterisk "*" (for statement) or a blue "T" or "F" (for branch), which
indicates that it is also covered by CBA.
Use the buttons provided in the Covered-By-Analysis group box to change the font, text color and
background color. When changes are complete, select the Apply button to save the changes to the
.CFG file.
Alternatively, from the Menu Bar select Coverage => Delete Coverage Analysis. This menu item
removes CBA Analysis results and their Notes and Requirements from the environment.
For a CBA Analysis result with the icon indicating it is empty, double-click the icon to view the
information in the Notes.
To access the Covered By Analysis Report, select Environment => View => Covered By Analysis
Report from the Menu Bar.
The CBA Report has two sections: The Covered By Analysis, Per Line section and the Covered By
Analysis Result File section(s).
The Covered By Analysis, Per Line section lists each covered line in the unit, and identifies which
CBA result file covers that line. The Subprogram ID corresponds to the left-hand number in the
Coverage Viewer and CBA Editor. The Line number corresponds to the right-hand number in the
Coverage Viewer and CBA Editor when the coverage type is Statement. The Line number represents
the branch or decision number when the coverage type is other than Statement. There may be more
than one CBA result covering a line. If the CBA result covers the True (T) outcome of a branch or
decision, "(T)" is displayed in the line column. Similarly, if it covers the False (F) outcome, "(F)" is
displayed.
The Covered By Analysis Result File section includes one table for each CBA result file. The table is
similar to the Metrics Report in that it shows the number of statements and/or branches covered by that
CBA result. For Statement and Branch coverage, only the number of lines or conditions that are
Covered by Analysis are shown.
For MC/DC or Level A coverage, the number of conditions covered by CBA are shown. However, for
the MCDC Pairs column, both CBA results and execution results are considered. An MC/DC Pair is
considered satisfied if at least one of the pair components (or row) is a CBA result. The remaining
component may be either a CBA result or an execution result.
In the example below, one pair is covered. One component is covered by the CBA result file Add_
Included_Dessert (row 1). The other component (row 3) is covered by an execution result.
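The attribution rule for the MCDC Pairs column can be sketched as a small predicate. This is a hedged model of the rule stated above (a pair counts toward a CBA result file when at least one component is a CBA result and the other is covered by CBA or execution); the names are assumptions, not VectorCAST's implementation.

```c
#include <assert.h>
#include <stdbool.h>

/* Coverage source for one component (row) of an MC/DC equivalence pair. */
typedef enum { ROW_UNCOVERED, ROW_CBA, ROW_EXECUTION } row_coverage;

/* A pair is attributed to a CBA result when at least one component is a
   CBA result and the remaining component is covered by CBA or execution. */
bool pair_satisfied_with_cba(row_coverage a, row_coverage b) {
    bool has_cba = (a == ROW_CBA) || (b == ROW_CBA);
    bool both_covered = (a != ROW_UNCOVERED) && (b != ROW_UNCOVERED);
    return has_cba && both_covered;
}
```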
Metrics Report
In the Metrics Report, when CBA results are present in the environment, they are displayed in italics in
the row below the subprogram and the coverage achieved by test execution is displayed below the CBA
results.
Steps to change the coverage view mode from Translation Unit (TU) to Source File (SFP):
3. In the Options sub-tab, select Source File in the Coverage perspective area.
5. Open the coverage viewer for a file by selecting the file, right-clicking, and choosing Open Coverage
Viewer.
> Right-click source and choose Add Coverage Analysis or click Toolbar button .
> User is prompted to choose a CBA Result or enter a new result.
>> Note: "Active CBA Result"
>>> The CBA tab must be current to add coverage analysis.
>>> The Active CBA result will store all coverage analysis.
>>> The Active CBA result can be switched.
> Coverage viewer opens with the CBA tab set current.
> Uncovered coverable items show empty checkboxes.
>> Note: Covered items cannot be marked as covered by analysis.
>> Check the box(es) to mark as covered by analysis.
>> Enter a justification in the Notes pane.
>> Click Save.
>>> Coverage now reflects the CBA:
l In the Environment view
l In the source view of the Source Coverage Viewer
l In the line details
l In the reports
>> Uncovered pair rows display an empty checkbox under the Covered column.
>> To add coverage analysis for a condition in an MC/DC expression:
>>> Select the condition in the Coverable section. The MC/DC table now
shows rows only for that condition.
>>> Check the empty checkbox(es) under the Covered column for the
appropriate rows.
>>> Enter a justification in the Notes.
>>> Save.
>>> Coverage now reflects the CBA
l The pair coverage for this condition will now show an "A" for
"Analysis".
Probe Points
VectorCAST Probe Points allow the user to insert user-defined blocks of code, or probe points, before
or after any executable statement. Probe points can be inserted during unit, API, or system testing and
are created and maintained on a per-unit basis. The probe points are maintained as the source code
changes, unless the coverage type changes or the source code changes such that the probe point no
longer references the same coverable line. In that case, the probe point is dropped and the user is
notified in the Message window.
Entry and Exit function probe points can be entered via the Probe Point Editor. Small black dots ● in the
left-hand column of the Coverage Viewer indicate a possible probe point location. To insert a probe
point, click on the black dot next to the executable line where you want to insert the probe point. The dot
will change to a green circle indicating an active probe point and a new node will be added to the
Probe Points editing pane on the right with text edit boxes activated under the node.
The Probe Point Editor allows you to enter the source code for the probe point. To insert and execute a
function entry probe point, click on the top text edit box to activate the editor widget and enter the code
to be inserted upon entering the function.
To insert and execute a function exit probe point, click on the bottom edit box to activate the editor
widget and enter the code to be inserted at all exit points of the function.
In the Probe Point Report, function probe points are indicated by listing (function) in the Line
column. Function entry probe points are listed under the Code Before column. Function exit probe
points are listed under the Code After column.
Set this option to True to force the insertion of function based probe points
before the declarative region of the function in C. The default value is
False.
The default value for this option is False, meaning insert the function probe point in the default location,
after the declarative section. The environment must be re-instrumented for a change to this option to
take effect.
In our example below, we insert a function probe point for the Place_Order function. By default, the
probe point is inserted after the declarative section as shown.
When the option is set to True, the function probe point is inserted before the declaration.
In environment scripts (.env, .enc) and probe point files (.pp), function entry probe points are
specified in the usual manner with the addition of the following line to identify the probe point as a
function entry probe point:
PROBE_LOCATION: FUNCTION
For example:
PROBE_LOCATION: FUNCTION
PROBE_FUNCTION: my_function
PROBE_CODE:
vcast_probe_print("*** BEGINNING OF FUNCTION ***\n");
END_PROBE_CODE:
In environment scripts (.env, .enc) and probe point files (.pp), function exit probe points are
specified in the same manner as a function entry probe point, with the addition of the following lines:
PROBE_CODE_AFTER:
END_PROBE_CODE_AFTER:
For example:
PROBE_LOCATION: FUNCTION
PROBE_FUNCTION: my_function
PROBE_CODE_AFTER:
vcast_probe_print("*** END OF FUNCTION ***\n");
END_PROBE_CODE_AFTER:
> When using Branch or Statement+Branch coverage with a regular probe point on the function's
entry point (C++ only).
> When using a regular probe point on a declaration that is executable.
For C source code with the option VCAST_FUNC_PROBE_BEFORE_DECL set to False (default), the
order of execution is:
For C source code with the option VCAST_FUNC_PROBE_BEFORE_DECL set to True, the order of
execution is:
> Select an instrumented source file in the Source Files pane and click the Edit Probe Points
button on the Toolbar.
> Right-click the instrumented source file in the Source Files pane and select Edit Probe Points
from the context menu.
> In the Coverage Viewer, click the Edit Probe Points button located on the tab.
> In the Coverage Viewer, click on any Probe Point Dot. Any of the states (Empty, Enabled, or
Disabled) will open the Editor.
The Coverage Viewer opens in the MDI area in the left pane and displays the coverage and probe
points. The Probe Point Editor opens in the right pane.
Set this option to False to disable the use of probe points. The default
value is True.
Small black dots ● in the left-hand column of the Coverage View pane indicate a possible probe point
location. To insert a probe point, click on the black dot next to the executable line where you want to
insert the probe point. The dot will change to a green circle indicating an active probe point and a new
node will be added to the Probe Points editing pane on the right with text edit boxes activated under the
node.
Note: Probe points are not available for MC/DC sub-conditions. Only coverable statements and
branches can be probed.
The Probe Point Editor allows you to enter the source code for the probe point. To insert and execute a
probe point before the line of code is executed, click on the top text edit box to activate the editor
widget. To insert and execute a probe point after the line of code is executed, click on the bottom text
edit box and activate the editor widget.
When Statement coverage is on, only the top text edit box is available for return() statements.
When Branch coverage is on, no probe point can be set at the entry point to a function.
To add a File Scope probe point, first instrument the unit or environment for coverage. In the example
below, the environment is instrumented for Statement coverage.
Open the Probe Point Editor by selecting an instrumented source file in the Source Files pane and
clicking the Edit Probe Points button on the Toolbar. Alternatively, you can right-click the UUT or
subprogram in the Test Case Tree and select Edit Probe Points from the context menu.
The File Scope Probe Point Editor is located in the top right pane of the Probe Point Editor. Click within
the editor to activate, and enter the File Scope probe point code.
Clicking on the probe point icon in the File Scope Probe Point Editor toggles the File Scope probe
point between the active and inactive states. To deactivate the File Scope probe point, single-click on
the active probe point icon . The inactive probe point icon is displayed in the File Scope Probe Point
Editor.
Click one of the Test Compile buttons to test the code. Clicking the Test Compile button will
test compile all active probe points (both File Scope and regular function scope). See "Test Compile a
When you save and apply the probe points for the unit, the File Scope probe point is inserted in the
instrumented source file just after the last #include line.
File Scope probe points are written to the regression scripts along with the regular function scope probe
points.
If you attempt to test compile a unit with a File Scope probe point in an old environment, you will receive
an error message notifying you to re-instrument the unit.
If the old environment contains regular probe points and you attempt to add a File Scope probe point, a
warning message appears when you attempt to test compile a unit. You have the option to continue the
test compile operation on the regular probe point or to abort the operation.
Select the Test compile anyway button to continue the operation. A compile error is generated if the
regular probe point depends on the File Scope probe point. Otherwise, it compiles without error. In this
case, a compile error is expected during the test compile because the File Scope probe point is not
included in the test compile. Or select the Cancel button to abort the operation.
A Test Compile button is located in the upper right of the Probe Point Editor and the File Scope
Probe Point Editor. This button performs a test compile of all active probe points in the unit. Note that
deactivated probe points are not compiled.
If you attempt to test compile a unit with a File Scope probe point in an old environment, you will receive
an error message notifying you to re-instrument the unit. See "Using File Scope Probe Points with Older
Environments" on page 266.
A test compile can be performed on a probe point in any status. It can be Not Saved, Not Applied, or
Applied. See "Probe Point Status Buttons" on page 270 for more information.
The test compile process preprocesses the unit, inserts the probe points into the preprocessed file and
compiles the file.
Click the Test Compile button to perform a test compile. Upon successful completion of the compile, a
confirmation dialog is displayed.
Error output from this action is displayed at the bottom of the Probe Point Editor. Clicking the File
button located at the top of the Test Compile Errors pane opens the preprocessed file, enabling you
to diagnose the compile error.
Edits and changes to probe points can be saved by doing any of the following:
> Clicking the Not Saved status button on the Probe Point Editor tab automatically saves and then
applies the probe point by performing an incremental rebuild.
> Clicking the Save button on the Toolbar (which saves the changes in the Probe Point Editor with
current focus).
> Clicking the Save All button on the Toolbar (which saves all modified probe points in all units).
Probe points can be applied to all units by selecting Environment => Reinstrument All Source.
Once applied, the probe point text appears in the instrumented version of the UUT, in the environment
directory.
Applies the probe points. Causes the unit(s) with the probe point to be
reinstrumented.
If Yes is selected, a confirmation dialog is shown for each modified unit, asking whether to save or not.
If No is selected, no changes are saved and the Probe Point Editor for the unit is closed.
If Yes is selected the changes are saved, the Probe Point Editor for the unit is closed, and the affected
units are re-instrumented.
If Cancel is selected, no changes are saved and the Probe Point Editor remains open.
A right-click context menu is provided in the Coverage View pane allowing the user to quickly Remove
All Probe Points, Expand All Subprograms, and Collapse All subprograms.
The keyboard shortcuts Select All (Ctrl+A) and Copy (Ctrl+C) are also available from the context menu
in the Coverage View pane.
A similar right-click context menu is provided in the Probe Points editing pane allowing the user to
Expand All nodes, Collapse All nodes or Remove All Probe Points.
To insert and execute a probe point before the line of code is executed, click on the top text edit box to
activate the editor widget. To insert and execute a probe point after the line of code is executed, click on
the bottom text edit box and activate the editor widget.
Using vcast_probe_print() rather than printf ensures that data will be captured even when
testing an embedded target.
Note that a warning is displayed whenever the printf command is entered in the editor. To turn off the
warning, select the checkbox. To reinstate the warning at any time, from the Menu bar, select View =>
Default Layout.
The vcast_probe_print() functions will auto-complete when you begin typing "vcast_" into the
Probe Point Editor. Select the appropriate function from the drop-down menu.
vcast_probe_print();
This function allows text output from the test to be captured to a file for inclusion
in the Execution Results Report. Only character strings with double quotes are
accepted.
vcast_probe_print_float();
This function allows floating point variable output from the test to be captured to a
file for inclusion in the Execution Results Report. Only floating point values are
accepted.
vcast_probe_print_int();
This function allows integer output from the test to be captured to a file for
inclusion in the Execution Results Report. Only integer values are accepted.
vcast_probe_print_unsigned();
This function allows unsigned int output from the test to be captured to a file for
inclusion in the Execution Results Report. Only unsigned int values are
accepted.
VCAST_DUMP_COVERAGE_DATA(void);
This function writes out the coverage data in the buffer to a file when the probe point
is executed. The default file is TESTINSS.DAT.
Note that if the option "Enable the coverage clear API" (VCAST_ENABLE_DATA_
CLEAR_API) is True, the user can also call VCAST_CLEAR_COVERAGE_DATA
() in a probe point, which clears the buffer when the probe point is executed. Also,
when the option "Dump buffered coverage data on exit" (VCAST_DUMP_
COVERAGE_AT_EXIT) is True, any data remaining in the buffer is written to the
TESTINSS.DAT file when the instrumented application finishes executing.
VCAST_CLEAR_COVERAGE_DATA(void);
This function clears collected coverage data for current execution when called.
Requires VCAST_ENABLE_DATA_CLEAR_API to be defined at compile time.
To deactivate all probe points in all units, select Deactivate All Probe Points from the drop-down
menu next to the Edit Probe Points button on the toolbar. A Confirmation dialog appears to confirm
that you wish to deactivate all probe points. Selecting the No button cancels the deactivation. Selecting
the Yes button deactivates all probe points and performs an incremental rebuild of the environment.
Deactivated probe points cannot be edited. A probe point must be in an active state to edit.
To activate an individual probe point, single-click on the inactive probe point icon . The active probe
point icon is displayed in both panes of the Editor.
To activate all probe points in all units, select Activate All Probe Points from the drop-down menu
next to the Edit Probe Points button on the toolbar. A Confirmation dialog appears to confirm that you
wish to activate all probe points. Selecting the No button cancels the activation. Selecting the Yes
button activates all probe points and performs an incremental rebuild.
To remove all active and inactive probe points from the unit, right-click within either the Coverage
Viewer or the Probe Point Editor and select Remove All Probe Points from the context menu. A
Confirm Remove dialog appears, and selecting the Yes button removes all probe points and displays
the available probe point icon ●.
Note: Selecting "Remove" probe points temporarily removes them from the Probe Point Editor.
"Removing" differs from "Deleting" in that it can be undone by closing the Editor and not saving.
Probe points are not permanently removed (or "Deleted") until the file is saved.
Note: Selecting the Delete All Probe Points option permanently removes all probe points in all
units in the environment, and cannot be undone.
Remove all active and inactive probe points from the environment and
incrementally rebuild.
To initialize Probe Point-only instrumentation, first select a source file in the Source Files pane. From
the Menu Bar, select Coverage => Initialize => Probe Point. Alternatively, right-click on the source
file and select Instrument => Probe Point from the context menu.
A Confirm Result Changes dialog may open, alerting the user to a change in instrumentation type for
the source file. Note that the coverage data in the result files will be removed. To retain any Notes and
Requirements, select the checkbox in the bottom pane. Select the Make Change button to confirm. A
process dialog will appear as the file is instrumented for Probe Points.
In the example below, notice that we can insert variable declarations as well as executable statements
like the call to vcast_probe_print.
Any text printed from a probe point is appended to the end of the Execution Data Report and can be
used as a debugging aid. To access the Execution Data Report after the test result has been added to
the Cover environment, choose the Execution Data Report option from the drop-down menu next to
the Probe Point button on the Toolbar.
In the example below, the routine will never return a negative value, but the return type is defined as
int, so the value can theoretically be either positive or negative. By inserting a probe point we can
cause the function to return the negative value -224 when it is called.
The probe point is saved and applied. Note that when the function is executed it returns a value of -224,
and that the probe point is included in the results file.
In our code example below, note that there is no default case for the switch statement. If the value
returned from readA2D() is any value other than -1, 0 or 1, local is then returned as an uninitialized
variable. This would be a likely source of bugs in the software.
To fix this bug, we insert a probe point that initializes local to a known value of 10 and then verify that
the value 10 is returned when the param is out of bounds. When the test is executed, the results show
that the actual value for readA2D() is 40, which does not match the case list, so the default value of
10 is returned.
Now that the patch is verified, the changed code can be committed.
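The patched behavior can be sketched in plain C. This is a hedged model of the example above: readA2D() is replaced by a hypothetical stub returning 40, the case bodies are illustrative, and the probe point is modeled as an ordinary statement initializing the local to 10.

```c
#include <assert.h>

/* Hypothetical stub standing in for the real readA2D(); returns 40,
   the out-of-range value seen in the example. */
static int readA2D_stub(void) { return 40; }

int patched_case(void) {
    int local = 10;              /* probe point: known default value */
    switch (readA2D_stub()) {
        case -1: local = 0; break;
        case  0: local = 1; break;
        case  1: local = 2; break;
        /* no default: without the probe, local stays uninitialized */
    }
    return local;
}
```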
The Probe Point Listing lists all probe points for the environment. The report includes Saved, Applied,
Dropped, and Deactivated probe points. For each probe point, the associated ID, Unit, Function and
Line of source code are displayed. The Before and After context of the line of code is provided. The full
source code for each probe point is shown, including an indication of whether the probe point is inserted
before or after the source code line.
The Probe Point Output Report lists the Test Results, Addition Time and Output for the probe points.
The Probe Point API is used to export probe point data to third party tools. The functionality is
implemented using CLICAST commands to enable, disable or remove specific probe points.
Probe points are first created (via the GUI or via CLICAST), and then saved and applied. Next, a
regression script is generated by selecting Environment => Create Regression Scripts... from the
Menu Bar.
The probe point file can reside anywhere. In our example, it resides in our environment directory:
C:\VCAST\Examples\environments\qa_demo\cover_demo.
Once the regression script is generated, you will see two files in the selected directory:
> <env-name>.bat
> <env-name>.enc
The probe point data is contained within the <env-name>.enc file. An example .enc file is shown
below.
The user can modify the probe point data contained in the file.
Probe ID Number
Upon creation of a probe point in the GUI or via CLICAST, the probe point is assigned a serial number or
ID. The first probe point is assigned the number 1 by default, unless the probe point was added via
CLICAST using the text:
PROBE_ID: <ID>
positioned after the definition of the probe point in the .env script or probe point file.
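For example, a probe point file entry with an explicit ID might look like the following sketch (the function name is hypothetical; the PROBE_ID line follows the probe point definition):

```
PROBE_LOCATION: FUNCTION
PROBE_FUNCTION: my_function
PROBE_CODE:
vcast_probe_print("*** BEGINNING OF FUNCTION ***\n");
END_PROBE_CODE:
PROBE_ID: 42
```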
Note: If two probe points are given the same ID, the second one encountered is given the next
available ID number that is unique.
The probe point file is user modifiable. For example, the probe point ID can be modified to reflect
existing requirement numbers and improve traceability.
When a probe point is deleted or dropped due to changes in the source code, its ID is also deleted and
can be re-used for another probe point.
Probe point IDs are displayed in the Probe Point Listing Report available from the GUI and in the probe
point XML report generated with CLICAST.
Navigate to the location of the .pp file and select the Open button. After importing, only the affected
source files are reinstrumented and the Probe Point Import Log is displayed.
Adds probe points specified in a .pp file. Requires 'clicast -e <env> cover
PRobe_point APPly' to be run in order to take effect.
Note: When any of the following commands are run, the action is merely scheduled to occur.
The action doesn't take place until the Apply command is run.
If the environment is opened in the GUI before Apply has been performed, it is recommended
that Apply be immediately performed by clicking the Not Applied button in the Probe Point
Editor.
The resulting probe_points.xml file contains a list of probe points that can be parsed and passed to a
third-party tool managing the probe points.
Both the static analysis and run-time verification operate on software "components". A component is a
user-defined collection of source files in one or more directories. DO-178C defines a component as "A
self-contained part, combination of parts, subassemblies, or units that performs a distinct function of a
system." (DO-178C ANNEX B, Glossary)
The intent of Coupling Analysis is to prove that the control and data flow between architectural
components in the implementation match what was intended by the design, and to prove that these
flows have been tested. DO-178B requires applicants to identify couples in the design, and to verify that
those couples, and only those couples, exist in the implementation. DO-178C additionally requires
applicants to verify that the couples have been exercised during functional requirements testing.
VectorCAST's Component Report and Coupling Coverage Report provide this proof.
VectorCAST/Coupling will detect the read and write of data couples and capture calls to control
couples. For data couples, the goal is to capture the order of access to each couple. For control
couples, the goal is to ensure all control couples are tested by recording each time the control couple is
called.
Consider the following example code, where subSystem1.cpp and subSystem2.cpp are single-file
components. In a real application a component would be a collection of many files, possibly
hundreds. Even though this is a simple example, it has data and control couples in both directions.
Using VectorCAST/Coupling
VectorCAST/Coupling works with C and C++ languages. To use the VectorCAST/Coupling tool, a
VectorCAST Cover environment must first be created.
An illustration of the Coupling Verification process is provided below, and is discussed in the following
sections.
A component is generally a user-defined collection of source files and header files. Only source files
and header files can be added to a component, so the smallest component would be a single file. Both
the static analysis and the run-time verification operate on software components. The
VectorCAST/Coupling tool automatically creates a component definition file, components.xml,
which can be modified to match the definition of a component in your software architecture.
There is no specific guidance on how components should be defined, but it should be in line with your
software architecture. Items such as subsystems, configurable items (CSCI) or modules are all
possible choices for a component.
The components.xml file must be created before Coupling Analysis and Coupling coverage. If you
already have a components.xml file from a previous run, this step can be skipped.
Once the Cover environment is created, the components.xml file is generated from the command
line using the following command:
Run the coupling components creation script using the project source files.
Components are defined by the directories in which the Cover environment's source files reside.
The components.xml file is created and stored within the Cover environment's coupling sub-
directory.
Using the directory structure, VectorCAST builds the components.xml file using each directory
name as a component name and using the files within that directory as the elements of the component.
Coupling Analysis requires an existing components.xml file. By default, VectorCAST uses the
components.xml file located in the Cover environment's coupling sub-directory. The user can
provide an alternate file on the command line if desired.
Perform the coupling analysis using the specified components file, or, if no
file is specified, use the file in the environment directory.
If there is not an exact match of source files between the Cover environment and the
components.xml file, an error message is displayed. The user must correct the mismatch in order to
perform the Coupling Analysis successfully. In our example below, the Cover environment contains a
source file that is not in the components.xml file:
------------------------------------------------------------------
ERROR: The following source files are not in any component...
------------------------------------------------------------------
Missing-File:subsystem23.cpp
Two levels of analysis are performed. First, the symbols for each component are analyzed and a
Component Report is generated. Second, the symbol analysis is used to generate a Coupling Coverage
Report. Before proceeding to the next step, which is to apply Coupling to the source files, the reports
should be reviewed by the user to verify that they match the design. For more information on the two
reports, see "Coupling Reports" on page 293.
Add the function calls necessary to gather control coupling data during
execution.
The command builds an instrumentation script file containing the coupling instrumentation for each file,
as well as the coupling data file that must be linked into the instrumented application.
The underlying data structures and the implementation of the couple functions are generated in the file
couplingData.c. The instrumentation functions in this file use the same output mechanisms that
VectorCAST uses in other Cover environments to dump coverage data. If you are configured for
real-time coverage I/O, the ASCII data is generated as each instrumentation point is executed;
otherwise, the ASCII data is generated at the end of the test using the vcastDumpCouplingData() API.
This function must be called when the coverage I/O method is set to buffered. You will either need to
add a call to your main program, or you will need to use the function atexit to register this function to
be called when the application terminates.
If the Cover environment is not using the Append Cover IO feature, then the following instructions
should be followed.
Using a GNU compiler, the compile and link command for our example is:
If this macro is not set (i.e. buffered I/O is being used), the coupling trace data is cached in RAM, and
can be retrieved by calling the function vcastDumpCouplingData(), or by using a debugger to
dump the C array: vcastCouplingData[].
If you are configured for real-time IO, the coupling trace data is exported and shows the order in which
the control couples and data couples are accessed.
If you are configured for buffered IO, you can cause the coverage data to be dumped by calling the
target-side function vcastDumpCouplingData().
If you have a debug connection to your target, you can use this connection to capture the raw data from
the global data object vcastCouplingData, which is defined in the instrumentation file
couplingData.c. In this case, you will need to convert the data from the debugger format to the
"couple-data:0,1" format in order for VectorCAST to perform the import. This can be easily
accomplished with a script with logic similar to the functions vcastDumpCouplingData() and
vcastDumpOneCouplingCell(), which are also part of the couplingData.c file.
Each test case is run and the coupling trace data file (TESTINSS.DAT) is saved for each test case.
The coupling trace data is imported into the Cover environment using the following command:
To open the Coupling Report, select Environment => View => Coupling Coverage Report from the
Menu Bar, or enter the following command:
Generate report listing the data and control couples, and details on which
couples are covered by which test result.
The figure below shows a partial example of a Coupling Coverage Report. For more information on the
full report contents, see "Coupling Coverage Report" on page 295.
Re-analyze Coupling
Coupling information can be re-analyzed when changes occur to the source file. To start a new
Coupling Analysis, select Tools => Coupling => Analyze from the Menu Bar.
Perform the coupling analysis using the specified components file, or, if no
file is specified, use the file in the environment directory.
VCAST_IGNORE_ERROR_WHEN_READ_FOLLOWS_ADDRESS_OF
When this option is enabled, VectorCAST ignores data couple access errors for a read following an
address of operation. For full details on the use of this option, see "Ignore Access Error When Read
Follows Address of" on page 202.
VCAST_TREAT_DC_INITIALIZATIONS_AS_WRITE
When this option is enabled, global data initialization expressions are treated as a write. For full details
on the use of this option, see "Treat Initialization of Data Couple as a Write" on page 201.
Coupling Reports
VectorCAST's set of Coupling Reports provide proof that the control and data flow between
architectural components in the implementation match what was intended by the design, and prove that
these flows have been tested.
VectorCAST/Coupling provides the following reports, which are discussed in detail below:
> Component Report - lists the global data objects and functions for each component.
> Coupling Coverage Report - shows the static analysis and runtime analysis. This report is a high-
level overview of the Coupling coverage in the application.
> Coupling Access Order Full Report - shows the access order by test result for all couples. This
verifies that the architecture was correctly followed for all of the tests.
> Coupling Access Order Error Report - shows the access order by test result only for couples with
errors. This report helps the user to efficiently track down any access errors without having to
parse through all of the test results.
Coupling reports may display the following control couple access types:
Coupling reports also may display the following data couple access types:
Component Report
The Component Report is a text-only list of all the "needed" and "defined" global data objects and
functions for each component. "Needed" means that the object or function is used by the current
component. "Defined" means that the object or function is available to an external component. The
report is a first step in understanding the inter-dependencies for data and control between any two
components.
To generate the Component Report, the components.xml file must already be created.
The Component Report contains four sections: Globals Defined, Globals Needed, Functions Defined,
and Functions Needed for each component in the report.
In the Globals Needed and Functions Needed sections, the filename and line number of the reference is
provided. For Globals Needed, the Report also shows whether the reference requires read, write, or
read/write access.
Note that when compilers mangle function names, the mangled names are displayed for the
functions. Tracking mangled names is the only way to ensure that the analysis is sound when
multiple functions share the same source name.
> Configuration Data - Contains the environment name as well as the date and time of the report
creation
> Summary - Shows metrics of data couples and control couples covered, and the number of data
couple access errors
> Data Couples Summary - A list of the Data Members with their Component, References and
Couples metrics
> Data Couples Detail by Component - A detailed view of each Data Member's components
> Data Couples Detail by Reference - A detailed view of each Data Member's components and
where it was read and written to
> Control Couples Summary - A list of function names along with Components, References and
Couples metrics
> Control Couples Detail by Component - A detailed list of the Function Component name as well as
Components and References metrics
> Control Couples Detail by Reference - A detailed view of each Function name and Component
and location of the Access
> Data Couple Access Errors - A list of data couples where a read access occurred before a write
access
> Function Pointer Analysis - Information on function pointer calls that require user analysis
Note: VectorCAST allows the Coupling Coverage Report to be generated after Coupling
Analysis, even if instrumentation has not been applied.
To open the report, select Environment => View => Coupling Coverage Report from the Menu Bar,
or enter the following command:
Generate report listing all the data/control couples and their references.
To open the report, select Tools => Coupling => Access Order Report => View Full Report from
the Menu Bar, or enter the following command:
Generate the coupling access order full report. This shows the access order
per test result for each data and control couple.
The Coupling Access Order Full Report consists of the following sections:
> Configuration Data - Contains environment name as well as the date and time of the report
creation
> Summary - Shows metrics for test results with access errors and for data couple access errors
> Data Couple Access Errors - Shows details of access errors for data couples
> Control Couples per Test Result - Shows the access order for each control couple. References
with an error are marked.
> Data Couples per Test Result - Shows the access order for each data couple. References with an
error are marked.
A partial example of the Coupling Access Order Full Report is shown below:
To open the report, select Tools => Coupling => Access Order Report => View Error Report from
the Menu Bar, or enter the following command:
Generate the coupling access order errors report. This shows the access order
per test result for only data and control couples with errors.
The Coupling Access Order Error Report consists of the following sections:
> Configuration Data - Contains environment name as well as the date and time of the report
creation
> Summary - Shows metrics for test results with access errors and for data couple access errors
> Data Couple Access Errors - Shows details of access errors for data couples
> Incorrect Control Couples and Data Couples - Shows details of access errors of control and data
couples.
A partial example of the Coupling Access Order Error Report is shown below:
An entry is also added to the Data Couple Access Errors section of the report:
Note that for compound types, there can be an access error when a sub-component of that compound
type (a field or array element) is written and a different sub-component is read. However, if an aggregate
assignment is made to the data couple, then all read accesses of a sub-component are valid.
ERROR: Read before Write
Description: A variable was read before ever being written.
Why it is relevant: Catches invalid/undefined access of variables.

ANALYZE: Read follows Initialization
Description: A variable was initialized and then read, without an explicit write access in between.
Why it is relevant: Catches accesses to variables that may have invalid/incomplete values. It is
common to initialize variables to a default value, like 0, but in many cases these default values may
not be valid when the data is later accessed.
Note: In some code bases, variables may always be initialized to valid values. In this case, flagging these

ANALYZE: Read follows Address of
Description: A possible Read before Write access error occurred; a variable's address was taken
and then the variable was read, without an explicit write access in between. Once a variable's
address is taken, VectorCAST cannot guarantee all further accesses will be reported. The user must
determine if the variable was properly set before the read.
Why it is relevant: Catches invalid/undefined access of variable data.
Function Pointers
Function pointer calls are not tracked for coverage by VectorCAST automatically. To start tracking the
function pointer calls for coverage, manual analysis of the function pointer calls must be done.
In the example, the functions doSomeS1Stuff() and doSomeOtherS1Stuff() are both control
couples between components S1 and S2, and these functions are called via function pointers at lines
196 and 197. The VectorCAST Coupling Coverage report for this example looks like this:
Function pointer calls are listed within the “Function Pointer Analysis” section of the report. Function
pointer calls listed in this section are “un-analyzed”, which indicates that VectorCAST is not tracking
the calls for coverage until manual analysis is complete.
During execution, VectorCAST monitors the control couple established via the function pointer. If the
function pointer calls the control couple, the function pointer call reference within the control couple is
marked as covered. If the function pointer does not call the control couple and calls some other
function, the function pointer call reference within the control couple is not marked as covered.
As an example, consider the following function, which makes two calls via function pointer:
During the normal analysis and instrumentation process for this example, these two locations are
identified and reported in the Function Pointer section of the Coupling Coverage Report (see above
examples).
The next step is to tell the tool which functions can be called by these function pointers. Do this by
providing information to VectorCAST via the following steps:
2. Edit the user-analysis.txt file to update the status for each FPTR line, using one of the
following:
> Ignore - Use if you don't want the FPTR to be tracked. For example, if the function pointer only
calls functions from within the same file.
> Active - Use if you want the FPTR to be tracked. Note that you must provide a list of space-
separated function names for the target of this function pointer. If the function is overloaded, the
mangled name must be used. A complete list of valid function names is created in the file:
master-function-list.xml, located in the coupling sub-directory of the VectorCAST
environment.
4. Once the manual analysis is provided, use the following command to recompute and apply the
coupling instrumentation logic and then re-run your tests.
The Function Pointer Analysis section of the Coupling Coverage Report will now look like this:
The Coupling Coverage Details by Test Result section of the Coupling Access Order Full Report will
now look like this:
Note that the fptr-call on line 196 is marked as covered. As you recall, we modified the analysis file
in our example to set the status as "Active", and the function pointer called the target during execution.
Like a compiler, PC-lint Plus parses your source code files, performs semantic analysis, and builds an
abstract syntax tree to represent your program. From there, PC-lint Plus employs various mechanisms
including Data Flow Analysis, Abstract Interpretation, Value Tracking, read-write analysis, Strong Type
checking, function semantic validation, and many other technologies to provide a robust and holistic
analysis of both individual files and an entire project.
Detailed instructions on how to configure and use PC-lint Plus with VectorCAST are provided in the
VectorCAST PC-lint Plus Integration Application Note AN-ACT-1-010.
The following sections discuss how to run the CodeSonar analysis tool directly from within
VectorCAST.
Configuring CodeSonar
To configure options for CodeSonar, from the Menu Bar, select Static Analysis => Edit Analysis
Tools.... The User-Configured Analysis Tools window opens.
VectorCAST provides a template for configuring the CodeSonar tool. The template provides the paths
to the CodeSonar icon and the CodeSonar.py script which are included in the VectorCAST
distribution. The user provides the unique name and associated arguments.
Short Name    Long Name    Required    Flag    Help
Select the Add button to add the CodeSonar configuration to the list.
Select the Menu check box and select the Apply button to display the name and associated icon for the
tool in the Menu Bar. Select the Tool Bar checkbox and select the Apply button to display the name and
associated icon for the tool in the Toolbar.
Select the OK button to complete the configuration and close the User-Configured Analysis Tools
window.
To run CodeSonar, select Static Analysis => <Tool Name> => Analyze from the Menu Bar.
Alternatively, either select the CodeSonar button from the Toolbar or select Analyze from
the CodeSonar drop-down menu.
Analysis can also be run by right-clicking on a source file in the Project Tree and selecting Analyze
Source => <Tool Name> from the context menu.
Selecting the Analyze option calls the command to the CodeSonar executable and runs the analysis on
the CodeSonar server. Upon completion, the results are returned to VectorCAST and the Analysis
Results window opens displaying the results.
Alternatively, select the CodeSonar button from the Toolbar and select View Analysis from
the CodeSonar drop-down menu.
Analysis Results can also be viewed by right-clicking on a file in the Project Tree and selecting View
Analysis => <Tool Name> from the context menu.
In the example below, the results generated with our custom CodeSonar tool for the file lapi.c are
displayed in the Analysis Results window.
Hovering over the error description provides a tool tip with detailed information on the error. Click on the
error to view the analysis details in the right pane. The source code automatically jumps to the location
of the error in the code.
A hyperlink is provided in the analysis details pane which opens CodeSonar at the location of the error
analysis, allowing you to view lower level detail of the error.
To configure options for a Generic Analysis Tool, from the Menu Bar select Static Analysis => Edit
Analysis Tools.... The User-Configured Analysis Tools window opens.
Any currently configured Analysis tools are listed in the top pane. In our example above, the "Example
Analysis" tool has been pre-configured for us and is listed in the top pane. Select the Menu checkbox
and select the Apply button to display the name and associated icon for the tool in the Menu Bar.
Select the Tool Bar checkbox and select the Apply button to display the name and associated icon for
the tool in the Toolbar.
To configure a new Analysis Tool, enter the following information in the Attributes pane. To easily input
long data entries, use the Expanded Text Editor. The Expanded Text Editor is opened by clicking on the
button displayed in the active text entry field.
The custom script processes the list of files to be analyzed and converts the results from the analysis
tool into an XML format which VectorCAST can read.
The generic-analysis_<tool-name>.xml file is passed into the executable script and is parsed
to create a file to be passed to the Analysis Tool. In our example, the information from the generic-
analysis_Example_Analysis.xml file is used to create a .lnt file, vcast_Example_
Analysis_filelist.lnt.
Note that the output of the Analysis Tool is processed into a format which VectorCAST can read.
The output name of your analysis script should adhere to the name attribute of the generic-analysis
node, as per the input XML file. That is, the name should match vcast_<attribute-
value>.xml, where the attribute value comes from the input XML file.
<issue>
<file line='31' column='0'>C:\64t\tutorial\c\manager.c</file>
<message id='714'>Symbol 'Place_Order(unsigned short, unsigned short, struct
order_type)' (line 31, file C:\64t\tutorial\c\manager.c) not
referenced</message>
</issue>
...
Alternatively, either select the Generic Analysis button from the Toolbar or select Analyze from
the Generic Analysis drop-down menu.
Analysis can also be run by right-clicking on a source file in the Project Tree and selecting Analyze
Source => <Tool Name> from the context menu.
Selecting the Analyze option calls the command to the Analysis Tool executable. The Analysis Tool
extracts the "name" attribute from the generic-analysis_<tool-name>.xml file and generates
the output file (vcast_<tool-name>.xml).
Alternatively, select the Generic Analysis button from the Toolbar and select View Analysis from
the Generic Analysis drop-down menu.
Analysis Results can also be viewed by right-clicking on a file in the Project Tree and selecting View
Analysis => <Tool Name> from the context menu.
VectorCAST uses the results file, vcast_<tool-name>.xml, located in the environment directory,
to display the Analysis Results. In the example below, the results generated with our custom Example
Analysis tool for the file manager.c are displayed in the Analysis Results window.
1. Create a docs directory in the same directory as the analysis script you wish to run (e.g., if your
script is /path/to/script.py, then you should create /path/to/docs). For the distributed
Generic Analysis example, this directory would be:
$VECTORCAST_DIR/StaticAnalysisTools/docs
2. In the newly-created docs directory, create a sub-directory called html or text (both may be
added).
For the distributed Generic Analysis example, these directories would be:
$VECTORCAST_DIR/StaticAnalysisTools/docs/html
$VECTORCAST_DIR/StaticAnalysisTools/docs/text
3. For each Issue ID, create html and/or txt files for the issue.
To create a custom message for Issue 736, create one or both of these files:
$VECTORCAST_DIR/StaticAnalysisTools/docs/html/736.html
$VECTORCAST_DIR/StaticAnalysisTools/docs/text/736.txt
Note: VectorCAST always attempts to use the html directory first. If the html directory is not
found, VectorCAST looks for the text directory. If neither directory is found, VectorCAST uses
its default msg.txt file.
Note: You must have a graphics card that supports CUDA. For more information on the CUDA
standard, see www.nvidia.com/cuda.
The source file is then processed multiple times: once for each device (GPU) architecture, and once for
the host (CPU) architecture. These code variants are passed to the underlying preprocessor/compiler
with additional compilation flags to specify which variant is being compiled. In particular, nvcc sets the
__CUDA_ARCH__ macro to force processing for a particular device (the absence of this macro
indicates host processing). The resulting object files are then linked together into a "fat binary" that
contains object code for the host processor as well as multiple GPU architectures.
This process is typically handled by nvcc automatically, with little or no need for the developer to
understand the mechanism. However, because each variant may contain different code (and thus
different code coverage), VectorCAST must treat each architecture variant as an individual source file
with an independent coverage set. This is accomplished by a set of Python scripts that are used to
prepare (split) CUDA source for processing by VectorCAST. Prepared code variants are added to a
VectorCAST project with unit options that mirror nvcc’s internal pre-processing logic. This ensures that
VectorCAST’s instrumentation matches the pre-processing mechanism employed by nvcc.
1. Copy the original source code once for each architecture variant.
- There will be one variant for each GPU device architecture and one for the CPU host.
- Variants may be specified manually as script arguments, or can be derived automatically from
compiler build options (e.g. -gencode flags).
2. Replace the original source file with an aggregator that uses #ifdef and #include logic to
selectively include each variant copy during compilation.
- This aggregator is designed such that pre-processing the aggregator produces equivalent source
code to the original.
- The original source is backed up prior to replacement.
Once this preparation is performed, each variant copy is added to a VectorCAST project with specific
unit options. These options direct VectorCAST and the underlying nvcc pre-processor to process the
variant for just a single architecture.
The overall effect is that the VectorCAST project will contain multiple variant copies of the original
source, and each variant is processed for just one host/device architecture. This results in a single
VectorCAST project that can track and report coverage from all architectures simultaneously.
Building
After CUDA source has been prepared with the VectorCAST CUDA processing scripts, the aggregator
file may be built using the original build system. The aggregator file resides at the original source
location; no changes to the build system are typically necessary. Note, however, that if you do not elect
to append the c_cover_io.c source to one of your host-side source files (either a host variant of a
CUDA source file, or a C or C++ file) then the build system will need to be updated to compile and link
that additional source.
Partial Instrumentation
The aggregator unit is configured such that it may be built at any time, regardless of whether the source
code variants have been instrumented or not.
Architecture variants may be independently instrumented without interfering with one another. If one or
more variant copies have been instrumented, that instrumentation will be included in the object code for
the associated architecture.
For example, if source.cu is built for CUDA architectures 3.5, 5.3 and 7.0 (as well as the CPU host),
any combination of the four variant versions of the source may be instrumented without issue. During
runtime, you will only receive code coverage data for execution on instrumented architectures.
This example and its associated file (KB_1476_cuda-demo.zip) are located online in the Vector
KnowledgeBase article VectorCAST CUDA Example.
1. Review and update setup_env.sh. This script contains environment variables and settings that
configure the example for your specific environment.
Verify that VECTORCAST_DIR and VECTOR_LICENSE_FILE are set appropriately for your
system to access VectorCAST tools, and the CUDA_PATH variable is set to the base of your
CUDA tools installation.
The example is configured to produce code for GPUs with compute capabilities for CUDA
architectures 3.5, 5.3, and 7.0. If your system's GPU uses a different architecture, then adjust the
SMS variable appropriately.
If you are executing on a remote ARM target (for example, using a Jetson device), then set
TARGET_ARCH appropriately.
2. Execute the run_example.sh script to build and run the example. The script is set to provide
detailed logging, and stop on any failures. If there are any issues, please review the execution log
and script to determine the source of the failure.
1. Create a directory named example in the current working directory. All subsequent files will be
created within example. (To clean the example, simply remove the example folder.)
2. Execute (source) setup_env.sh to import environment settings.
3. Create a CCAST_.CFG settings file.
4. Prepare build directory with source files.
5. Execute the build with the vcshell utility to capture build commands.
6. Locate and prepare CUDA source files using the CUDA workflow scripts (located in
$VECTORCAST_DIR/DATA/cuda).
7. Create a coverage project for the prepared source variants.
8. Apply per-file unit options, coverage I/O library appending, and coverage capture probe points.
9. Instrument included source files.
10. Build the instrumented binary.
11. Execute the instrumented image.
12. Retrieve and import coverage results.
13. Produce a coverage report.
After the script completes, you may open the created coverage project (*.vcp) to inspect results,
review the configuration, etc.
Jenkins is an extensible, open source automation server for continuous integration and continuous
delivery for any application. Two plugins are available from the Jenkins website for use with
VectorCAST projects: VectorCAST Execution and VectorCAST Coverage.
The VectorCAST Execution plugin allows the user to create, delete and update jobs to build and run
VectorCAST projects. Jobs can be created as a single job or split into multiple jobs for a VectorCAST
project, with one job for each environment and an overall job to combine the results. The plugin adds a
new top-level menu item to the Jenkins sidebar that provides job control for VectorCAST projects. To
learn more about the Execution plugin, visit the Jenkins website at
https://plugins.jenkins.io/vectorcast-execution.
The VectorCAST Coverage plugin allows the user to capture code coverage reports from VectorCAST
projects. The VectorCAST Coverage plugin is added as a dependency to the VectorCAST Execution
plugin and is automatically used to display coverage data. Jenkins automatically generates the trend
report of coverage and displays a coverage trend graph at the top page of a job. To learn more about the
Coverage plugin, visit the Jenkins website at https://plugins.jenkins.io/vectorcast-coverage.
Index

*
  indicating a statement covered 125
.enc 83
ADA_EXTENSIONS 227
adding multiple test results 117
aggregate coverage report
  instantiations 203
allowlist directory 62, 71
animated control flow 149
animation coverage type 188
animation play speed 192
apply coverage to source tree 41
  benefits 41
  changing option 43
  how it works 41
  strategies 41
apply instrumentation to source tree
  always option 42
  during build option 42
  never option 43
assembler command 184
ASSEMBLER_CMD 184
assembly file extensions 176
auto naming multiple test results 117
AUTO_NAME_RESULTS 225
background color for failing cells 236
background color for partially passing cells 236
background color for passing cells 235
background colors
  changing for coverage output 223
base directory
  add 59, 68
  allowlist 62, 71
  include headers 63, 72
  missing 91
  move 91
  path 61, 70
  remove 61, 70
  rename 60, 69
  working with 91
basis path
  viewing for subprogram 136
basis path analysis 137
basis path output
  building test cases from 137
basis paths coverage 101
Branch coverage 101
breakpoints
  removing 152
  setting in coverage animation 152
buffered coverage I/O 188
building environment, in wizard 65, 74
C compile command 182
C file extensions 176
C parser flags 182
C preprocessor command 181
C standard library
  link error using 165
C_ALT_COMPILE_CMD 182
C_ALT_EDG_FLAGS 182
C_ALT_PREPROCESS_CMD 181
C_COMPILE_CMD 162
C_COMPILE_CMD_FLAG 182
C_COMPILE_EXCLUDE_FLAGS 183
C_COMPILER_CFG_SOURCE 184
C_COMPILER_FAMILY_NAME 184
C_COMPILER_OUTPUT_FLAG 162
C_COMPILER_PY_ARGS 183
C_COMPILER_TAG 160
C_DEBUG_CMD 173
C_DEBUG_HELP_FILE 183
VCAST_DUMP_BINARY_COVERAGE 199
VCAST_DUMP_COVERAGE_AT_EXIT 190
VCAST_EMPTY_STATEMENTS_COVERABLE 220
VCAST_KEEP_INSTRUMENTATION_INTERMEDIATE_FILES 212
VCAST_MAX_CAPTURED_ASCII_DATA 189
VCAST_MAX_COVERED_SUBPROGRAMS 191
VCAST_POST_PREPROCESS_COMMAND 180
VCAST_PREPROCESS_PREINCLUDE 179
VCAST_RPTS_COVERAGE_RESULT_COLUMN_WIDTH 239
VCAST_SIMPLIFIED_CONDITION_COVERAGE 207
VCAST_SORT_METRICS_RPT_BY_DIR 228-229
VCAST_UNCOVERED_LINE_INDICATOR 192
choosing between C and C++ 160
compiler, setting in wizard 57, 66
configure binary coverage 198
coupling access order error report 299
coupling access order error report log 304
coupling access order full report 298
coupling access order full report log 304
changing the type of 107
compiling instrumented source files 111
customizing c_cover_io for your target 113
MC/DC
  View Equivalence Matrices 139
treat c++ catch blocks as branches 199
troubleshooting
  error during uninstrumenting 109
  exit is undefined 165
undefined symbol VCAST_exit 165
VCAST_COVER_SEPARATE_TYPES_FILES 226
VCAST_COVERAGE_FOR_DECLARATIONS 200
VCAST_COVERAGE_IO_ANIMATION 188
VCAST_COVERAGE_IO_BUFFERED 188
VCAST_COVERAGE_IO_REAL_TIME 188
work flow
  instrument coverage 22
  instrument for coverage 22
  launch vectorcast 16
  open environment 22
  preliminary setup 15
  run a single test interactively 30
  set coverage type 22
  using the lua example 14
  view coverage results 32
working directory 85
  invalid 55
  remembering last 55
  setting 55
  uses of 55
  when starting from command line 55
  when starting from Start menu 55
wrap text in Notes 239
writable checkbox
  viewing source code 90