-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at http://mozilla.org/MPL/2.0/.
--
-- Copyright (c) 2015, Lars Asplund lars.anders.asplund@gmail.com

-------------------------------------------------------------------------------
-- This is an example testbench using UVVM together with VUnit runner, check,
-- and log functionality. It is written with UVVM users in mind, so the comments
-- focus on describing VUnit behaviour. However, the information is kept rather
-- short; for more information you should read the user guides, primarily
--
-- * user_guide.md under the VUnit root
-- * <VUnit root>/vhdl/check/user_guide.md
-- * <VUnit root>/vhdl/logging/user_guide.md
--
-- The user guides are Markdown documents. If you don't have a Markdown viewer
-- you can read the rendered versions on https://github.com/LarsAsplund/vunit
--
-- For simplicity there is no DUT in this testbench; the focus is on describing
-- UVVM and VUnit integration.
--
-- The testbench can be run directly with your simulator like a "traditional"
-- UVVM testbench, but the preferred way is to run it with the VUnit run script,
-- run_uvvm.py, in the parent directory of this file (<root>). There are some
-- differences in behavior between the two modes. These are described at the
-- end of this file (see Running w/wo Script).
--
-- In this testbench UVVM logging/reporting calls like
--
--   log("\nChecking Register defaults"); -- progress report
--   report_alert_counters(FINAL); -- final summary report
--
-- have been excluded. The reason is that run_uvvm.py handles reporting for you.
-- What run_uvvm.py reports and what you can add to the VHDL code yourself is
-- described at the end of this file (see VHDL and Python Reporting).
--
-- When using VUnit with UVVM there is a risk of name collisions when using the
-- warning, error, and failure procedures. These procedures are not used in this
-- testbench, but a solution to this potential problem is given at the end of
-- this file (see Handling Name Collisions).
-------------------------------------------------------------------------------

library vunit_lib;
context vunit_lib.vunit_context;

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

library uvvm_util;
context uvvm_util.uvvm_util_context;

entity tb_uvvm_integration is
  generic (
    -- This generic is used to configure the testbench from run_uvvm.py, e.g. what
    -- test case to run. The default value is used when not running from script
    -- and in that case all test cases are run.
    runner_cfg : runner_cfg_t := runner_cfg_default);
end entity tb_uvvm_integration;

architecture test_fixture of tb_uvvm_integration is
  signal output_data : unsigned(9 downto 0) := to_unsigned(144, 10);
  signal grant : std_logic_vector(1 downto 0) := "10";
begin
  test_runner: process is
    variable expected_data : integer;
  begin
    -- Set up the VUnit runner with the input configuration.
    test_runner_setup(runner, runner_cfg);

    -- To prevent log files from different test cases (run in separate
    -- simulations) from overwriting each other, run_uvvm.py provides separate
    -- test case directories through the runner_cfg generic
    -- (<root>/vunit_out/tests/<test case name>). When not using run_uvvm.py the
    -- default path is the current directory (<root>/vunit_out/<simulator>).
    -- These directories are used by VUnit itself, and these lines make sure
    -- that UVVM does too.
    set_log_file_name(join(output_path(runner_cfg), "_Log.txt"));
    set_alert_file_name(join(output_path(runner_cfg), "_Alert.txt"));

    -- The default behavior for VUnit is to stop the simulation on a failing
    -- check when running from script, but to keep on running when running
    -- without script. The rationale for this, and how you can change the
    -- behavior, is described at the bottom of this file (see Stopping the
    -- Simulation on Failing Checks). The following if statement makes UVVM
    -- checks behave in the same way.
    if not active_python_runner(runner_cfg) then
      set_alert_stop_limit(ERROR, 0);
    end if;

    -- The VUnit runner loops over the enabled test cases in the test suite.
    -- When using run_uvvm.py only one test case is enabled in each simulation.
    while test_suite loop
      -- Each test case is defined by a branch in the if statement. This test
      -- suite has four test cases, two using VUnit checking and two using UVVM
      -- checking.
      if run("Test data path with VUnit") then
        expected_data := 13 ** 2;
        -- check_equal is one of VUnit's roughly 15 check types. It checks the
        -- equality of two values and is similar to UVVM's check_value. The
        -- difference is that check_equal handles equality between different but
        -- commonly compared types. With preprocessor support VUnit can also
        -- handle any relation between values of any type. See check_relation
        -- in the check user guide for more info.
        check_equal(output_data, expected_data, "Data path error.");
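        -- As a sketch of what that could look like with the check preprocessor
        -- turned on from run_uvvm.py (kept as a comment here so this testbench
        -- doesn't depend on the preprocessor being enabled):
        --
        --   check_relation(output_data = to_unsigned(expected_data, output_data'length));
        --
        -- The idea is that, on failure, the preprocessor lets check_relation
        -- report the values of both operands without you spelling them out in
        -- the error message.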
      elsif run("Test bus status with VUnit") then
        -- check is the VUnit integrated version of the assert statement in
        -- VHDL. It simply checks if a boolean expression is true.
        check(grant /= "11", "Must not grant simultaneous access.");
      elsif run("Test data path with UVVM") then
        expected_data := 13 ** 2;
        check_value(output_data, to_unsigned(expected_data, output_data'length), ERROR, "Data path error.");
      elsif run("Test bus status with UVVM") then
        check_value(grant /= "11", ERROR, "Must not grant simultaneous access.");
      end if;
    end loop;

    -- Clean up VUnit. The UVVM error status is imported into VUnit at this
    -- point. This is necessary when the UVVM alert stop limit is set such that
    -- UVVM doesn't stop on the first error. In that case VUnit has no way of
    -- knowing the error status unless you tell it.
    test_runner_cleanup(runner, get_alert_counter(ERROR) > 0);
    wait;
  end process test_runner;
end;

-- Running w/wo Script
-- ===================
--
-- The default behaviour when running your testbench with the run_uvvm.py script
-- is to run each test case in a separate simulation, which means that each test
-- case is free from interference from other test cases. So when a test case
-- fails you know that the root cause is within that test case and not a side
-- effect of previous test cases. Also, with independent test cases you can run
-- selected test cases of a test suite (see python run_uvvm.py -h), test cases
-- can be run in parallel on many cores to reduce test time (see the -p option),
-- and the risk of having to change many test cases just because you wanted to
-- change one is reduced. A command-line sketch is given at the end of this
-- section.
--
-- There's a small sub-second overhead associated with each test case when run
-- this way. If that overhead matters to you, you can override the behaviour
-- with the run_all_in_same_sim pragma. See the last line of
-- vhdl/com/test/tb_com.vhd for an example.
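--
-- As a rough sketch of that command-line usage (option spellings are those of
-- the VUnit command-line interface at the time of writing; python run_uvvm.py -h
-- lists the authoritative set, and the full test names carry a library and
-- testbench prefix decided by run_uvvm.py):
--
--   python run_uvvm.py --list                 # list all test cases
--   python run_uvvm.py "*Test data path*"     # run only the matching test cases
--   python run_uvvm.py -p 4                   # run up to 4 simulations in parallel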


-- Handling Name Collisions
-- ========================
--
-- Calling any of the three procedures warning, error, and failure with a
-- single string parameter causes a compile error since definitions are
-- available from both VUnit and UVVM, making the call ambiguous. VUnit
-- provides these as convenience procedures for creating log entries at the
-- given severity level. Writing
--
--   error("Something is wrong");
--
-- is equivalent to
--
--   log("Something is wrong", error);
--
-- Although a failing check can generate such a message, the error call isn't
-- the same thing. It's just an error message: it doesn't affect error
-- statistics, it doesn't go into the error log that checks may use, and so on.
-- If that's what you want, use the unconditional check
--
--   check_failed("Something is wrong");
--
-- Because of this the VUnit warning, error, and failure procedures aren't
-- normally found in a testbench.
--
-- UVVM handles this differently; there, the error call above is a convenience
-- procedure for UVVM's unconditional check
--
--   alert(ERROR, "Something is wrong");
--
-- As such it's more likely to be used in a UVVM style testbench. If that is
-- the case there are a number of options. For example,
--
-- * Don't use error, use alert instead
-- * Use the selected name uvvm_util.methods_pkg.error (sketched below)
-- * Make a shorter alias like uvvm_error of the selected name (sketched below)
-- * Instead of using vunit_context, use vunit_run_context. This lets you
--   create a VUnit style testbench that can be automated by run_uvvm.py, but it
--   doesn't give access to VUnit check and log functionality, so there are no
--   name collisions.
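--
-- A minimal sketch of the second and third options. Note that an alias of a
-- subprogram needs a signature matching the actual parameter profile of error
-- in methods_pkg, so the [string, string] profile below is an assumption to be
-- checked against your UVVM version:
--
--   -- Selected name, no extra declaration needed:
--   uvvm_util.methods_pkg.error("Something is wrong");
--
--   -- Alias declared in the testbench's declarative part:
--   alias uvvm_error is uvvm_util.methods_pkg.error [string, string];
--   ...
--   uvvm_error("Something is wrong");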


-- Stopping the Simulation on Failing Checks
-- =========================================
--
-- Whether or not to stop a simulation on VUnit-detected errors is controlled
-- by the global stop level and the severity of a failing check. The simulation
-- stops if severity >= stop level. The default stop level is failure and the
-- default severity of a check is error, so by default a failing check doesn't
-- stop the simulation. HOWEVER, the test_runner_setup procedure changes the
-- stop level to error IF the testbench is called from script (it knows this
-- from runner_cfg). So when running from script it will stop on error.
--
-- The reason for this behavior is that the goal of a run is to find out ALL
-- the passing/failing test cases in the test suite. The only way to do this
-- without scripting is to keep running on an error unless the severity of that
-- error is such that there is no point in trying to proceed. Severity level
-- failure is intended to express that level of severity, which means that the
-- default error level keeps the simulation running. The major drawback of this
-- approach is that it may be hard to prevent the error state of a failing test
-- case from causing secondary effects in subsequent test cases and a
-- misleading pass/fail report at the end.
--
-- The Python scripting default behaviour is to restart the simulation after a
-- stop caused by a failing check and then continue with the next test case,
-- which means that it can achieve the goal of a complete pass/fail report and
-- at the same time prevent error state propagation. So stopping on error
-- severity is a good strategy when running from script.
--
-- However, if you want the simulation to stop when running without script as
-- well, you can do that by changing the stop_level like this
--
--   checker_init(stop_level => error);
--
-- or the default severity of checks like this
--
--   checker_init(default_level => failure);
--
-- or by raising the severity of specific checks with an extra parameter to the
-- call:
--
--   check(foo = bar, "Expected foo to equal bar", failure);
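--
-- For example, to stop on error regardless of how the testbench is launched,
-- a sketch mirroring the UVVM if statement earlier in this file (placed after
-- test_runner_setup) could look like this:
--
--   if not active_python_runner(runner_cfg) then
--     -- Running without script: raise the stop level so a failing check
--     -- halts the simulation here as well.
--     checker_init(stop_level => error);
--   end if;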


-- VHDL and Python Reporting
-- =========================
--
-- Preferred VUnit usage is to do the "simulate everything" runs using the
-- run_uvvm.py script. The script reports which test case is running, whether it
-- passed or failed, the error message and call stack if it failed, and a
-- summary at the end. If you have a failing test case you can start it in the
-- GUI for debugging (see run_uvvm.py -h). When running a single test case in
-- the GUI there is no need for progress and summary reporting.
--
-- If you still want the VHDL code to generate this kind of information you can
-- enable the embedded runner_trace_logger and filter out everything but info
-- messages to get progress reporting for the currently running test case. Read
-- the log user guide for details on how to do this. A final report can be
-- created using the get_checker_stat function; read the check user guide for
-- more information. A sketch is given below.
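--
-- A minimal sketch of such a final report, placed just before
-- test_runner_cleanup. It assumes the to_string overload that the check
-- library provides for its statistics type; check the user guide for the exact
-- name in your VUnit version:
--
--   report "Final check statistics: " & to_string(get_checker_stat);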