Verification

Introduction

Verification of a function can often be as time-consuming as its development; it is therefore important to have tool support that allows as much as possible to be automated and reused throughout the process.

A commonly accepted theory is that the cost of fixing a bug rises exponentially as the project progresses, which is why performing verification as early as possible is imperative in order to lower the risk of bugs surviving until the final release.

It is also easier to troubleshoot issues when functions are tested at each step of the model-based development process. An example of such a process is documented here in steps 2 through 8.

Within model-based development and testing, the model is the center of attention: partly the model of the function being developed and, even more importantly, the model of the plant to be controlled by that function. The plant model can be more or less complex depending on the task at hand. In some cases a set of parameters may be enough; in others, advanced dynamics and logic models may be necessary to produce correct feedback behavior.

With a well-designed model-based process, it is possible to use the same plant model throughout the entire toolchain, perhaps with only small changes to accommodate individual subfunctions.

Model-In-the-Loop Simulation

Once an initial model of the algorithm or control system has been created, it must be verified that the model operates as intended and satisfies all function requirements.

The modeling environment normally provides the ability to simulate the model; however, performing a simulation requires some form of stimulus to the model. In some cases supplying parameters is enough, but in many cases a feedback system is also needed, in which a model of the system to be controlled, together with any related auxiliary functions, is simulated. This latter model, which describes the behavior of the system to be controlled, is usually called a plant model.

By using a simulation environment and test execution tools which support standardized interfaces, such as ASAM XIL-API, it is possible to reuse all tests developed throughout the entire development process. Because different modeling environments have different strengths and weaknesses, it is also advantageous to be able to use models from different tools, for example through the Functional Mock-up Interface (FMI) standard, and then connect these for integration in a common simulation.
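
As a concrete illustration of tool coupling via FMI, the Python sketch below simulates a plant model packaged as an FMU using the open-source FMPy library. The FMU file name and the variable names are placeholders, not taken from any specific tool.

    # Minimal FMI simulation sketch using the open-source FMPy library.
    # 'plant_model.fmu' and the variable names are placeholders.
    from fmpy import simulate_fmu

    result = simulate_fmu(
        'plant_model.fmu',                   # plant model exported from an FMI-capable tool
        stop_time=10.0,
        start_values={'load_torque': 5.0},   # hypothetical model parameter
        output=['shaft_speed'],              # hypothetical output variable
    )

    # 'result' is a structured array with a 'time' column plus the requested outputs.
    for t, speed in zip(result['time'], result['shaft_speed']):
        print(f'{t:6.3f} s -> {speed:8.3f} rad/s')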

A simulation where the algorithm or function being developed is represented by a model is commonly called a Model-In-the-Loop (MIL) Simulation.
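
In its simplest form, a MIL simulation is nothing more than a loop in which the controller model and the plant model feed each other. The sketch below closes such a loop around a PI controller and a first-order plant; all gains and dynamics are invented purely for illustration.

    # Minimal MIL-style closed loop: a PI controller model driving a
    # first-order plant model. All numbers are illustrative.
    dt, t_end = 0.01, 5.0
    kp, ki = 2.0, 1.0        # controller gains (assumed)
    tau, gain = 0.5, 1.0     # plant time constant and static gain (assumed)

    setpoint, y, integral, t = 1.0, 0.0, 0.0, 0.0
    while t < t_end:
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral    # controller model output
        y += dt * (gain * u - y) / tau    # plant model: tau*dy/dt = gain*u - y
        t += dt

    print(f'output after {t_end} s: {y:.4f} (setpoint {setpoint})')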

Software-In-the-Loop Simulation

When code is automatically generated in a modeling environment, it is meant to reproduce the behavior of the model it was generated from. This does not necessarily mean that model and code will always display identical behavior, which is why it is also necessary to perform simulations using the generated code. Such a simulation is called a Software-In-the-Loop (SIL) Simulation and is performed against the same plant model used in the MIL simulation.

During MIL simulations, one normally uses floating-point implementations throughout in order to introduce as few error sources as possible. However, the target hardware might not have a floating-point unit, in which case the results of the SIL simulation are subject to scaling into fixed-point data-type intervals; if this scaling is not properly configured, it can lead to large deviations from the intended behavior.

Even if the target hardware has a floating-point processor, fixed-point types are often used extensively for calibration parameters and look-up tables in order to reduce memory usage. The code may also use external computation libraries that differ from those used in the model.
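
To make the effect concrete, the sketch below quantizes a value into a signed 16-bit Q4.12 fixed-point format (an arbitrary choice made for illustration) and prints the resulting error:

    # Sketch of the quantization error a fixed-point representation introduces.
    # A signed 16-bit Q4.12 format (scaling 2**-12) is assumed for illustration.
    SCALE = 2 ** 12

    def to_q4_12(x: float) -> int:
        """Quantize a float to signed 16-bit Q4.12, saturating at the type limits."""
        return max(-2 ** 15, min(2 ** 15 - 1, round(x * SCALE)))

    def from_q4_12(raw: int) -> float:
        return raw / SCALE

    value = 3.14159265
    quantized = from_q4_12(to_q4_12(value))
    print(f'float: {value:.8f}  fixed: {quantized:.8f}  error: {value - quantized:.2e}')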

To find out how, for example, a fixed-point implementation affects the behavior, it is advisable to simulate the code (SIL) and compare the result to the simulation of the model (MIL). Such tests are usually referred to as “back-to-back” tests. Here the floating-point model serves as the reference, which is a great advantage: it makes it easier to identify potential problems and opportunities for optimization.
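
A back-to-back comparison can be as simple as checking the worst-case deviation between two equally sampled traces against a tolerance. In the sketch below, the dummy NumPy arrays stand in for recorded MIL and SIL outputs; the tolerance is an assumed project-specific value.

    # Back-to-back comparison sketch: the MIL (floating-point) trace is the
    # reference against which the SIL trace is checked.
    import numpy as np

    def back_to_back(mil_trace, sil_trace, abs_tol=1e-3):
        """Return True if the SIL trace stays within abs_tol of the MIL reference."""
        worst = np.abs(mil_trace - sil_trace).max()
        print(f'worst-case deviation: {worst:.3e} (tolerance {abs_tol:.3e})')
        return worst <= abs_tol

    # Dummy traces standing in for recorded simulation outputs:
    mil = np.sin(np.linspace(0.0, 1.0, 100))
    sil = mil + 2.4e-4 * np.sign(mil)       # emulated quantization error
    assert back_to_back(mil, sil)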

Processor-In-the-Loop Simulation

In order to verify that the compiler and processor do not introduce any errors in behavior, it is also possible to integrate a processor evaluation board into the simulation loop.

The code is then executed on the target processor, while the same plant model as in the MIL and SIL cases is still simulated on the PC. This type of simulation is called a Processor-In-the-Loop (PIL) Simulation.

Performing PIL simulations also provides valuable information on the function's memory consumption, execution times, and stack utilization.

Integration Testing

When multiple functions are developed independently, it is necessary not only to test them individually but also to test their integration. Instead of waiting until they are implemented on the target hardware, this integration can initially be done on the PC.

Performing a simulation of the integrated code on a PC provides a virtual validation of the control unit. This is a key component of continuous integration, an ongoing process in which the impact of every function update is tested.
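
As an illustration of how such a virtual validation step can be wired into a continuous-integration job, the Python sketch below discovers and runs a test suite against the PC-based integration and fails the job on any failing test; the integration_tests folder name is an assumption.

    # Sketch of a CI step exercising the PC-integrated (virtual) control unit.
    # The 'integration_tests' folder is an assumed location for the test suite.
    import sys
    import unittest

    suite = unittest.defaultTestLoader.discover('integration_tests')
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)   # non-zero exit fails the CI job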

Choosing a platform which supports standardized interfaces, e.g. ASAM XIL-API, enables the reuse of tests developed for other platforms.

Some advantages of performing integration tests in a virtual environment:

  • Integration tests can be run before the target hardware is available
  • Decoupling from hardware details such as drivers and electrical interfaces
  • Faster-than-real-time execution
  • Excellent debugging possibilities

Hardware-In-the-Loop Simulation

Once the integration-tested code has been implemented on the target hardware, it is time for the final verification. Parts of this verification are certainly performed with the intended plant connected, but there are many advantages to substituting a laboratory environment for the real plant, where the rest of the machine is also simulated. This type of simulation is called a Hardware-In-the-Loop (HIL) Simulation.

HIL simulations make it possible to create automated tests that can be reused for different software and hardware versions. Here it is possible to test both functionality and electrical interfaces.

A large part of the software commonly consists of diagnostics and, in order to verify their behavior, errors must be injected. Examples of such errors are short circuits, cut cables, or incorrect values.
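
As a sketch of what automated error injection can look like, the snippet below models a fault-insertion unit sitting between the simulator and the ECU. The FaultBoard API is hypothetical; real HIL systems expose similar, vendor-specific calls.

    # Fault-injection sketch for diagnostics testing on a HIL simulator.
    # The FaultBoard class is a hypothetical stand-in for a fault-insertion unit.
    from enum import Enum

    class Fault(Enum):
        SHORT_TO_GROUND = 1
        SHORT_TO_BATTERY = 2
        OPEN_CIRCUIT = 3          # cut cable

    class FaultBoard:
        def inject(self, channel: str, fault: Fault) -> None:
            print(f'injecting {fault.name} on {channel}')

        def clear(self, channel: str) -> None:
            print(f'clearing faults on {channel}')

    board = FaultBoard()
    board.inject('wheel_speed_sensor', Fault.OPEN_CIRCUIT)  # expect a DTC from the ECU
    board.clear('wheel_speed_sensor')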

The plant model used in earlier stages must here be supplemented with an I/O model which maps signals to the physical ports of the simulator.
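
In its simplest form, such an I/O model is a table mapping each plant-model signal to a simulator board, channel, and scale factor; all names and values below are purely illustrative.

    # Minimal I/O model sketch: plant-model signals mapped to physical simulator
    # ports. Board names, channels, and scale factors are purely illustrative.
    io_map = {
        'engine_speed':   {'board': 'AO1', 'channel': 3, 'scale': 0.001},  # rpm -> V
        'coolant_temp':   {'board': 'AO1', 'channel': 4, 'scale': 0.01},   # degC -> V
        'ignition_relay': {'board': 'DO2', 'channel': 0, 'scale': 1.0},    # on/off
    }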

Test Automation

In the process of verification, repeatability is key. Achieving repeatability, and thereby enabling so-called regression tests, requires automation.

Most testing tasks can be automated via test scripts; however, scripts are often hard for testers to understand, and it is difficult to get an overview of which parts are intended for reuse in different test cases. When automating tests, the framework for developing and executing test cases is therefore important. A good framework provides basic functionality for, e.g., reading from and writing to a real-time application, as well as recording data and subsequently presenting it as graphs, tables, or plain numbers in a report.
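
The sketch below indicates what such basic functionality can look like as a test-case base class. The app handle and its set_variable/get_variable methods are hypothetical stand-ins for e.g. an ASAM XIL-API test-bench connection.

    # Sketch of framework-provided base functionality for test cases.
    class TestCaseBase:
        def __init__(self, app):
            self.app = app         # hypothetical handle to a real-time application
            self.recording = {}    # signal name -> list of sampled values

        def write(self, name: str, value: float) -> None:
            self.app.set_variable(name, value)      # assumed API on the handle

        def read(self, name: str) -> float:
            return self.app.get_variable(name)      # assumed API on the handle

        def record(self, name: str) -> None:
            self.recording.setdefault(name, []).append(self.read(name))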

When maintaining and developing an automated test environment, it is advantageous to have well-defined roles and responsibilities with clear interfaces for information transfer. An automated test process comprises a number of subprocesses. Responsibility for the test process is held by the Test Manager, who sees the full picture and coordinates the subprocesses for the test developers, framework developers, and simulation environment developers.

With an automation environment which supports standardized interfaces, tests can be reused throughout the entire development process.

Test Management

Testing activities play a substantial role in ECU development projects. It is common to perform so-called requirement based testing, where the goal of the test cases is to verify that the target system fulfills its established requirements.

In an iterative development environment, requirements are continuously updated, which means that, for every iteration, it is necessary to evaluate whether a test case is still valid, must be updated, or must be replaced by a new test case. This process creates dependencies between different versions of requirements and test cases.

For the test activities, the requirement is held static within each iteration, while the test case implementation and its generated results can be updated. The resulting dependencies must be managed in order to maintain traceability. A test case is defined by a set of attributes, e.g. name, execution environment, implementation, and results.

One great challenge is how to handle all these dependencies and data. A database is required, but so is a meta-model of how items are related, so that there is traceability between versions of requirements and versions of test cases. The end goal is to be able to recreate the exact testing environment that was used for a specific version.
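
A minimal sketch of such a meta-model, built from the test-case attributes mentioned above using Python dataclasses; the exact fields and types are assumptions:

    # Minimal meta-model sketch: version-dependent links between requirements
    # and test cases, so that a past test setup can be recreated.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class RequirementVersion:
        req_id: str
        version: int
        text: str

    @dataclass
    class TestCaseVersion:
        name: str
        version: int
        execution_environment: str      # e.g. 'MIL', 'SIL' or 'HIL'
        implementation: str             # reference to the test script
        verifies: RequirementVersion    # the exact requirement version covered
        results: list = field(default_factory=list)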

Any test management tool should therefore be able to store all relevant information, but also make use of it by providing a connector to the testing environments. In the end, tests are scheduled and a report detailing the test results is returned. Through version-dependent connections, it is possible to track how different software versions fulfill different requirement versions.
