Highly abstracted model-based design is a popular and efficient way of developing applications. You can design your model using (graphical) tools that closely reflect your application architecture, with all the advantages that brings. However, your model will ultimately have to be translated into executable machine code – a multi-step process that can introduce software artefacts. So how do you make sure that any artefacts introduced don’t impact your application?
The higher the level of abstraction, the greater the number of steps involved in translating the model into executable code. There is virtually never a direct single-step translation from the model to target processor machine language. As a minimum, it usually requires two steps. The first step takes each block in the model and converts it into a block of ‘universal language’ C or C++ code. The second step takes the generated C or C++ code and compiles it to the required machine code. The reason the intermediate code is nearly always C or C++ is that compilers for these languages are available for virtually any target architecture, leaving users free to choose a compiler that suits their target.
Lost in translation
While this approach appears to offer the best of both worlds – high-level abstraction coupled with a very wide choice of target architectures – there are potential issues with it when it comes to testing whether the target code runs correctly. You will, of course, have a set of model and requirements-based tests to work with, but even if these test sets cover all the structural requirements, input conditions and state transitions of the model, that does not guarantee that all the generated machine code is verified. Compilers can and do introduce code specializations, especially when optimizers are employed, which means there is no longer a one-to-one match with the source code, and consequently not with the model. This one-to-one relationship gets lost in translation.
So can you trust untested code? In a mission-critical setting, or most settings for that matter, the answer is no.
There are two ways to resolve the issue. The first is to monitor the assembly code and develop new tests for specialized and previously untested cases. The downside of this approach is that it’s time-consuming, and the test set you develop will be specific to both the compiler and the compiler settings. A better and more automated approach is to increase confidence in the compiler by independently verifying that it correctly translates whatever code you throw at it. You can do that with SuperTest, a large and extensive collection of tests that verify the front-end, optimizers and code generator of C and C++ compilers. Using SuperTest will give you peace of mind that the translation from intermediate code to target code is as error-free as possible.
If you want to know more about verifying compilers and our SuperTest test suite for C and C++ compilers, don’t hesitate to contact us.