Here are some ABAP Unit guidelines I have found useful in my development projects. I have arranged them in a question-and-answer style. Everybody is invited to complete the list by adding further Q&As.
In my project, we are using only classical ABAP reports. So I unfortunately can't use ABAP Unit, can I?
There is good news: whatever kind of ABAP code object you are using for your development, you can always make it more stable and more extensible by adding unit tests. For reports, module pools and function groups, you add the unit tests in the form of hand-written local classes.
For a simple case, assume that you are in a report and you would like to test the most straightforward call of subroutine xyz. Then the following code skeleton will do - it may be worth defining it as a code template for easy insertion into your reports:
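A minimal sketch of such a skeleton (the subroutine name xyz comes from the text above; the global table gt_result and the assertion are placeholders you would adapt to your own report):

```abap
*----------------------------------------------------------------------*
* Local test class for a classical report (sketch, names are examples)
*----------------------------------------------------------------------*
CLASS lcl_test DEFINITION FOR TESTING
  "#AU Risk_Level Harmless
  "#AU Duration Short
  .
  PRIVATE SECTION.
    METHODS test_xyz FOR TESTING.
ENDCLASS.

CLASS lcl_test IMPLEMENTATION.
  METHOD test_xyz.
*   Setup: prepare the global data the subroutine works on
    CLEAR gt_result.
*   Call the unit under test
    PERFORM xyz.
*   Verify the expectations
    cl_aunit_assert=>assert_not_initial( act = gt_result ).
  ENDMETHOD.
ENDCLASS.
```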
Of course, there are advantages to using ABAP Objects: for example, you get automatically generated unit test class templates for an ABAP class. Also, the separation between productive code and test code is clearer: the test classes are generated in a dedicated include section reserved for ABAP Unit. But if for some reason you don't want to use classes, you still have ABAP Unit support.
Recommendation: If you need a report or module pool for the user interface, use it only as a skeleton for consuming events like "START-OF-SELECTION" (and for painting dynpros etc.), and call methods of specialized objects as soon as possible to perform the actual logic (selections, business rules).
Unfortunately, my customer has such poor master data quality in the development system that I can't use unit tests!
Again, there is good news: however poor the data quality of your customer's development system may be, you can always write unit tests! This is one of the big strengths of unit tests: they test a single code unit in isolation from the rest of the world. The test runs independently of any database entry, and independently of other function modules called along the way. If the test passes in client 500, it will pass in client 000, too!
You don't really want me to write unit tests for all the code objects I have to develop, do you?
No, not really. There are many parts of the code for which it would be a waste of time writing unit tests for them. Among these are:
- Automatically generated code, such as view maintenance function groups, reports of the statistics info system, base classes of BSP extensions, and the like.
- Most database selections. In most cases, database selections should not be performed in unit tests (see below on how to avoid this). There are exceptions, for example DAOs, each of which encapsulates access to a single database table. In these exceptional cases it can make sense to create test entries for testing the functionality (and remove them in the teardown).
- Code for linking dynpro events to ABAP code. Some glue code is necessary to redirect a dynpro event, like a certain chain request in PAI, to the piece of ABAP code that should react to it. It is usually not worth the effort to unit-test this glue code.
What this class is doing is really so trivial that it's not worth writing a test for it.
Maybe you're right. But usually, you are not. You only consider your code trivial because you have just finished writing it. Experience shows that after a year, the formerly trivial code no longer seems trivial to you. Your colleague might not find it trivial either. If you are implementing only an adapter class which maps one data format to another and then calls an API, you are probably right: a unit test for such a class would probably be overkill. But with increasing source code size, even seemingly trivial code may contain bugs which only reveal themselves when it is called. So why not implement such a call and check the expected result automatically, so that it can be proven at any time that the class works fine?
Is test-driven development (TDD) necessary for working with unit tests?
TDD as a programming practice essentially means implementing the test first, and then adding the productive code that makes the test pass. It is a ping-pong process: you will constantly be swinging between new test code and new productive code. You don't need to do TDD, but if you get accustomed to it, it will help you a lot in avoiding bugs and thus make you more efficient. If you don't work test-driven, you still benefit from unit tests: you can add tests post hoc to already existing code objects.
What about external tests of a unit, using separate test objects?
You can mark a class as a unit test class on its properties tab. But this is intended for extracting similar test code from several local unit test classes, using inheritance. It is not intended for testing a unit externally. Generally, testing a unit externally is not recommended, since the unit tests are then not available via the menu path "Module test" in the workbench. Whoever changes your class might not be aware that the code should pass the tests of the external program. It is therefore better to have the tests in the same object as the productive code itself.
If I don't test all my code, there will be gaps in my test coverage!
Although a very useful tool, unit tests are not the answer to all prayers. The gaps mentioned above, for which unit tests are not recommended, should be covered by other test techniques, namely integration tests with tools like eCATT, QTP or others.
How should I design my unit tests?
The main point is: design them as simply as possible. A unit test also serves as a kind of documentation of the unit's functionality. Also, if a unit test fails after a change has been performed, it should be easy to see from the code which functionality failed. Avoid all kinds of redundant code in your test methods. Delegate repeated code to methods or even macros to make the essence of the functionality under test more readable. Use the freedom in naming variables, methods, classes and macros to make the code as expressive as possible about what the test is doing.
Each feature of your unit should be testable with the following three steps:
- Set up the test data - fill internal tables or attributes of the interface parameters and/or of your stubs.
- Call the method under test - usually there will be precisely one call of a public method.
- Verify the expectations on the method output.
These three steps are what a test method should contain. Around each test method, there is a setup step (common to all methods of the class) in which the object under test is usually constructed and stubs are provided if necessary. Also, each test method call is followed by a teardown call (which is needed only in exceptional cases).
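Sketched in ABAP, a test class following this pattern might look as below (all names - zcl_testee, do_something, ty_input_tab - are placeholders for your own class and its interface):

```abap
CLASS lcl_test DEFINITION FOR TESTING
  "#AU Risk_Level Harmless
  "#AU Duration Short
  .
  PRIVATE SECTION.
    DATA go_cut TYPE REF TO zcl_testee.   " object under test
    METHODS setup.
    METHODS test_one_feature FOR TESTING.
ENDCLASS.

CLASS lcl_test IMPLEMENTATION.
  METHOD setup.
*   Common to all test methods: construct the object under test
    CREATE OBJECT go_cut.
  ENDMETHOD.

  METHOD test_one_feature.
    DATA lt_input  TYPE ty_input_tab.
    DATA lv_result TYPE string.
*   1. Set up the test data
    APPEND 'some input' TO lt_input.
*   2. Call the method under test - precisely one public call
    lv_result = go_cut->do_something( lt_input ).
*   3. Verify the expectations
    cl_aunit_assert=>assert_equals( act = lv_result
                                    exp = 'expected output' ).
  ENDMETHOD.
ENDCLASS.
```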
How can I find out whether my method is actually called by a unit test rather than by a real user? I would like to do some things differently in that case.
Don't do this! Don't mix productive code and test code. If you want to bypass parts of the productive code in your tests, use stubs and dependency injection instead. Using a "test mode" flag in the productive code will spoil the concept of unit tests and make your code worse.
How should I organize my unit tests?
There is no universal scheme for organizing unit tests. Sometimes it is good to have a unit test class for each method, and a test method for each equivalence class of input data. But this cannot be made a general rule. In general, test methods are most useful if they are orthogonal: ideally, each method tests a single functionality independently of the others. Don't overload test methods with too many assertions.
How do I test a routine (or method, or function module) that mixes database selections with its own business logic and with calls to other function modules?
By making the code testable first, for example using stubs: delegate the database selections and function module calls to methods of local helper classes (I use one LCL_DB for database calls and one LCL_API for calls into other code units), i.e. extract these code parts into methods of their own. Use expressive names for these methods and use the adapter pattern to design a clean interface for them. The bodies of your LCL_API and LCL_DB methods will thereafter contain only database operations (select, insert, update, enqueue, ...), external function module calls, and perhaps a few lines of mapping code which maps the interface you designed to the legacy interface of the modules you are calling.
Make instances of these helper classes, such as go_api and go_db, available globally in your object. In the setup step of your unit test, inject instances of subclasses lcl_db_test and lcl_api_test into the object under test. These subclasses redefine the helper methods, and their behaviour is controlled from the test methods.
Redefined helper classes like lcl_api_test and lcl_db_test are what testers call stubs.
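A sketch of this stub technique (the names lcl_db, read_orders, ty_order_tab and go_db are illustrative placeholders, not from a real API):

```abap
* Productive helper class: wraps all database access
CLASS lcl_db DEFINITION.
  PUBLIC SECTION.
    METHODS read_orders
      IMPORTING iv_customer     TYPE kunnr
      RETURNING VALUE(rt_orders) TYPE ty_order_tab.
ENDCLASS.

* Stub: redefines the database access, controlled by the test
CLASS lcl_db_test DEFINITION INHERITING FROM lcl_db FOR TESTING.
  PUBLIC SECTION.
    DATA mt_orders TYPE ty_order_tab.  " test data, set by the test method
    METHODS read_orders REDEFINITION.
ENDCLASS.

CLASS lcl_db_test IMPLEMENTATION.
  METHOD read_orders.
    rt_orders = mt_orders.             " no database access at all
  ENDMETHOD.
ENDCLASS.

* In the setup method of the test class: dependency injection
*   CREATE OBJECT go_db_stub.
*   go_cut->go_db = go_db_stub.   " requires visibility, e.g. via FRIENDS
```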
This sounds complicated.
You are right, it is not straightforward... In order to keep the test code simple and understandable, you should therefore try to avoid using stubs whenever possible. You can avoid stubs by providing a better separation between business logic, API calls and database calls. For example, instead of selecting from a database table in the same method in which you perform some checks on the data, you can do the selection first and then check the data in a separate method that receives the database entries as an importing parameter. Making the code testable in this way will usually - as a side effect - improve its readability.
Should I test protected or private methods?
Usually not. You will normally focus on the public interface of a class. A private attribute or method may disappear in a refactoring session, or be replaced by other components. If it has no influence on public method calls, you may delete it safely! If it does influence public method calls, then test that public method - keeping the freedom for future refactoring. If you test private components, you will have to change the unit tests for these components whenever you decide to change them, resulting in poorer changeability of the code.
OK - but I am in the special situation <blabla> and therefore I really need to test private and protected methods. How can I achieve this?
Since local classes are as separate from their containing workbench class as any other class, you need to declare the local test class a friend of the containing class. If zcl_testee is the containing class and lcl_test the unit test class, you need the following code in the local test classes section:
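The standard pattern is a deferred definition of the test class followed by a LOCAL FRIENDS declaration:

```abap
* In the local test classes section of zcl_testee:
CLASS lcl_test DEFINITION DEFERRED.
CLASS zcl_testee DEFINITION LOCAL FRIENDS lcl_test.

* lcl_test may now access the private and protected
* components of zcl_testee in its test methods.
```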
My unit tests contain a syntax error. But it doesn't matter for the productive class, since unit tests are for development systems only.
Not true! Unit tests cannot be executed in productive systems, but a syntax error in the unit test part of a class will break the complete class, resulting in a short dump SYNTAX_ERROR on any access to its attributes or methods.
My test object is a singleton. To avoid side effects, I want to create a new instance for each test method anyway.
If your singleton has global data, it may be changed by the tests, generating ugly dependencies between the test calls. You can avoid these dependencies by creating a subclass of your object under test with the property "create public", as follows.
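A sketch, assuming the singleton class is called zcl_singleton (a placeholder) and that the test include has been declared a friend so that inheriting from a "create private" class is possible:

```abap
* Local subclass that reopens instantiation for the tests
CLASS lcl_testee DEFINITION INHERITING FROM zcl_singleton
  CREATE PUBLIC.
ENDCLASS.

* In the setup method: a fresh instance for every test method
*   CREATE OBJECT go_cut TYPE lcl_testee.
```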
If you only need this change of the class behaviour, you don't even need a "class ... implementation" part of the subclass.
Keep in mind, however, that the trouble is caused not by the unit tests but by the global data. The unit tests only discover the problems; they don't cause them. Thus the best solution would be to eliminate the global data from the class.
What can I do to make my test code even more readable?
Use the 'functional' notation for method calls whenever possible, in particular for calls like assert_equals( ), assert_initial( ), assert_subrc( ) etc.
If you don't need the inheritance hierarchy for your test classes (and why should you?), you may let the test classes inherit from cl_aunit_assert. You can then write, for example:
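For example (the method name and the computation are placeholders):

```abap
CLASS lcl_test DEFINITION FOR TESTING
  INHERITING FROM cl_aunit_assert
  "#AU Risk_Level Harmless
  "#AU Duration Short
  .
  PRIVATE SECTION.
    METHODS test_something FOR TESTING.
ENDCLASS.

CLASS lcl_test IMPLEMENTATION.
  METHOD test_something.
    DATA lv_result TYPE i.
    lv_result = 2 + 2.
*   No class prefix cl_aunit_assert=> needed any more:
    assert_equals( act = lv_result exp = 4 ).
  ENDMETHOD.
ENDCLASS.
```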
Use macros to fill your internal tables and to call the method under test, if this call is complicated (e.g. because it has many parameters). You save yourself repetitive coding for the method call, and you also save local variables like work areas for filling your internal tables. We are using a combination of a macro include and a subroutine pool for filling internal tables, which eliminates the need for auxiliary local variables for the work areas.
If you need an example - here is a test method for a parser, which converts a packing rule specified as free text into an internal table containing the relevant information in a defined format. Setting up the free text, calling the parser method and checking certain components of the result table were repetitive actions in about 20 different test methods, where only the content of the free text and the expected result in the internal table changed.
The macro _assert_n_fields_in_row checks that the specified components of the specified row of the specified internal table have the specified values.
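A hedged sketch of what such a test method might look like - the macro names _call_parser and _assert_n_fields_in_row come from the text, but their definitions, the parser interface and the sample packing rule are reconstructed placeholders:

```abap
* Macros hiding the repetitive parts (sketched; the real
* definitions depend on the parser's interface)
DEFINE _given_text.
  APPEND &1 TO gt_text.
END-OF-DEFINITION.

DEFINE _call_parser.
  go_parser->parse( EXPORTING it_text   = gt_text
                    IMPORTING et_result = gt_result ).
END-OF-DEFINITION.

* A test method reduced to Setup - Test Call - Verification
METHOD test_pieces_per_carton.
  _given_text 'Pack 10 pieces per carton'.
  _call_parser.
* row 1: component QUANTITY = 10, component UNIT = 'KAR'
  _assert_n_fields_in_row gt_result 1 quantity 10 unit 'KAR'.
ENDMETHOD.
```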
Extracting the repetitive code in this way reduces the method to the three steps "Setup - Test Call - Verification" mentioned above and thus clarifies the tested functionality.
Read good books like Martin Fowler's Refactoring or Robert C. Martin's Clean Code to get more ideas about how your code can be made more readable.
Isn't it bad for debugging if you use macros?
It depends. If you use macros only for "removing noise", i.e. for extracting code sequences that remain the same all the time and are frequently used, then it is not a problem to skip their execution with F6 in the debugger, since the macro hides uninteresting parts of the code. If a macro hides a method call, like _call_parser in the above example, you can still step into the method with F5, even though the call is hidden in the macro. Again, in this case, you lose only the uninteresting parts of the code.
Does it make sense to run the unit tests in a job periodically?
Usually, unit tests are associated with the development of new code. In contrast to integration tests, there should be no surprises if they run in a nightly job, since the result only changes when the code changes - and therefore the last person to change the code should already know the result. Provided he ran the tests of his unit! If you have developers in your team who do not work with unit tests, or if the last modifier of the code simply forgot to run the unit tests, it is good to have a job reporting the failures (for example by sending an e-mail to the TADIR owner). You can run the unit tests using the Code Inspector.
Don't forget the pseudo comment annotations for the risk level and the duration in your unit test class definition, because otherwise the Code Inspector may not execute the test:
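The classic form uses "#AU pseudo comments in the class definition:

```abap
CLASS lcl_test DEFINITION FOR TESTING
  "#AU Risk_Level Harmless
  "#AU Duration Short
  .
  PRIVATE SECTION.
    METHODS test_something FOR TESTING.
ENDCLASS.
```

On newer releases, the additions RISK LEVEL HARMLESS DURATION SHORT after FOR TESTING can be used instead of the pseudo comments.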
Is it possible to check the unit tests when a transport order is about to be released?
Yes, and I think it is useful. The easiest way to achieve this is to switch on the Code Inspector checks on transport release and to choose "Unit Tests" in the check variant.
I had chosen a more complicated way in our implementation, using a BAdI of the transport organizer and a function module to invoke the unit tests. Although the functionality is not guaranteed by SAP (the short text contains the ominous remark "for SAP only"), it seems to work quite well. We have been using it for two years now with no problems.
The method cl_aunit_prog_info=>contain_programs_testcode( ) may be used to find out whether a given program (specified by the name of the main program's report source) contains unit tests at all. If only a part (LIMU) of a program, class or function module has been changed, you first have to determine its super-object; for this, you can use the function module TR_CHECK_TYPE.