Application Development Blog Posts
Learn and share on deeper, cross technology development topics such as integration and connectivity, automation, cloud extensibility, developing at scale, and security.
former_member249109
In the previous post I talked about what makes great software, which, according to the Head First OOA&D book I'm following, comes down to 3 aspects:

  1. Great software satisfies the customer.

  2. Great software is flexible.

  3. Great software is maintainable and reusable.


Now I would like to enhance this with something that, in my opinion, also plays a major role here:

Great software is testable.


No matter how pretty your code looks, if you cannot properly test your program, it will be quite hard to maintain and enhance. This is a big problem for the users, even if they don't know or care, but also for you, because software and applications are constantly changing, evolving, and in some cases mutating into something very different from the original scope. Let's get real here. And guess who will have to do the dirty work?




 

So often we find ourselves in one of two situations: either we are asked to change our own code (more desirable, but we probably won't remember anything of what we wrote at the time), or we need to change someone else's code (less desirable, highly probable). How can we be sure that we won't break anything after implementing changes if the code cannot be tested? The traditional approach that I've seen over the course of 10 years is this:




  • Run some superficial test in the development environment; sometimes check that there are no dumps.

  • Transport to quality environment, test again.

  • Deliver to the functional consultant, he/she runs the same test.

  • Release to the user. First test: runtime error.


"Of course, they were testing with data set X and we ran our tests with data set Y" will be the first excuse. And even if that's true, the real issue is that the software is weak, not robust: it can't be tested without real data. In other words, the program has hidden dependencies, and that makes it impossible to test in isolation.
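To illustrate the dependency problem, here is a simplified sketch of my own (not taken from the original application, and the names ZIF_GUITAR_DAO and ZCL_REPORT are hypothetical): a method that performs its own database SELECT cannot be tested without real data, while the same logic behind an injected interface can be verified against a mock.

```abap
"Hard to test: the database dependency is hidden inside the method
"method get_guitar_count.
"  select count(*) from zguitars into @rv_count.
"endmethod.

"Testable: the data access sits behind an interface that a unit test can mock
interface zif_guitar_dao.
  methods count_guitars returning value(rv_count) type i.
endinterface.

class zcl_report definition.
  public section.
    "The dependency is injected through the constructor
    methods constructor importing io_dao type ref to zif_guitar_dao.
    methods get_guitar_count returning value(rv_count) type i.
  private section.
    data mo_dao type ref to zif_guitar_dao.
endclass.

class zcl_report implementation.
  method constructor.
    mo_dao = io_dao.
  endmethod.
  method get_guitar_count.
    rv_count = mo_dao->count_guitars( ).
  endmethod.
endclass.
```

In a unit test, a local class implementing zif_guitar_dao can return canned numbers, so the test never touches the database.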


I've been reading a lot about Test-Driven Development, and nowadays I'm starting to feel like Neo in The Matrix: I know kung fu. Seriously, in my experience, TDD is the way to go when it comes to delivering good, quality software.


How do you create good tests, then? Well, this feels like something of a journey; there are many things to learn. Once again, I couldn't recommend the book ABAP to the Future enough; it has been my guide and also inspired me to start writing these posts.


First of all, ABAP Unit does not have to be a purely technical tool, available only to developers. You don't need a developer key to run unit tests on an ABAP program, so anyone with the proper authorizations to display programs/classes should be able to run the tests. So what if we also involve our business experts and functional consultants?


Well, with Behaviour-Driven Development we can accomplish this. It was already being discussed in the community back in 2013; I'm just discovering it now. BDD is all about simplification inside your test methods, so the methods flagged FOR TESTING in ABAP Unit get descriptions that make sense to the developer, the business analyst, and the user. You accomplish this by naming each test method so that it completes the phrase "It should...". Then, inside each of these methods, you call 3 helper methods following the patterns "Given... (initial condition)", "When... (method to test)", and "Then... (check result)".


This is how I refactored my test class for the ZCL_INVENTORY class:



class lcl_test_class definition deferred.

"Allow access to private components within the class
class zcl_inventory definition local friends lcl_test_class.

class lcl_test_class definition final for testing
  duration short
  risk level harmless.

  private section.

    types: ty_guitars type standard table of zguitars with empty key.

    data: mo_class_under_test type ref to zcl_inventory,
          guitar_instance     type ref to zcl_guitar,
          guitars             type ty_guitars,
          guitar_to_add       type ref to zcl_guitar,
          guitar_to_search    type ref to zcl_guitar,
          mo_exception_raised type abap_bool,
          found_guitars       type zcl_inventory=>guitars_tab.

    methods:
      setup,

      "User Acceptance tests:
      "IT SHOULD....................
      add_guitar_to_inventory     for testing,
      add_duplicate_and_get_error for testing,
      search_within_the_inventory for testing,

      "GIVEN ..................................................
      given_guitar_attribs_entered,
      given_initial_inventory,
      "WHEN ...................................................
      when_guitar_is_added,
      when_same_guitar_twice,
      when_guitar_is_searched,
      "THEN ...................................................
      then_inventory_has_guitar,
      then_exception_is_raised,
      then_guitar_is_found,

      "Other helper methods
      load_mockups returning value(re_guitars) type ty_guitars.

endclass.
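The setup( ) method, whose implementation I haven't shown yet, runs automatically before each test method. A minimal sketch, assuming ZCL_INVENTORY has a parameterless constructor:

```abap
method setup.

  "Create a fresh fixture for every test so the tests stay independent
  mo_class_under_test = new zcl_inventory( ).
  clear mo_exception_raised.

endmethod.
```

Because ABAP Unit calls setup( ) before every test method, no test depends on state left behind by another one.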

So the idea is to include "IT SHOULD" methods as if they came straight out of the functional specification document. In my example, the ZCL_INVENTORY class should be able to:




  • Add guitars to the inventory.

  • Protect the inventory against duplicate objects.

  • Search for a guitar within the inventory.


Let's see the implementation of the add_guitar_to_inventory() method:
  method add_guitar_to_inventory.

given_guitar_attribs_entered( ).

when_guitar_is_added( ).

then_inventory_has_guitar( ).

endmethod.

This reads like plain English: in order to add a guitar to the inventory, we start with some guitar attributes, then we add the guitar to the inventory and check that it was successfully included. So you run a test for this method, and if there's no green light, you know that something is wrong with this part of the process.

The given_guitar_attribs_entered( ) method just initializes one guitar object:
  method given_guitar_attribs_entered.

    data: guitar_spec_attributes type zcl_guitar_spec=>ty_guitar_attributes.

    guitar_spec_attributes-builder  = zcl_enum_builder=>fender.
    guitar_spec_attributes-model    = 'Stratocaster'.
    guitar_spec_attributes-type     = zcl_enum_guit_type=>electric.
    guitar_spec_attributes-backwood = zcl_enum_wood=>maple.
    guitar_spec_attributes-topwood  = zcl_enum_wood=>maple.

    data(guitar_spec) = new zcl_guitar_spec( guitar_spec_attributes ).

    data(guitar_record) = value zcl_guitar=>ty_guitar_attributes( serialnumber = 'FE34000'
                                                                  price        = '1745.43'
                                                                  specs        = guitar_spec ).
    guitar_to_add = new zcl_guitar( guitar_record ).

  endmethod.

 

The call to the method under test, the one from the class we are testing, forms the "WHEN..." part of the BDD description:
  method when_guitar_is_added.

    try.
        mo_class_under_test->add_guitar( guitar_to_add ).
      catch zcx_guitar.
        "Record the failure instead of silently swallowing the exception
        mo_exception_raised = abap_true.
    endtry.

  endmethod.

Finally, we finish the process with a check on the inventory. The ABAP Unit assertions go into this part:
  method then_inventory_has_guitar.

    "line_exists( ) avoids a CX_SY_ITAB_LINE_NOT_FOUND dump when the guitar is missing
    cl_abap_unit_assert=>assert_true(
      act = xsdbool( line_exists( mo_class_under_test->guitars[ serial_number = 'FE34000' ] ) )
      msg = 'Guitar is not in inventory' ).

  endmethod.
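The duplicate-protection scenario follows the same Given/When/Then pattern. What follows is only my sketch of how its implementation could look, based on the method names declared in the class definition; it assumes that add_guitar( ) raises zcx_guitar when the same serial number is added twice:

```abap
method add_duplicate_and_get_error.

  given_guitar_attribs_entered( ).

  when_same_guitar_twice( ).

  then_exception_is_raised( ).

endmethod.

method when_same_guitar_twice.

  try.
      mo_class_under_test->add_guitar( guitar_to_add ).
      "Assumption: a second add with the same serial number raises zcx_guitar
      mo_class_under_test->add_guitar( guitar_to_add ).
    catch zcx_guitar.
      mo_exception_raised = abap_true.
  endtry.

endmethod.

method then_exception_is_raised.

  cl_abap_unit_assert=>assert_true( act = mo_exception_raised
                                    msg = 'Duplicate guitar was accepted' ).

endmethod.
```

The search scenario would be built analogously: given_initial_inventory( ) can fill the inventory from load_mockups( ), when_guitar_is_searched( ) calls the search method, and then_guitar_is_found( ) asserts on found_guitars.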

 

Isn't it nice? We ended up with a well-written unit test that is meaningful to the current developer and to those who will come after, to the functional consultants, business users, and even managers.


Until next time!

 