How does Divante care about quality? Part 2 – Testing

Damian Kowalski

Testers Team Leader, Automation Test Engineer at Divante eCommerce Software House


Business requirements for online sales systems change often. New consumer behavior and new opportunities for vendors demand rapid changes. As software develops, its code and structure grow more complex, which can even lead to situations where modifications, such as new promotions or price-calculation algorithms, slip out of the developers' control. Therefore, we do everything we can to prevent the negative effects of software errors.

A sales platform that doesn't work (is unavailable) brings tangible losses. If a store's website loads too slowly or displays error messages, customers can't add products to the shopping cart or place orders. Even short interruptions can cause customers to give up and move to the competition.

If a website works properly but prices are calculated incorrectly, customers can exploit the flaw and place their orders anyway. Such situations happen even to the largest stores (e.g. a well-known failure in 2011). Having to fulfill the mispriced orders at extra cost, loss of confidence, bad customer reviews and, in extreme cases, court proceedings are all possible effects of software errors.

People with many years of experience in software development say that there is no software without bugs. When creating new features, it's impossible to anticipate every scenario. However, you can and should do everything to prevent errors or reduce them to an absolute minimum.

A good design and the use of proven patterns in the application code are well-established, but not sufficient, ways to avoid problems. Regardless of the project methodology used, we shouldn't forget about tests.

Software testing should not be a one-time process. Practice shows that most systems are thoroughly tested only upon completion and acceptance by the client. Such an approach, which may be fine in, say, the construction industry, is not quite right in software engineering. Software development doesn't end with the first version. New features appear as requirements change, and new bugs appear along with them. The quality of software that isn't tested during development can degrade.

In addition to new features, changing operating conditions also affect how a system works. Growth in the number of users, orders, and products in the database can slow performance and reduce throughput. Applications may also behave differently in the test environment (where the client accepts them) than in the production environment (on the target servers open to users).

We need to keep the test version running, because it is a perfect testing ground for the team: a so-called staging environment that mirrors production. This version can be subjected to all kinds of tests without any negative influence on users.

We need to test the system components that change and those crucial to its operation. Every test costs money and the team's time, so attention to detail and control are necessary throughout the whole process.

UI Tests

UI tests are carried out with prospective users, based on interface prototypes (e.g. functional mockups). If they're performed before development starts, implementing improvements and changes to an application shouldn't involve high costs.

In the test, participants are given specific tasks, for example: "Purchase a personal computer." Using the prototype, the user decides how to find a PC in the shop, and then which buttons and forms to use to make the purchase. The users are observed by researchers, usually the prototype designers or psychologists, whose goal is to spot moments of frustration, confusion or misunderstanding. Sometimes tests are recorded on video for later analysis.

Carrying out the tests with a group of 3–5 people usually uncovers more than 80% of the errors.
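Figures like this are often estimated with Nielsen and Landauer's model, which assumes each tester independently uncovers a fixed share L of all usability problems (around 31% in their classic studies). A quick sketch of that model:

```python
# Nielsen/Landauer model: share of usability problems found by n testers,
# assuming each tester independently uncovers a fraction L of all problems.
def problems_found(n: int, L: float = 0.31) -> float:
    return 1 - (1 - L) ** n

for n in (1, 3, 5):
    print(n, round(problems_found(n), 2))
```

With L = 0.31, three testers find roughly two thirds of the problems and five testers find over 80%, which matches the rule of thumb above.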

Ergonomic errors in e-commerce directly lower the conversion rate (the percentage of visitors who make a purchase). The value of usability is now widely appreciated, and user interface tests are a basic tool for improving user experience.

Functional testing, basic scenarios and pathways tests

Today the Internet can be browsed on almost any device equipped with a screen – from classic PCs, through mobile phones, to recently popular “smart” watches.

Each of these devices differs in software (Windows, Mac OS, Android, Windows Phone or dedicated firmware), screen resolution, and the type and version of its web browser. To know how our application behaves on these devices, we would ideally have at least one unit from each group; however, purchasing and maintaining such devices is a considerable expense. It's therefore cheaper and more convenient to use emulators: a virtualization package (VMware Player, VirtualBox) for operating systems other than our own, dedicated tools for the various browser configurations (version, type, operating system), and, for mobile devices, the SDK supplied by the platform vendor (e.g. the Android SDK). Still, don't forget the necessity of testing on real hardware as well: if not in all combinations, then at least on units representative of each device group.
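One way to keep such a device matrix manageable is to drive the same checks from a table of configurations. A minimal sketch, where the device profiles, breakpoints and `expected_layout` function are all hypothetical stand-ins for a real emulator-driven suite:

```python
# Hypothetical device matrix: each entry stands in for one emulator or
# real device from a representative group.
DEVICES = [
    {"name": "desktop", "width": 1920, "browser": "Chrome"},
    {"name": "tablet",  "width": 768,  "browser": "Safari"},
    {"name": "phone",   "width": 360,  "browser": "Android WebView"},
]

def expected_layout(width: int) -> str:
    """Map viewport width to the layout the shop should serve."""
    if width >= 1024:
        return "desktop"
    if width >= 600:
        return "tablet"
    return "mobile"

# The same check runs once per configuration instead of being rewritten
# for every device by hand.
for device in DEVICES:
    layout = expected_layout(device["width"])
    print(f'{device["name"]} ({device["browser"]}): {layout} layout')
```

In a real suite the loop body would launch the emulator or browser configuration and run the scenario there; the point is that adding a device means adding one row, not one test.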

After implementation ends, a list of test scenarios should be created based on the application design.

Scenarios cover all the functional paths of an application that a user can take during normal use of the system. Functional tests are the basic tool for checking whether software works properly; performing them essentially means clicking through all the scenario steps.

It is worth creating appropriately detailed test scenarios. Not only will they make it easier for the client to accept the application, they will also define which system behavior is correct and which isn't. Scenarios developed for acceptance tests (tests performed on delivery of a system) remain useful throughout the entire maintenance of the application.
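Such a scenario can be written down as an explicit step list and replayed mechanically. A minimal sketch, where the `Shop` class is a hypothetical in-memory stand-in for the application under test:

```python
# A functional test scenario as a list of (description, action) steps,
# run against a hypothetical in-memory shop.
class Shop:
    def __init__(self):
        self.cart = []
        self.order_placed = False

    def add_to_cart(self, product):
        self.cart.append(product)

    def place_order(self):
        assert self.cart, "cannot order an empty cart"
        self.order_placed = True

def run_scenario(shop, steps):
    # Each step either passes silently or fails loudly at a named step,
    # which makes the report readable for the client during acceptance.
    for description, action in steps:
        action(shop)
        print("OK:", description)

shop = Shop()
run_scenario(shop, [
    ("add a PC to the cart", lambda s: s.add_to_cart("personal computer")),
    ("place the order",      lambda s: s.place_order()),
])
assert shop.order_placed
```

Because the steps carry human-readable descriptions, the same list doubles as acceptance documentation of which behavior is correct.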

Regression tests and smoke tests

After introducing new features or optimizing existing ones, so-called regression tests should be carried out. Their results show whether all the basic functions still work properly after the changes; we need to know that the modifications didn't disrupt other parts of the application. Regression tests can be based on the same scenarios used for functional tests.

Testing should be carried out before deploying changes to the production server and again immediately afterwards, for additional verification. By systematizing what is tested, we make sure nothing important is missed.
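A regression suite can be as simple as a fixed table of input/expected-output pairs for the critical calculations, re-run before and after every change. A sketch around a hypothetical price function (the 23% VAT rate and `gross_price` name are illustrative assumptions), using Decimal to avoid float rounding surprises in money math:

```python
from decimal import Decimal

# Hypothetical pricing code under test: 23% VAT added to the net price,
# rounded to whole cents.
def gross_price(net: Decimal, vat_rate: Decimal = Decimal("0.23")) -> Decimal:
    return (net * (1 + vat_rate)).quantize(Decimal("0.01"))

# Regression cases: stable input/expected pairs that must keep passing
# after every change to the pricing code.
REGRESSION_CASES = [
    (Decimal("100.00"), Decimal("123.00")),
    (Decimal("19.99"),  Decimal("24.59")),
    (Decimal("0.01"),   Decimal("0.01")),
]

for net, expected in REGRESSION_CASES:
    assert gross_price(net) == expected, f"regression for net={net}"
print("all regression cases passed")
```

Run the same table against the version currently in production and against the candidate build; any difference is a regression caught before customers see a mispriced order.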

Integration tests

Integration with external systems is the part of an implementation most vulnerable to errors. It also takes the longest and is often the most difficult.

Along with the technological challenges (which usually aren't very serious) come communication problems and the need for cooperation between the companies responsible for the integrated components. In practice, the connections between systems are prone to breakdowns, e.g. network outages. All communication paths between systems, such as with an ERP (Enterprise Resource Planning) system, should be functionally tested.

The simplest version of integration testing checks whether an order, once placed, is transferred to the accounting program, whether inventory levels are transferred properly to the warehouse system, and so on.

As part of application monitoring, communication with external systems should be checked regularly; it is worth automating these integration tests.
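An automated integration check can place a test order and verify it arrives on the other side. A sketch with a hypothetical ERP client stubbed out in memory; in a real setup `FakeErp` would be replaced by a client calling the ERP's actual interface:

```python
# Integration check sketch: place an order in the shop and verify it
# shows up in the ERP. The ERP client here is an in-memory stub.
class FakeErp:
    def __init__(self):
        self.received = []

    def submit(self, order):
        self.received.append(order)

def place_order(erp, order):
    # In production this runs after checkout; the check calls it directly
    # with a clearly marked test order.
    erp.submit(order)

def check_order_transfer(erp):
    order = {"id": "TEST-001", "items": ["personal computer"]}
    place_order(erp, order)
    transferred = any(o["id"] == "TEST-001" for o in erp.received)
    assert transferred, "order did not reach the ERP"
    return transferred

erp = FakeErp()
print("order transferred:", check_order_transfer(erp))
```

Scheduled regularly against the staging environment, a check like this surfaces a broken ERP connection before a real customer's order silently fails to transfer.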

The most important thing is to detect an error before a user does. Integrations are usually described in detail in the RP, established at the system design stage; the RP should be the basis for writing any integration.

Does your website need some testing? We can definitely help. Write to us.

Published July 22, 2015