The security of systems that allow customers to shop online and that process personal data is of paramount importance. Break-ins can lead to fraud or data theft, the consequences of which are hard to predict.
Performing in-depth application security testing requires a lot of knowledge and experience. If the system is very complicated and includes many dedicated functions (written specifically for our business), the tests can be lengthy and expensive.
Regardless of whether we develop the software ourselves or outsource its implementation to an external company, it is worth finding a specialist (or a company) that professionally audits security and asking them for a consultation.
Our applications’ vulnerability to the most popular attacks can be tested in a simple way, using free tools. These work in the black-box model: they don’t analyze how an application is designed, but automatically or semi-automatically “attack” it using specially crafted, potentially dangerous data and verify the system’s response.
An example of such a tool is Skipfish. It scans a site for vulnerabilities to external attacks. Skipfish operates heuristically: it automatically generates all kinds of dictionaries, completes forms on websites and uses various attack methods to probe for potential vulnerabilities.
In addition to executing automated tests, it is worth using source code analysis (white-box testing). Analyzing the source code allows us to assess the use of good practices (data encoding, etc.) and thus rule out a large class of attacks. It is essential to audit systems that process transactions or perform financial operations in this way. Unfortunately, source code reviews are practically impossible to automate, so performing them is quite expensive.
OWASP TOP 10
With each project it is also worth going through the checklist prepared by the OWASP (Open Web Application Security Project).
The list contains the 10 most common security errors in online applications. Vulnerabilities are grouped into major categories and sorted from the most frequently occurring. Although the list was last updated in 2013, it is still valid. Compared with the previous edition (2010), the first four positions are identical; only their order changed.
The most common class of vulnerability in applications is Injection: “injection errors”. It covers any attempt to change the behavior of a system or language interpreter. The attack relies on appropriately prepared data entering the system as part of a query or command. Poor handling of external data (lack of proper filtering) can lead to the execution of third-party operations – running code sent by the attacker. Such a program/command may give access to confidential information or allow unauthorized actions on the system.
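The classic example is SQL injection. A minimal sketch of the difference between unsafe string concatenation and a parameterized query, using an in-memory SQLite database (the table and the data are illustrative):

```python
import sqlite3

# Illustrative table with one user; names and values are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (login TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(login):
    # VULNERABLE: user input is pasted straight into the SQL text, so
    # login = "' OR '1'='1" changes the meaning of the whole query.
    return conn.execute(
        "SELECT login FROM users WHERE login = '%s'" % login
    ).fetchall()

def find_user_safe(login):
    # SAFE: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT login FROM users WHERE login = ?", (login,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every user despite the bogus login
print(find_user_safe(payload))    # returns nothing
```

The unsafe variant matches all rows because the injected `OR '1'='1'` is always true; the parameterized variant simply finds no user with that literal login.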
The second most commonly occurring vulnerability is cross-site scripting (XSS). A website with such a flaw allows scripts or HTML code to be executed in the victim’s browser (quite the opposite of injection vulnerabilities, where the code is executed on a server). Such a script can intercept user sessions and send their data to the attacker, replace the content of a site, or redirect a user to a page containing malicious scripts or software.
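The standard defense is to escape user-supplied content before putting it into HTML. A minimal sketch with Python’s standard library (the `render_comment` helper is hypothetical, standing in for a template engine):

```python
import html

def render_comment(comment: str) -> str:
    # html.escape turns characters that are special in HTML into entities,
    # so the browser displays the payload instead of executing it.
    return "<p>" + html.escape(comment) + "</p>"

payload = "<script>steal(document.cookie)</script>"
print(render_comment(payload))
# <p>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>
```

Real template engines (Twig, Jinja2, etc.) do this escaping automatically; XSS bugs typically appear where that auto-escaping is bypassed.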
The third most common mistake is inadequate or incorrect handling of authentication and sessions. Through a programmer’s oversight, a user’s session can easily be captured. The most basic example is sending a third party a link to a page available only after logging in; in the absence of adequate safeguards, the recipient will be logged in to the session of the user who shared the link.
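One common safeguard is regenerating the session identifier on login, which defeats session fixation. A minimal sketch; `sessions` and `log_in` are hypothetical helpers, not part of any framework:

```python
import secrets

sessions = {}  # session id -> user name; stand-in for real session storage

def log_in(old_session_id, user):
    # Issue a brand-new identifier on login: an id leaked (or planted by an
    # attacker) before authentication becomes worthless afterwards.
    sessions.pop(old_session_id, None)
    new_id = secrets.token_urlsafe(32)
    sessions[new_id] = user
    return new_id

pre_login_id = secrets.token_urlsafe(32)   # id the attacker may already know
post_login_id = log_in(pre_login_id, "alice")
print(post_login_id != pre_login_id)  # True: the old id no longer works
```

In a real application the identifier should also travel only in a cookie flagged `Secure` and `HttpOnly`, never in the URL.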
At Divante, the OWASP TOP 10 list, in a slightly simplified form, is one of the checklists that a system must pass before implementation.
A useful tool for finding such vulnerabilities is ZAP (Zed Attack Proxy), created by the OWASP foundation. It acts as a proxy server that analyzes, in the background, the content exchanged between a browser and a server, searching for vulnerabilities. Like Skipfish, it offers automatic scanning and dictionary attacks that can deliver very surprising results, for example the disclosure of files that were supposed to remain secret.
Note, however, that the tests performed by automatic scanners only complement the work of a pentester.
Using ready-made systems (open source or dedicated solutions) allows you to reduce the time spent on testing. We also receive free patches when problems are detected. It is worth considering this strategy when planning any implementation.
Error handling tests
Error handling tests check how the system reacts to incorrect data entered by a user, incorrect URLs, lack of authorization, and network problems (e.g. losing the connection to a database server).
The occurrence of exceptional circumstances is certain – only the moment is unknown. It is worth knowing what message users will see when incorrect data is entered. If a database fails, can you restore it? Will users be informed about the problem in an appropriate way? Pages informing about a maintenance break or switching users to a backup server are just some of the actions that can be taken in the event of a breakdown.
The worst and least professional approach is ignoring emergency situations and relying on automatic error messages (often in English), leaving the confused user without support.
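A minimal sketch of the idea: catch the low-level failure, log the technical details, and show the user a message they can act on. The `fetch_orders` helper and the table name are illustrative:

```python
import sqlite3

def fetch_orders(db_path):
    # Translate a low-level failure into a message the user can act on;
    # the technical details belong in the log, not on the screen.
    try:
        conn = sqlite3.connect(db_path)
        return conn.execute("SELECT * FROM orders").fetchall()
    except sqlite3.Error as exc:
        # log.error("database failure: %s", exc)  # full details for the team
        raise RuntimeError(
            "We are experiencing technical difficulties. "
            "Please try again in a few minutes."
        ) from exc
```

The raw exception text (often English, often revealing internals) stays out of the user interface, while the operations team still gets the full picture in the logs.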
At Divante, in order to avoid such situations, a strike group has been established – S.W.A.T Divante. It consists of 3-4 people and participation is voluntary. Members of the group are expected to keep the SLA for the maintenance of the platforms we service – at all costs and using all available and legitimate means. The group has access to system administrators and an external consulting company. S.W.A.T Divante is the first line of error handling – it reacts at any time of day or night, doing everything possible to solve the problem as soon as possible.
Automated tests – Selenium
In the previous entry, I mentioned the need for functional testing. However, checking the same paths every time can be frustrating for both testers and developers because of the monotony and repetitiveness of the task. This can lead to a more cursory check of a website or to skipping some critical paths. This is where automated tests come to help.
At Divante, we design and develop automated tests at an early stage of implementation. This provides relief to testers and developers, because the machines check all the basic paths in the system, and the specialists can use this time to focus on new features.
Since Divante mainly deals with web applications, it seemed right to move toward automated tests written in Selenium. This tool, created by Jason Huggins, was made public in 2007 under the Apache 2.0 license and quickly gained a lot of supporters, becoming one of the main instruments used in testing web applications.
It consists of three main components:
– Selenium IDE (integrated development environment) – a Firefox plugin
– Selenium RC (remote control) – allows you to run tests written in Selenese in any browser; also called Selenium 1.0
– Selenium WebDriver – an expanded version of Selenium RC, allowing you to program tests in popular programming languages (Java, C#, Python, Ruby, Perl, PHP); also called Selenium 2.0
In order to start writing tests in Selenium IDE, you don’t need to be a software developer. Simply clicking through a scenario with the extension recording is enough for simple actions. Unfortunately, such tests are then hard to adapt to changes in the application. Therefore, consider moving the tests to the next level and developing them using WebDriver from the outset.
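WebDriver tests stay maintainable when the selectors live in one place, for example in page objects. A minimal sketch of the pattern – the `FakeDriver` below is a hypothetical stand-in so the snippet runs without a browser; with real Selenium you would pass `webdriver.Firefox()` instead, and the page object would not change:

```python
class LoginPage:
    """Page object: the one place that knows the login form's selectors."""

    URL = "https://shop.example.com/login"  # hypothetical address

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.get(self.URL)
        self.driver.find_element("id", "login").send_keys(user)
        self.driver.find_element("id", "password").send_keys(password)
        self.driver.find_element("id", "submit").click()


class FakeElement:
    def __init__(self, log, locator):
        self.log, self.locator = log, locator

    def send_keys(self, text):
        self.log.append(("type", self.locator, text))

    def click(self):
        self.log.append(("click", self.locator))


class FakeDriver:
    """Stand-in for a real WebDriver; records actions instead of driving a browser."""

    def __init__(self):
        self.log = []

    def get(self, url):
        self.log.append(("get", url))

    def find_element(self, by, value):
        return FakeElement(self.log, (by, value))


driver = FakeDriver()
LoginPage(driver).log_in("alice", "s3cret")
print(driver.log[0])  # the test opened the login page first
```

When the login form changes, only `LoginPage` needs updating – every test that uses it keeps working, which is exactly what recorded IDE scripts cannot offer.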
Such tests may be connected to a continuous integration system, which allows monitoring applications in the same way their users do. This is an additional measure to ensure the highest SLA quality.
Continuous integration – Jenkins
Continuous integration is not only about providing “fresh” releases and Selenium tests. Thanks to continuous integration we can quickly find out about the condition and security of applications. With a good setup of all the tools, we can say that to a large extent an application (or at least its most crucial areas) tests itself. With each new code package provided by a developer, CI checks the syntax and code quality (detecting repetitions in the code, generating documentation). There are no obstacles to periodically testing (e.g. late at night) the performance of applications while checking code correctness at the same time. Any deviation from the norm is reported appropriately, by e-mail to the team or through instant messaging (HipChat, Slack, Hangout). It is also good practice to use CI to perform regression testing.
System backups – which are usually in place – are a very important element affecting the continuity of operations. Thanks to them we can recover from unexpected events, such as a system failure or a break-in attempt.
What is certain is that we have to create backups. But we also have to be sure that these copies can be restored, that they are free of errors, and that using them is not too difficult and time-consuming.
Such backups typically consist of:
– copies of a system database
– copies of applications files and files uploaded by users
– copies of the configuration and scripts used to run and maintain a system.
If the application is highly developed and uses several servers, restoring it from a backup can be a difficult process. In the case of trading systems no orders may be lost, but a complex application also means complex configuration files and even a whole startup procedure.
Why is it important to perform recovery tests?
On the one hand, they allow us to check that backups are complete and correct. On the other hand, we can see how long the whole procedure takes.
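A recovery test can itself be automated. A minimal sketch: back up a directory of “application files”, restore it elsewhere and verify nothing was lost on the way. All paths and file names are illustrative:

```python
import filecmp
import shutil
import tempfile
from pathlib import Path

# Hypothetical application files to protect.
work = Path(tempfile.mkdtemp())
app_files = work / "app"
app_files.mkdir()
(app_files / "config.ini").write_text("env=production\n")
(app_files / "upload.jpg").write_bytes(b"\xff\xd8fake-image")

# 1. Create the backup archive (here a .tar.gz of the files).
archive = shutil.make_archive(str(work / "backup"), "gztar", root_dir=app_files)

# 2. Restore it to a fresh location, as a disaster-recovery drill would.
restored = work / "restored"
restored.mkdir()
shutil.unpack_archive(archive, restored)

# 3. Compare the restored copy with the original, file by file.
diff = filecmp.dircmp(app_files, restored)
assert not diff.left_only and not diff.right_only and not diff.diff_files
print("backup verified:", sorted(p.name for p in restored.iterdir()))
```

Run periodically (e.g. from CI), such a drill answers both questions at once: the backup is complete, and we know how long the restore takes.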
Applications are created to achieve certain business objectives. Investing in system development must pay off, and such return on costs is usually dependent on servicing a certain number of users and transactions.
In order to minimize the costs and time devoted to testing, we can verify the maximum load a system can handle using automated testing tools. They operate much like security scanners combined with automated behavioral tests.
In other words, serving the expected number of concurrent users is easily tested. The tool we can use to do it is JMeter.
The software is limited only by the performance of the device on which it runs and by the available bandwidth. We can easily generate traffic of up to 10,000 concurrent users on a page, or as many unique entries. The target of such an “attack” may be the system as a whole (individual page entries, moving through certain paths, placing orders) or its components (application or media servers, a database). By performing such tests, we can quickly and easily fine-tune devices to the maximum efficiency of their operation. The tests show how the system would perform in the event of an overload (congestion), or what will happen under a long-term heavy load (e.g. during a sale). In this way, we also get to know the maximum physical capabilities of the hardware.
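The principle behind such a test can be sketched in a few lines: fire N concurrent “requests” and measure throughput and the slowest response. In a real test the handler would be an HTTP call to the tested page (and JMeter would do this at far larger scale); here it is a stub that simulates server-side work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Stand-in for a real request, e.g. urllib.request.urlopen(tested_url).
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server-side processing time
    return time.perf_counter() - start

CONCURRENT_USERS = 50
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    started = time.perf_counter()
    latencies = list(pool.map(handle_request, range(CONCURRENT_USERS)))
    elapsed = time.perf_counter() - started

print(f"{CONCURRENT_USERS} requests in {elapsed:.3f}s, "
      f"slowest {max(latencies) * 1000:.0f} ms")
```

The interesting numbers are the same ones JMeter reports: total throughput and the tail latencies, watched while the concurrency is pushed up until something breaks.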
Load tests should be performed in an environment as close as possible to the production environment or the production environment before deployment. After all, we want to know the real capabilities of a system, its weaknesses and bottlenecks.
In addition to testing, it’s important to monitor server parameters before a release. Monitoring CPU load, memory usage and hard drive usage will allow us to spot problems earlier.
Keep in mind that you cannot attack pages or applications without written permission to do so. This applies to both performance and security testing. According to the Penal Code, failure to comply with this rule can result in a fine, restriction of liberty or imprisonment for up to 2 years.
More in our previous articles:
How Divante cares about quality? Part 1 – Documentation
How Divante cares about quality? Part 2 – Testing
Need our support in developing your e-business? Write to us.