addOns/commonlib/src/main/resources/org/zaproxy/addon/commonlib/internal/vulns/vulnerabilities.xml (18 additions, 18 deletions)
@@ -611,13 +611,13 @@ Web application functionality that is often a target for automation attacks may
 * Web-based SMS message sending - attackers may exploit SMS message sending systems in order to spam mobile phone users
 </desc>
 <solution>Implement strong anti-automation measures to defend against malicious attempts to automate processes. Consider the following strategies:
-1. **CAPTCHA:** Integrate CAPTCHA challenges in critical forms to distinguish between automated bots and human users.
-2. **Rate Limiting:** Enforce rate limiting on various actions to prevent rapid and repetitive requests which makes the automated attacks less effective.
-3. **Behavioral Analysis:** Implement behavioral analysis tools to detect patterns indicative of automation, such as unusual navigation sequences or form submissions.
-4. **Device Fingerprinting:** Use device fingerprinting techniques to identify and block requests from suspicious or known automated sources.
-5. **Multi-Factor Authentication (MFA):** Introduce multi-factor authentication to add an extra layer of security, making it harder for automated attacks to succeed.
-6. **Honeypots:** Deploy honeypots or hidden fields in forms to trick automated bots into revealing their presence, allowing for appropriate action.
-7. **Monitoring and Logging:** Regularly monitor and log user activities to identify and respond to unusual patterns indicative of automation.
+- CAPTCHA: Integrate CAPTCHA challenges in critical forms to distinguish between automated bots and human users.
+- Rate Limiting: Enforce rate limiting on various actions to prevent rapid and repetitive requests, which makes automated attacks less effective.
+- Behavioral Analysis: Implement behavioral analysis tools to detect patterns indicative of automation, such as unusual navigation sequences or form submissions.
+- Device Fingerprinting: Use device fingerprinting techniques to identify and block requests from suspicious or known automated sources.
+- Multi-Factor Authentication (MFA): Introduce multi-factor authentication to add an extra layer of security, making it harder for automated attacks to succeed.
+- Honeypots: Deploy honeypots or hidden fields in forms to trick automated bots into revealing their presence, allowing for appropriate action.
+- Monitoring and Logging: Regularly monitor and log user activities to identify and respond to unusual patterns indicative of automation.
 
 A combination of these measures is often more effective in preventing automated attacks.</solution>
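Of the measures in the updated list, rate limiting is the most mechanical to demonstrate. Below is a minimal sketch of a fixed-window limiter in Java (ZAP's own implementation language); the class name, the 20-requests-per-minute threshold, and keying on a client IP are illustrative assumptions, not anything this vulnerabilities.xml entry prescribes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Fixed-window rate limiter: at most MAX_REQUESTS per client key per window. */
public final class FixedWindowRateLimiter {

    private static final int MAX_REQUESTS = 20;        // illustrative threshold
    private static final long WINDOW_MILLIS = 60_000L; // one-minute window

    private static final class Window {
        final long start;
        int count;
        Window(long start) { this.start = start; }
    }

    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    /** Returns true if the request identified by clientKey (e.g. an IP address) may proceed. */
    public boolean allow(String clientKey) {
        long now = System.currentTimeMillis();
        // Atomically reuse the current window, or open a fresh one once it expires.
        Window w = windows.compute(clientKey, (key, old) ->
                (old == null || now - old.start >= WINDOW_MILLIS) ? new Window(now) : old);
        synchronized (w) {
            return ++w.count <= MAX_REQUESTS;
        }
    }

    public static void main(String[] args) {
        FixedWindowRateLimiter limiter = new FixedWindowRateLimiter();
        for (int i = 1; i <= 25; i++) {
            System.out.println("request " + i + " allowed=" + limiter.allow("203.0.113.7"));
        }
    }
}
```

A fixed window is the shortest correct illustration; real deployments usually prefer a token bucket or sliding window (for example via a library such as Bucket4j, or limits enforced at a reverse proxy) to avoid bursts at window boundaries.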
@@ -1072,15 +1072,15 @@ Before parsing XML files with associated DTDs, scan for recursive entity declara
 Multi-tier fingerprinting is similar to its predecessor, TCP/IP Fingerprinting (with a scanner such as Nmap) except that it is focused on the Application Layer of the OSI model instead of the Transport Layer. The theory behind this fingerprinting is to create an accurate profile of the target's platform, web application software technology, backend database version, configurations and possibly even their network architecture/topology.</desc>
 <solution>Implement measures to obfuscate or disguise information about the system's platform, web application software technology, backend database version, configurations, and network architecture/topology. This can include:
 
-1. **Platform and Software Diversity:** Use a mix of technologies and platforms to make it harder for attackers to build an accurate profile.
+- Platform and Software Diversity: Use a mix of technologies and platforms to make it harder for attackers to build an accurate profile.
 
-2. **False Information:** Introduce fake or misleading information in system responses to confuse fingerprinting tools.
+- False Information: Introduce fake or misleading information in system responses to confuse fingerprinting tools.
 
-3. **Response Randomization:** Randomize certain elements in responses to make it difficult for attackers to consistently identify the system.
+- Response Randomization: Randomize certain elements in responses to make it difficult for attackers to consistently identify the system.
 
-4. **Firewall Rules:** Implement firewall rules to block or limit the effectiveness of fingerprinting techniques.
+- Firewall Rules: Implement firewall rules to block or limit the effectiveness of fingerprinting techniques.
 
-5. **Regular Updates:** Keep software, platforms, and configurations up-to-date to patch known vulnerabilities and prevent accurate identification based on outdated information.
+- Regular Updates: Keep software, platforms, and configurations up-to-date to patch known vulnerabilities and prevent accurate identification based on outdated information.
 
 There is no one-size-fits-all solution, and a combination of these measures may be most effective.</solution>
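The "False Information" bullet above maps naturally onto response-header hygiene. The sketch below assumes a jakarta.servlet container; the filter name and the replacement header values are invented for illustration, and some containers append their own Server header after the filter chain runs, so container-level configuration may still be required.

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;

/** Masks response headers that commonly leak platform and framework details. */
public class HeaderMaskingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (res instanceof HttpServletResponse) {
            HttpServletResponse http = (HttpServletResponse) res;
            // "False Information": advertise a deliberately generic server identity.
            http.setHeader("Server", "webserver");
            // Suppress the framework fingerprint entirely.
            http.setHeader("X-Powered-By", "");
        }
        chain.doFilter(req, res);
    }
}
```

Header masking only addresses the banner-grabbing tier of multi-tier fingerprinting; behavioral signals (error pages, header ordering, timing) still require the other measures in the list.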
@@ -1119,12 +1119,12 @@ A Web application should invalidate a session after a predefined idle time has p
 <alert>Insecure Indexing</alert>
 <desc>Insecure Indexing is a threat to the data confidentiality of the web-site. Indexing web-site contents via a process that has access to files which are not supposed to be publicly accessible has the potential of leaking information about the existence of such files, and about their content. In the process of indexing, such information is collected and stored by the indexing process, which can later be retrieved (albeit not trivially) by a determined attacker, typically through a series of queries to the search engine. The attacker does not thwart the security model of the search engine. As such, this attack is subtle and very hard to detect and to foil - it’s not easy to distinguish the attacker’s queries from a legitimate user’s queries.</desc>
 <solution>Implement measures to secure indexing processes and prevent unauthorized access to sensitive information. The following strategies can be considered:
-1. **Access Controls:** Review and restrict access permissions for the indexing process to ensure it only accesses and indexes files that are intended to be publicly available.
-2. **Robot Exclusion Standard (robots.txt):** Use the robots.txt file to explicitly specify which areas of the website should not be indexed by search engines.
-3. **Authentication Requirements:** Implement authentication mechanisms for accessing sensitive files, ensuring that only authorized users or processes can retrieve them.
-4. **File Encryption:** Encrypt sensitive files to protect their content, even if an unauthorized user gains access to the files.
-5. **URL Redaction:** Avoid including sensitive information in URLs or filenames that might be exposed during indexing.
-6. **Security Headers:** Utilize security headers (e.g., Content Security Policy) to control how browsers and search engines interact with your website.
+- Access Controls: Review and restrict access permissions for the indexing process to ensure it only accesses and indexes files that are intended to be publicly available.
+- Robot Exclusion Standard (robots.txt): Use the robots.txt file to explicitly specify which areas of the website should not be indexed by search engines.
+- Authentication Requirements: Implement authentication mechanisms for accessing sensitive files, ensuring that only authorized users or processes can retrieve them.
+- File Encryption: Encrypt sensitive files to protect their content, even if an unauthorized user gains access to the files.
+- URL Redaction: Avoid including sensitive information in URLs or filenames that might be exposed during indexing.
+- Security Headers: Utilize security headers (e.g., Content Security Policy) to control how browsers and search engines interact with your website.
 
 A proactive and layered approach to security is crucial for safeguarding against Insecure Indexing.</solution>
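To ground the robots.txt and Security Headers bullets, here is a minimal JDK-only sketch using com.sun.net.httpserver; the /private/ path and port 8080 are illustrative assumptions. Since robots.txt is purely advisory, the X-Robots-Tag header and, above all, the access controls from the first bullet remain the actual defense.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

/** Serves a restrictive robots.txt and marks a private area as non-indexable. */
public final class NoIndexDemo {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Robot Exclusion Standard: ask well-behaved crawlers to skip /private/.
        server.createContext("/robots.txt", exchange ->
                respond(exchange, "User-agent: *\nDisallow: /private/\n"));

        // X-Robots-Tag covers crawlers that fetch the page despite robots.txt.
        server.createContext("/private/", exchange -> {
            exchange.getResponseHeaders().set("X-Robots-Tag", "noindex, nofollow");
            respond(exchange, "internal content\n");
        });

        server.start();
        System.out.println("Listening on http://localhost:8080");
    }

    private static void respond(HttpExchange exchange, String body) throws IOException {
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
        exchange.sendResponseHeaders(200, bytes.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(bytes);
        }
    }
}
```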