This topic describes the website protection features supported by the new edition of Web Application Firewall (WAF).
Method | Description |
---|---|
Configure basic web protection | This feature protects your websites against common web attacks based on built-in protection rules. Common web attacks include SQL injection, cross-site scripting (XSS), webshell uploads, command injection, backdoors, invalid file requests, path traversal, and exploitation of common vulnerabilities. |
Configure custom protection policies | This feature allows you to create custom access control rules or rate limiting rules based on precise match conditions. |
Configure a whitelist | This feature allows you to configure a website whitelist. Requests that match the whitelist rules bypass all protection modules of WAF and are directly forwarded to origin servers. |
Configure an IP address blacklist | This feature blocks requests from the IPv4 addresses, IPv6 addresses, or CIDR blocks that you specify based on your business requirements. |
Configure a region blacklist | WAF provides the region blacklist module, which identifies the source region of each request. You can configure the module to block or allow requests from specified regions. This way, malicious requests can be blocked by region. |
Configure the bot management module | The bot management module of WAF allows you to configure anti-crawler rules for websites and apps. If your web pages or HTML5 pages are accessible from browsers, configure anti-crawler rules for the websites to protect your services from malicious crawlers. For native iOS or Android apps, configure app-specific anti-crawler rules. Note that HTML5 apps are not native iOS or Android apps. |
Configure scan protection | This feature identifies the behaviors and characteristics of scanners to prevent attackers or scanning tools from probing your websites. This reduces the risk of intrusion into your web services and blocks invalid scanning traffic. |
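To illustrate how an IP address blacklist evaluates a request, the following is a minimal conceptual sketch in Python. It is not WAF's implementation: the blacklist entries are illustrative documentation-range addresses, and WAF itself matches addresses and CIDR blocks internally.

```python
import ipaddress

# Hypothetical blacklist for illustration only. A WAF IP address blacklist
# accepts individual IPv4/IPv6 addresses as well as CIDR blocks.
BLACKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),   # example IPv4 CIDR block
    ipaddress.ip_network("2001:db8::/32"),    # example IPv6 CIDR block
    ipaddress.ip_network("198.51.100.7/32"),  # a single IPv4 address
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client IP falls in any blacklisted network."""
    addr = ipaddress.ip_address(client_ip)
    # Membership is checked only against networks of the same IP version.
    return any(addr in net for net in BLACKLIST if addr.version == net.version)

print(is_blocked("203.0.113.42"))  # True: inside 203.0.113.0/24
print(is_blocked("192.0.2.1"))     # False: not in any blacklisted range
```

The same containment check works for IPv6 addresses, because `ipaddress` models IPv4 and IPv6 networks uniformly.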
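The rate limiting mentioned under custom protection policies can be pictured as a per-client request counter over a time window. The sketch below is an assumption-laden simplification (a fixed-window counter keyed by client IP), not the algorithm WAF actually uses; the class name and parameters are hypothetical.

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `limit` requests per `window` seconds per client IP.

    Conceptual sketch only; production rate limiters typically use sliding
    windows or token buckets and shared state across nodes.
    """

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        # ip -> [window_start_time, request_count]
        self.counters = defaultdict(lambda: [0.0, 0])

    def allow(self, ip: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        start, count = self.counters[ip]
        if now - start >= self.window:      # window expired: start a new one
            self.counters[ip] = [now, 1]
            return True
        if count < self.limit:              # still under the limit
            self.counters[ip][1] = count + 1
            return True
        return False                        # over the limit: block
```

For example, with `limit=3` and `window=60`, the first three requests from an IP address within a minute are allowed and the fourth is blocked until the window resets.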