RSA WTD Rules for different pages
Hi folks, how are you doing?
Recently I needed to create a rule in WTD to evaluate reset-password pages, because attackers were trying to enumerate them. At the end of the URL the site uses a CPF as the user identifier (a CPF in Brazil is similar to a social security number and has 11 digits), so the URL looks like /v1/password/verifyPassword/01234567890, for instance. Malicious users were trying to enumerate the number at the end, so the URL changes constantly.

Our first thought was to create a regex to evaluate the page and increment a counter in this rule, to be used by another rule that checks the counter and fires an incident when a user hits some threshold. That works, but it could produce false positives: a normal user could hit this page and try it a couple of times too, and while changing their password the rule would still fire because the regex would match that URL as well.

So we wondered whether there is a way to store the previous page and compare it with the next page, firing the incident only if they are different. The rule also has to compare only pages of this kind, e.g. /v1/password/verifyPassword/01234567890 versus /v1/password/verifyPassword/98765432109, and not /v1/success or any other page, so that it fires without false positives. I would like some insights on how I can achieve that. Thanks in advance.
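For reference, the regex the question alludes to could look something like the sketch below (illustrative Python only, not WTD rule syntax): it matches only the verifyPassword pages and captures the 11-digit CPF at the end, so /v1/success and other pages never match.

```python
import re

# Matches only /v1/password/verifyPassword/<11-digit CPF> and captures the CPF.
CPF_URL = re.compile(r"^/v1/password/verifyPassword/(\d{11})$")

def extract_cpf(page: str):
    """Return the CPF portion of the URL, or None if the page doesn't match."""
    m = CPF_URL.match(page)
    return m.group(1) if m else None

print(extract_cpf("/v1/password/verifyPassword/01234567890"))  # -> 01234567890
print(extract_cpf("/v1/success"))                              # -> None
```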
Perhaps you can set a register by IP. That way, if a specific IP tries more than some number of CPF values, you can alert on it. It may be OK for a normal session to encounter, say, 5 CPF values; then I would:
1) Filter on the specific URL.
2) Set a register, called something like CPF_Counter.
3) Increment the value each time the URL is used.
4) Create another rule that triggers when the CPF counter goes over 5.
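A rough Python model of this register idea (the CPF_Counter name and threshold of 5 come from the steps above; this is a sketch, not actual WTD configuration). Counting *distinct* CPF values per IP, rather than raw hits, already reduces the false positives the question worries about, since a normal user retrying the same CPF never advances the counter.

```python
import re
from collections import defaultdict

CPF_URL = re.compile(r"^/v1/password/verifyPassword/(\d{11})$")
THRESHOLD = 5  # from the example above: alert when an IP goes over 5 CPF values

# Plays the role of the per-IP CPF_Counter register: ip -> set of CPF values seen.
cpf_seen = defaultdict(set)

def on_hit(ip: str, page: str) -> bool:
    """Record a page hit; return True once the IP crosses the threshold."""
    m = CPF_URL.match(page)
    if not m:
        return False                 # other pages never affect the counter
    cpf_seen[ip].add(m.group(1))     # retries of the same CPF don't add up
    return len(cpf_seen[ip]) > THRESHOLD
```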
Hi Jeferson, expanding on Brian's idea using registers...there are a few ways to do this. One is by 'chaining' rules together, like this:
Rule 1: Sets a register on each IP that hits a verifyPassword URL. The register records the page value against the IP.
ip.isregister('verifypw_hit1')==0 // this ensures the rule fires only once in the 10 min window
ip > verifypw_hit1 > page > 10 minutes
Rule 2: Checks to see whether any of the IPs captured in Rule 1 hit a SECOND verifyPassword URL in a 10-minute window:
page!=ip.register('verifypw_hit1') && //this compares the URL NOW against the URL set in rule 1...looks for a mismatch
ip > verifypw_hit2 > page > 10 minutes
Rule 3: Checks to see whether any of the IPs captured in Rules 1 & 2 hit a THIRD verifyPassword URL in a 10-minute window:
ip > verifypw_hit3 > page > 10 min
And with this 3rd rule, you could trigger an alert... this gives you an IP that has hit 3 unique verifyPassword URLs in roughly 10 minutes. You can keep the chain running if needed.
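To make the chain's semantics concrete, here is a rough Python equivalent (an assumed model, not WTD syntax): each IP's register is a small history of pages within the 10-minute window, the `page != register` mismatch check from Rules 2 and 3 keeps repeats of the same URL from advancing the chain, and the alert fires once 3 unique verifyPassword URLs are seen.

```python
import time
from collections import defaultdict

WINDOW = 600      # seconds: the 10-minute window from the rules above
CHAIN_LENGTH = 3  # alert on 3 unique verifyPassword URLs

# Models the verifypw_hit registers: ip -> list of (timestamp, page).
hits = defaultdict(list)

def on_verifypw_hit(ip, page, now=None):
    """Record a verifyPassword hit; return True when the chain completes."""
    now = now if now is not None else time.time()
    # Drop entries that fell out of the window (register expiry).
    hits[ip] = [(t, p) for t, p in hits[ip] if now - t < WINDOW]
    pages = {p for _, p in hits[ip]}
    if page not in pages:  # the page != register mismatch check
        hits[ip].append((now, page))
        pages.add(page)
    return len(pages) >= CHAIN_LENGTH
```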
Another approach is to use a little regex in the schema and combine all URLs that begin with "/v1/password/verifyPassword/" into a single URL, so '/v1/password/verifyPassword/12323445' and '/v1/password/verifyPassword/345345345' both simply become '/v1/password/verifyPassword' in Forensics, for rule-writing, searching, etc. You can then look for any IPs hitting '/v1/password/verifyPassword' excessively. You can even add the CPF number as an attribute and use it in searches and rules. Lots of flexibility here to get at what you want.
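The normalization idea above can be sketched like this (again illustrative Python, not the actual schema configuration): every verifyPassword URL collapses to one page name, and the CPF is peeled off as a separate attribute.

```python
import re

# Split the URL into the fixed page name and the variable CPF suffix.
CPF_URL = re.compile(r"^(/v1/password/verifyPassword)/(\d{11})$")

def normalize(page: str):
    """Return (normalized_page, cpf_attribute) for a raw URL."""
    m = CPF_URL.match(page)
    if m:
        return m.group(1), m.group(2)  # single page name + CPF as an attribute
    return page, None                  # other pages pass through unchanged

print(normalize("/v1/password/verifyPassword/01234567890"))
# -> ('/v1/password/verifyPassword', '01234567890')
```

With this in place, rules and searches only ever see one page name, so "IP hit /v1/password/verifyPassword more than N times" becomes a single, simple condition.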
Jon @ Citi