Expected false positives
I wanted to raise a question that came up in a discussion I had with a customer this week.
For any given rule, what percentage of false positives are expected or acceptable? What is the level of false positives that would lead you to start modifying the rule?
I'm interested in hearing as many takes on this as possible. Is this something you monitor carefully?
I will typically fine-tune an alert immediately, until it produces exactly what I'm looking for. I have found that the best way to stay accurate is to filter on specific message IDs. If you filter by taxonomy instead, you will end up with more false positives. Filtering against specific message IDs can be more cumbersome when initially setting up the alert, but in the long run it will be much more reliable.
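To make the trade-off concrete, here is a minimal sketch of why a broad taxonomy filter tends to produce a higher false-positive rate than a filter pinned to specific message IDs. The event fields (`message_id`, `taxonomy`), the ID values, and the `malicious` ground-truth label (as an analyst would assign it after triage) are all hypothetical and not tied to any particular SIEM's schema:

```python
# Hypothetical event stream. Fields and values are illustrative only;
# "malicious" stands in for an analyst's post-triage verdict.
events = [
    {"message_id": 4625, "taxonomy": "authentication/failure", "malicious": True},
    {"message_id": 4625, "taxonomy": "authentication/failure", "malicious": True},
    {"message_id": 4776, "taxonomy": "authentication/failure", "malicious": False},
    {"message_id": 4771, "taxonomy": "authentication/failure", "malicious": False},
]

def false_positive_rate(alerts):
    """Fraction of fired alerts that turned out to be benign."""
    if not alerts:
        return 0.0
    benign = sum(1 for e in alerts if not e["malicious"])
    return benign / len(alerts)

# Broad taxonomy filter: fires on every authentication failure,
# benign or not.
by_taxonomy = [e for e in events if e["taxonomy"] == "authentication/failure"]

# Narrow filter on only the specific message IDs we care about.
watched_ids = {4625}
by_message_id = [e for e in events if e["message_id"] in watched_ids]

print(f"taxonomy filter FP rate:   {false_positive_rate(by_taxonomy):.0%}")
print(f"message-ID filter FP rate: {false_positive_rate(by_message_id):.0%}")
```

In this toy sample the taxonomy filter fires on all four events (50% false positives), while the message-ID filter fires only on the two events that matter (0%). The cost, as noted above, is maintaining that explicit ID list as the environment changes.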
That's a great point, Steve.
Taxonomy is a great way to get started with correlation, but the most false-positive-proof rules are ultimately built by homing in on very specific criteria, including message IDs.