Factors for increasing session.split meta value
Is the session.split meta key incremented by hitting the maximum of Assembler Maximum Size, or by hitting the maximum of Capture Buffer Size? What is the difference between the two, and should they both be set to the same value?
- capture buffer size
- Community Thread
- Forum Thread
- Packet Decoder
- RSA NetWitness
- RSA NetWitness Platform
From my test, session.split was based on the assembler maximum size. I changed the setting on my Decoder to 64 MB and restarted the decoder service.
After restarting the Decoder, I went to a system on my network and downloaded a 512 MB file.
The session was split, and because the maximum size was set to 64 MB, the session.split meta was populated accordingly: 64 MB per split × 8 splits = 512 MB.
I'm not sure whether the Capture Buffer Size also needs to be changed, but I am checking into that.
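The arithmetic in the test above can be sketched as a quick back-of-the-envelope calculation. This is only an illustration of the observed behavior, not an official NetWitness tool; the function name and the assumption that splits follow simple ceiling division are mine.

```python
def expected_splits(transfer_bytes: int, assembler_max_bytes: int) -> int:
    """Estimate how many sessions a large transfer is divided into
    when it exceeds the Decoder's assembler maximum size.
    Assumes simple ceiling division (an illustrative model only)."""
    return -(-transfer_bytes // assembler_max_bytes)  # ceiling division

MB = 1024 * 1024

# The test case from this thread: a 512 MB download with a 64 MB
# assembler maximum size yields 8 split sessions.
print(expected_splits(512 * MB, 64 * MB))  # -> 8
```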
Thanks, Chris. I noticed the same behavior in my tests as well. What sparked the question was the new Live Content "Outbound Session Greater Than 1GB" App Rule. Its description stated that the capture buffer size is what increments the session.split meta value.
By default, the Decoder's capture buffer size is 32 MB. If you have modified this setting, then you may need to tune the session.split condition within the rule. Once a session exceeds the capture buffer size, it is split into a separate session. A meta key called session.split is generated on each split session and is incremented by 1 with each new session. The Decoder's capture buffer size can be found in the NetWitness Suite UI under Administration > Services > Explore > decoder > config > capture.buffer.size.
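The split-and-increment behavior described above can be modeled with a short sketch. This is a toy illustration only, not NetWitness code; I am assuming the first segment carries a session.split value of 0 and each subsequent segment is one higher.

```python
MB = 1024 * 1024

def split_sessions(total_bytes: int, buffer_bytes: int = 32 * MB):
    """Yield (session.split value, bytes in segment) pairs for a
    session that repeatedly exceeds the capture buffer size.
    Toy model: the counter starts at 0 and increments per segment."""
    split = 0
    remaining = total_bytes
    while remaining > 0:
        segment = min(remaining, buffer_bytes)
        yield split, segment
        remaining -= segment
        split += 1

# A 100 MB session against the default 32 MB capture buffer is
# divided into four segments: 32 + 32 + 32 + 4 MB.
for split, size in split_sessions(100 * MB):
    print(split, size // MB)
```

With a larger capture buffer, fewer splits occur, which is why a modified buffer size may require retuning the app rule's session.split threshold.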