Davide Veneziano

Customizing the platform: generate a hash for each log message

Discussion created by Davide Veneziano Employee on Sep 1, 2014
Latest reply on Sep 4, 2014 by Davide Veneziano

With this article I want to follow up on the discussion about customizing the platform to achieve complex use cases, started a couple of weeks ago, and specifically show how to enhance the log parsing mechanism by leveraging the parsers commonly used to analyze a network stream, which are usually more flexible and powerful.

Other posts already include a few examples of how to leverage Flex or Lua parsers to post-process meta values generated by log parsers, but the approach below is more comprehensive, since it applies to the entire raw stream and can address different situations.

Keep in mind that when a packet parser (Flex or Lua) is deployed on a Log Decoder, it has full visibility over the network stream, just as if it were running on a Packet Decoder. Since logs are received by the Log Decoder as syslog messages, to the parser this traffic looks like any other syslog stream.


This implies ad-hoc parsers can be written to overcome whatever limitations the legacy enVision parsing mechanism may have. Once deployed, the packet parser will show up in the Parsers configuration of the Log Decoder like any other (network) parser, and can be used to generate meta on top of what the log parser is already producing.

This means a Lua parser can be used not only to post-process existing meta generated by a log parser but, by accessing the entire raw stream, also to apply completely custom logic to it, potentially achieving a number of complex use cases. Just to name a few:

  • Handling delimiters unsupported by ESI (e.g. <tab>)
  • Splitting and post-processing strings
  • Identifying the event time more accurately
  • Generating multiple values for the same meta key
  • Overcoming the length limitation of a single meta value by splitting it across multiple keys

To prove the concept, I wrote the attached parser, which generates a CRC32 of the entire log message and stores it in the crypto meta key (please note, it is not a good idea to store unique values in an indexed key). This is a common use case when there is a requirement to perform an integrity check per event, rather than per database file, in order to meet specific compliance requirements. Many other complex use cases can be achieved by applying a similar approach.
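For reference, a parser of this kind could be structured along the lines below. This is only an illustrative sketch, not the attached parser itself: it assumes the usual NetWitness Lua parser conventions (nw.createParser, nwlanguagekey.create, nw.getPayload, nw.createMeta, nwevents callbacks), and crc32() is a placeholder for a CRC32 implementation you would need to supply yourself.

```lua
-- Illustrative sketch only: API names follow the NetWitness Lua parser
-- conventions, and crc32() is a placeholder for a CRC32 routine you provide.
local parser = nw.createParser("log_crc32", "CRC32 hash of each raw log message")

-- Declare the meta key the parser will write to
parser:setKeys({
    nwlanguagekey.create("crypto")
})

function parser:onLog()
    -- On a Log Decoder, the payload of a session is the raw log message
    local payload = nw.getPayload()
    if payload then
        local raw = payload:tostring(1, payload:len())
        if raw then
            -- Format the checksum as fixed-width hex and register it as meta
            nw.createMeta(self.keys.crypto, string.format("%08x", crc32(raw)))
        end
    end
end

-- Run the callback once per session (i.e. once per log event)
parser:setCallbacks({
    [nwevents.OnSessionBegin] = parser.onLog
})
```

The key point is that the callback fires once per log event and sees the complete raw message, so any transformation (hashing, splitting, re-parsing) can be applied before the meta is written.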

To deploy the parser, upload the two attached files to the /etc/netwitness/ng/parsers directory and reload the parsers from the Explore view (/decoder/parsers reload).
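If you prefer to script the deployment, the steps might look like the sketch below. The file names are placeholders for the two attachments, and the REST port (50102 is a typical Log Decoder default) and message syntax are assumptions to verify against your own deployment; the Explore view path given above remains the authoritative way to reload.

```shell
# Copy the attached parser files to the Log Decoder
# (<parser>.* are placeholders for the two attachment file names)
scp <parser>.lua <parser>.flex root@logdecoder:/etc/netwitness/ng/parsers/

# Trigger a parser reload, equivalent to /decoder/parsers reload in Explore.
# 50102 is the usual Log Decoder REST port; adjust for your environment.
curl -u admin 'http://logdecoder:50102/decoder/parsers?msg=reload'
```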

Disclaimer: please DO NOT consider what is described and attached to this post as RSA official content. As with any other unofficial material, it has to be tested in a controlled environment first, and its impact has to be evaluated carefully before being promoted to production. Also note that the content I publish is usually intended to prove a concept rather than to be fully working in every environment. As such, handle it with care.