Data Privacy: Configure Data Retention

Document created by RSA Information Design and Development on Sep 18, 2017. Last modified by RSA Information Design and Development on Aug 1, 2019. Version 9.
 

A NetWitness Platform user with the role of Administrator can configure NetWitness Platform to ensure that sensitive data is removed after a specific retention period, regardless of system ingest rate. For instance, the policy might be to keep packets (both raw data and metadata) for no more than 24 hours, and to keep some logs (both raw data and metadata) for up to seven days. If sensitive data reaches another database (on the Reporting Engine, Malware Analysis, Event Stream Analysis, or NetWitness Server hosts), data retention can be managed there as well. The administrator needs to set up each service individually across all NetWitness Platform components (except Event Stream Analysis) based on policy and data privacy laws.

Sensitive data may also be in cache.

  • Brokers can cache data, and this cache must be cleared by configuring an independent cache rollover and performing other cache removal as required. The administrator can configure cache rollover for a Broker by editing the Scheduler file in the Services Config view Files tab.
  • Investigate and the NetWitness Server cache data; this cache is cleared automatically every 24 hours.
  • When the Data Privacy Officer (DPO) exports data, the exported data is saved on the NetWitness Server in the jobs queue. To clear this data, the administrator or DPO should clean up the jobs queue on a regular basis.

Data Retention

You can schedule a recurring job for Decoder, Log Decoder, and Concentrator services in NetWitness Platform to check if data is ready to be removed. The Data Retention Scheduler provides a means to configure basic scheduling (see below), and advanced Scheduler settings are also available by editing the Scheduler file in the Services Config view Files tab or the node in the Explorer view.

The Archiver has flexible data storage and retention options. You can place different types of log data into individual collections and manage them separately. These collections enable you to specify how much of the total storage space to use and how many days to store the logs in the collection. You can also determine whether to delete the log data or to move it to offline cold storage after it reaches the maximum specified storage space for the collection.

For example, you can put sensitive information in a collection and configure a limitation on how long to keep it, such as 30 days. To delete the data after 30 days, you would not enable warm or cold storage for that collection. 

Deleting versus Retaining Log Data

Administrators can configure hot, warm, and cold tiered storage on an Archiver. Cold storage contains the oldest log data that is either required for the operation of the business or mandated by regulatory requirements. When a collection reaches its retention limits for hot and warm storage, NetWitness Platform deletes the log data from hot or warm storage. With cold storage configured, a copy goes into cold storage before the logs are deleted from hot or warm storage. You can choose whether to enable cold storage for each log storage collection. NetWitness Platform does not manage cold storage. 

Enable or Disable Cold Storage in a Log Storage Collection

When log data in a collection reaches its retention limits for hot and warm storage, you can delete it or move it to offline (cold) storage. 

To enable or disable cold storage in a log retention storage collection on an Archiver:

  1. Go to ADMIN > Services.
  2. Select the Archiver service and select Actions icon > View > Config.
  3. Click the Data Retention tab.

    Archiver Data Retention tab

  4. In the Collections section of the Data Retention tab, select a collection and click Edit icon.

    The Collection dialog is displayed.

    Collection dialog

    Note: If the maximum storage size of the collection does not allow full data retention for the specified retention period, NetWitness Platform deletes the data, or moves it to warm or cold storage if those are specified in the collection.

  5. Enable or disable cold storage:

    • To delete log data when the collection reaches its specified retention limits, clear the Cold Storage checkbox.
    • To move log data to offline storage when the collection reaches its specified retention limits, select the Cold Storage checkbox. 
  6. Click Save.

Configure Log Retention and Storage on an Archiver

To configure log retention and storage on an Archiver, see "Configure Archiver Storage and Log Retention" in the Archiver Configuration Guide.

Schedule a Recurring Job to Check Data Retention Thresholds

The data retention scheduler configuration ensures that the data residing in the Decoder, Log Decoder, and Concentrator components is deleted after a certain time. For example, data retention on a Decoder might be configured to execute a check every 15 minutes to determine if the specified duration threshold has been met. If the threshold is met, NetWitness Platform deletes data older than 4 hours in the relevant databases.
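The duration-based check described above can be sketched as follows. This is an illustrative model of the logic only, not NetWitness product code; the session IDs and timestamps are hypothetical.

```python
from datetime import datetime, timedelta

def find_expired_sessions(sessions, retention=timedelta(hours=4)):
    """Return session IDs whose timestamp is older than the retention threshold.

    Illustrative sketch of the duration check that the scheduler runs at a
    configured interval (for example, every 15 minutes).
    """
    cutoff = datetime.utcnow() - retention
    return [sid for sid, ts in sessions.items() if ts < cutoff]

# Hypothetical example: two sessions, one older than the 4-hour threshold.
now = datetime.utcnow()
sessions = {27974: now - timedelta(hours=5), 27975: now - timedelta(minutes=30)}
print(find_expired_sessions(sessions))  # → [27974]
```

Only the session older than the threshold is selected for deletion; data within the retention period is untouched.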

Caution: The schedule overwrites any previous schedule and becomes effective immediately. If the retention period is decreased, the data exceeding this retention period is removed.

For a  Decoder, Log Decoder, or Concentrator:

  1. Go to ADMIN > Services.
  2. In the Services view, select a Decoder, Log Decoder, or Concentrator service and select Actions icon > View > Config.
  3. Click the Data Retention Scheduler tab.

    Decoder Data Retention Scheduler tab

  4. Set the threshold based on the duration of time the data has been stored or the date on which the data was stored. Do one of the following:

    1. To define the duration of time that data can be stored before removal, select Duration, and then specify the number of days (365 maximum), hours (24 maximum), and minutes (60 maximum) that have elapsed since the time stamp on the data.
    2. To define the removal of data based on the date of the timestamp, select Date, and then specify the monthly date and time in the Calendar and Time fields.
  5. Do one of the following to configure the schedule for checking rollover criteria:

    1. If you want to set a regular interval at which the scheduled database check occurs, select Interval and specify the Hours and Minutes between the scheduled checks.

    2. If you want to set a regular date and time at which the scheduled database check occurs, select Date and Time and specify the system clock time in hh:mm:ss format for the rollover.

      • To specify the day, select Every Day, Weekdays, or Weekends. The Scheduler defaults to Every Day.

      • To specify a different set of days of the week, select Custom and click on each day on which the database check occurs.


  6. Click Apply to complete the configuration.
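The "Date and Time" scheduling options above (a fixed time of day, restricted to Every Day, Weekdays, or Weekends) can be modeled as follows. This is an illustrative sketch of the scheduling logic only, not NetWitness code; the dates and times are hypothetical.

```python
from datetime import datetime, time, timedelta

# Day filters from the Scheduler UI; Monday=0 .. Sunday=6.
DAY_SETS = {
    "every_day": set(range(7)),
    "weekdays": set(range(5)),
    "weekends": {5, 6},
}

def next_check(now, at=time(2, 0, 0), days="every_day"):
    """Next database check: the first allowed day at the configured time."""
    allowed = DAY_SETS[days]
    candidate = datetime.combine(now.date(), at)
    if candidate <= now:          # today's slot already passed
        candidate += timedelta(days=1)
    while candidate.weekday() not in allowed:
        candidate += timedelta(days=1)
    return candidate

# A weekends-only 02:00 check evaluated on a Friday runs Saturday at 02:00.
now = datetime(2019, 8, 2, 3, 0, 0)   # Friday
print(next_check(now, days="weekends"))  # → 2019-08-03 02:00:00
```

The same skip-forward logic applies to the Custom day selection, with `allowed` built from the checked days.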

Purge Data Using String and Pattern Redaction Option

If data resides where it should not (for example, classified data in your unclassified Network Decoder), purging it using string and pattern redaction is the fastest way to wipe it, and is ideal for smaller amounts of data. This method requires that you identify the data to be purged by using a query in the Investigate view, the REST API interface, or the NwConsole. Using the Investigate view requires additional steps to clear the user interface cache on the admin server. This procedure includes the extra steps needed when you find the data to be purged using the Investigate view.

Note: If a large amount of data needs to be purged, it is best to start with the data on the storage component. Refer to Optional Data Overwriting Options for additional information.

To purge the data: 

  1. Go to INVESTIGATE > Event Analysis or INVESTIGATE > Events and find the events that you want to purge. In the Event Analysis view you can see the session ID in the reconstruction header. You need to scroll down in the Meta panel to see the remote ID.
    Example of an event in the Event Analysis view
    In the Events view, both IDs are visible in the Events list.
    Example of the Events view
  2. Make a note of the session ID and the remote ID of each event to be deleted. In the example above, the session IDs are 27974 and 27975. The remote IDs are 6081 and 6082.
  3. Starting with the Concentrator, view the service using the REST API. Refer to the RESTful API User Guide for instructions. Go to the Master Table of Contents to find all NetWitness Platform Logs & Network 11.x documents.
    In the REST API, enter the wipe command with the following parameters:
    • session=<uint64>: The session ID whose packets will be wiped.
    • payloadOnly=<bool, optional>: If payloadOnly is true, only the packet payload will be overwritten. The default value is true.

    • pattern=<string, optional>: The pattern uses all zeros by default. If you use a string as your pattern, it will not overwrite any meta values that are not a string type. Therefore, it is best to keep the pattern as a numerical value.

    • metaList=<string, optional>: Comma-separated list of metadata to wipe. The default (empty) is all metadata.

    • source=<string, optional, {enum-any:m|p}>: The types of data to wipe: metadata, packets, or both. The default is packets only. The best option here is to wipe both metadata and packets (m,p).
      Example of the Concentrator wipe command

  4. For each Network Decoder and Log Decoder in the path of the query, use the wipe command and the remote session IDs to overwrite the raw sessions on disk.
    Example of the Decoder wipe command
  5. To ensure that the indexed meta values that were stored on the Concentrator index are removed, rebuild the index. This can take a long time but is necessary because the wipe command does not remove any data from the Concentrator index. Refer to the Core Database Tuning Guide for instructions.
  6. (Optional) If the data was discovered using the Investigate view, additional steps are necessary:
    1. To clear the query path to prevent an analyst from bringing up an old copy of the session, go to ADMIN > System > Investigation and select Clear Cache for All Services.
      Example of the clear cache for all services option
    2. To clear the cache for Event Analysis reconstruction, restart the Investigate service.
      Restart Investigate service
    3. To clear the cache for the Concentrator and Network Decoder, go to ADMIN > Services > Concentrator, right-click sdk > Properties, and ADMIN > Services > Decoder, right-click sdk > Properties, and execute the delCache command. The figure below shows this path for the Concentrator.
      Execution of the delCache command
      An analyst attempting to view the same session in Investigate will see that the raw data is unavailable for viewing and the metadata has been overwritten.
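The wipe calls in steps 3 and 4 can be composed as REST request URLs built from the parameters listed above. The following sketch is illustrative only: the hostnames, ports, and the /sdk endpoint path are assumptions, so verify the actual endpoint for each service in the RESTful API User Guide before use.

```python
from urllib.parse import urlencode

def wipe_url(host, port, session, source="m,p", payload_only=False,
             pattern=None, meta_list=None):
    """Build a REST wipe request URL from the documented wipe parameters.

    The /sdk path and the example hosts/ports below are assumptions for
    illustration; confirm them in the RESTful API User Guide.
    """
    params = {"msg": "wipe", "session": session, "source": source,
              "payloadOnly": str(payload_only).lower()}
    if pattern is not None:       # default pattern (all zeros) if omitted
        params["pattern"] = pattern
    if meta_list is not None:     # default (empty) wipes all metadata
        params["metaList"] = meta_list
    return f"http://{host}:{port}/sdk?{urlencode(params)}"

# Wipe session 27974 (metadata and packets) on a hypothetical Concentrator,
# then the corresponding remote session 6081 on a hypothetical Decoder.
print(wipe_url("concentrator.example.com", 50105, 27974))
print(wipe_url("decoder.example.com", 50104, 6081))
```

Note that `source="m,p"` wipes both metadata and packets, matching the recommendation in step 3, and `payload_only=False` overwrites the full packet rather than the payload alone.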
