SDK Configuration Nodes

This topic describes the SDK configuration nodes that affect the database. Each Core service also has some additional configuration items that relate to the database but do not actually affect how the database stores or retrieves data. These settings exist in the /sdk/config folder.


max.concurrent.queries

This setting controls how many query operations are allowed on the database simultaneously. Allowing more simultaneous query operations can improve overall responsiveness for more users, but if the query load of the Core service is very I/O bound, a high max.concurrent.queries value can have a detrimental effect. The recommended value is near the number of logical cores on the system, including hyper-threading. Thus, for an appliance with 16 physical cores, the value should be somewhere close to 32. Subtract a few for aggregation threads and general system response threads, and a few more if this is a hybrid system (for example, both a Decoder and Concentrator running on the same appliance). There is no magic number, but somewhere between 16 and 32 should work well.
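The sizing guidance above can be sketched as a small calculation. This is an illustrative helper, not product code; the function name and the number of threads reserved for aggregation and system response are assumptions chosen for the example.

```python
# Hypothetical sizing helper for max.concurrent.queries; the reserve counts
# below are illustrative assumptions, not values from the product.
def recommended_max_concurrent_queries(logical_cores, hybrid=False):
    """Start from the logical core count (physical cores x 2 with
    hyper-threading), subtract a few for aggregation and general system
    response threads, and a few more on a hybrid appliance."""
    value = logical_cores - 4      # reserve for aggregation/system threads
    if hybrid:
        value -= 4                 # extra reserve when two services share the box
    return max(value, 1)

# A 16-core appliance with hyper-threading exposes 32 logical cores:
print(recommended_max_concurrent_queries(32))               # dedicated appliance
print(recommended_max_concurrent_queries(32, hybrid=True))  # Decoder + Concentrator
```

Both results land in the 16 to 32 range suggested above.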


max.query.memory

This setting limits the amount of RAM that each query on a Core service can use. A query that requires more than the configured value is stopped. The default depends on the appliance and is typically around 5.71 GB. Increase this setting if queries require more memory to populate their results completely. Queries do not return partial results: if a query hits the memory limit, it is automatically canceled by the system.

Configuration and Limitation

There is no particular limit on the value this setting can be changed to, so it must be configured correctly. It is recommended to change the setting carefully.

For example, the max.concurrent.queries setting controls the number of query calls that can execute simultaneously. If it is set to 11 and 14 calls arrive, 11 are in the 'executing' state and 3 are in the 'pending' state. Each of the 11 executing queries can occupy up to the amount of memory configured in max.query.memory. If you increase max.query.memory, you are giving the service permission to allocate more memory per query when required.
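The executing/pending split described above can be modeled in a few lines. This is an illustrative sketch of the behavior, not product code.

```python
# Illustrative model of how submitted query calls split into executing and
# pending states under max.concurrent.queries.
def query_states(submitted, max_concurrent_queries):
    executing = min(submitted, max_concurrent_queries)
    pending = submitted - executing
    return executing, pending

# 14 calls against a limit of 11: 11 execute, 3 wait.
print(query_states(14, 11))  # -> (11, 3)
```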

Conditions and Checks

For example:

  • The total memory in the machine is 120 GB.
  • Ideal state memory usage is X GB (memory used by the OS when services are running but no queries are being made, no reports are running, and no investigation is being done).
  • Maximum available memory for investigation operations would be (120 {Total Memory} - X) GB.

Now, if you want to increase max.query.memory from 5.71 GB to N GB, you must make sure that the following condition is met:

"N" * max.concurrent.queries < (120-X-10) GB

Whenever you increase max.query.memory, check the above condition. If the margin is small, reduce max.concurrent.queries from 11 to a smaller value, probably 6 or 7. Otherwise there is a chance of hitting an out-of-memory error and eventually crashing the service. It is recommended to keep 10 GB accounted for other operations.
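The condition above can be checked with a small calculation. This is an illustrative sketch; the function name is an assumption, and all values are in GB.

```python
# Hypothetical check for the sizing condition above (all values in GB):
# N * max.concurrent.queries must stay below total RAM minus idle usage (X)
# minus a 10 GB reserve for other operations.
def max_query_memory_is_safe(n_gb, max_concurrent_queries,
                             total_gb, idle_gb, reserve_gb=10):
    return n_gb * max_concurrent_queries < total_gb - idle_gb - reserve_gb

# 120 GB appliance with 20 GB idle usage:
print(max_query_memory_is_safe(5.71, 11, 120, 20))  # 62.81 < 90 -> True
print(max_query_memory_is_safe(10, 11, 120, 20))    # 110 > 90  -> False
```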


max.pending.queries

This setting controls the backlog size for the query engine of the database. Larger values allow the database to queue more operations for execution. A queued query does not make progress on its execution, so it may be more useful to make the system produce errors when the queue is full, rather than allowing the queue to grow very large. However, on a system that is primarily performing batch operations such as reports, there may be no detrimental effect to having a large queue.
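The fail-fast behavior described above, where a full backlog produces an error instead of growing without bound, can be sketched with a bounded queue. This is an illustration of the concept, not how the service implements it.

```python
# Sketch of a bounded query backlog that rejects new work when full,
# rather than letting the queue grow indefinitely. Illustrative only.
import queue

backlog = queue.Queue(maxsize=2)   # a tiny backlog limit for illustration
backlog.put("query-1")
backlog.put("query-2")
try:
    backlog.put("query-3", block=False)   # backlog full: fail fast
except queue.Full:
    print("backlog full; query rejected")
```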


cache.window.minutes

This setting controls a feature of the query engine that is intended to improve query responsiveness when there are a large number of simultaneous users. For more information on cache window, see Optimization Techniques.


max.where.clause.cache

The where clause cache controls how much memory can be consumed by query operations that need to produce a large temporary data set to evaluate sorting or counting. If the where clause cache size is overflowed, the query still works, but it is much slower. If the where clause cache is too large, it is possible for queries to allocate so much memory that the service would be forced into swap or run out of memory. Thus, this value multiplied by max.concurrent.queries should always be much less than the size of physical RAM. This setting understands sizes in the form of a number followed by a unit, for example 1.5 GB.
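The "number followed by a unit" format and the RAM-headroom rule above can be sketched as follows. This is an illustrative parser and check, not the service's implementation; the 50% headroom fraction is an assumption standing in for "much less than physical RAM".

```python
# Hypothetical size parser and safety check; not product code.
UNITS = {"KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def parse_size(text):
    """Parse a size string such as '1.5 GB' into bytes."""
    number, unit = text.split()
    return int(float(number) * UNITS[unit.upper()])

def cache_fits_in_ram(cache_size, max_concurrent_queries, physical_ram,
                      headroom=0.5):
    """Require worst-case total cache usage to stay within an illustrative
    fraction of physical RAM."""
    return (parse_size(cache_size) * max_concurrent_queries
            <= parse_size(physical_ram) * headroom)

print(parse_size("1.5 GB"))                       # 1610612736 bytes
print(cache_fits_in_ram("1.5 GB", 11, "120 GB"))  # 16.5 GB vs 60 GB -> True
```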


max.unique.values

The maximum unique values setting limits how much memory can be consumed by the SDK Values function. SDK Values produces a sorted list of unique values. In order to produce accurate results, it may need to merge together large numbers of unique values from many slices. This merged set of values must be held in memory, so this parameter exists to put a limit on how much memory the merged value set can consume. The default value will limit memory usage to approximately 1/10th of total RAM.
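The merge-with-a-cap behavior described above can be sketched with a sorted merge over slices. This is an illustrative model only; it caps the number of merged values as a stand-in for the real memory limit.

```python
# Illustrative sketch: merge sorted unique values from several slices while
# enforcing a cap on the merged set (a stand-in for the memory limit).
import heapq

def merge_unique(slices, max_unique_values):
    merged = []
    last = object()                      # sentinel: matches nothing
    for v in heapq.merge(*slices):       # slices are already sorted
        if v != last:
            if len(merged) >= max_unique_values:
                raise MemoryError("max unique values exceeded")
            merged.append(v)
            last = v
    return merged

print(merge_unique([[1, 3, 5], [2, 3, 6]], max_unique_values=10))
# -> [1, 2, 3, 5, 6]
```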

query.level.1.minutes, query.level.2.minutes, query.level.3.minutes

These settings are available in 10.4 and earlier versions.

In versions 10.4 and earlier, the Core database supports three query priority levels. Each user is assigned to one of the priority levels, so up to three groups of users can be defined for the purposes of performance tuning. These settings control how long queries from each user level are allowed to run. For example, lower-privileged users may have a lower value so that they cannot consume all the resources of the Core service with long-running queries.


query.timeout

This setting is available in 10.5 and later versions.

Query levels have been replaced in versions 10.5 and later with per-user-account query timeouts. For trusted connections, these timeouts are configured on the NetWitness Platform server. For accounts on Core services, there is a new config node under each account called query.timeout, which is the maximum amount of time, in minutes, that each query can run. Setting this value to zero means no query timeout will be enforced by the Core service.
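The zero-disables-the-timeout rule can be stated as a tiny predicate. This is an illustrative sketch of the semantics, not product code.

```python
# Illustrative model of the query.timeout rule: the value is in minutes,
# and 0 means no timeout is enforced.
def query_timed_out(elapsed_minutes, query_timeout_minutes):
    if query_timeout_minutes == 0:     # zero disables the timeout entirely
        return False
    return elapsed_minutes > query_timeout_minutes

print(query_timed_out(90, 60))  # True: ran past a 60-minute limit
print(query_timed_out(90, 0))   # False: no timeout enforced
```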


This setting will be deprecated in a future release. Use max.query.memory to limit overall query memory usage.

This setting imposes a limit on how many sessions can be scanned by a single query. For example, if a user selects all meta from the database, the database stops processing results once the number of sessions read for the query reaches this configuration value. A value of 0 disables this limit.

The number of sessions needed to fully process a query is equal to the number of sessions that match the WHERE clause of the query, assuming that all terms in the where clause have a suitable index. If there are terms in the where clause that are not indexed, the database has to read more sessions and meta, and reaches this limit sooner.
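The scan-limit behavior can be modeled as a loop that stops once the configured number of sessions has been read, regardless of how many matched. This is an illustrative sketch, not the database's implementation.

```python
# Illustrative model of the session scan limit: the query stops once the
# number of sessions read reaches the limit (0 disables the limit), so
# unindexed where-clause terms, which force more reads, hit it sooner.
def scan_sessions(sessions, matches, max_session_scan):
    results, scanned = [], 0
    for s in sessions:
        if max_session_scan and scanned >= max_session_scan:
            break                      # limit reached: stop processing
        scanned += 1
        if matches(s):
            results.append(s)
    return results, scanned

hits, read = scan_sessions(range(100), lambda s: s % 2 == 0,
                           max_session_scan=10)
print(hits, read)  # only the matches found within the first 10 reads
```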


This setting will be deprecated in a future release. Use max.query.memory to limit overall query memory usage.

This setting imposes a limit on the number of unique groups collected in a single query. For example, if a query has a group by clause with multiple metas that have high unique value counts, the amount of memory needed for that query could easily outpace the amount of RAM available on the server. Thus, this limit exists to prevent out-of-memory conditions from happening.

Setting a value of 0 disables this limit.
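The group cap can be modeled as a group-by count that fails once the number of distinct groups exceeds the limit. This is an illustrative sketch of the concept, not product code.

```python
# Illustrative model of the unique-group cap: a GROUP BY style count that
# raises once distinct groups exceed the configured limit (0 = no limit).
def group_counts(rows, key, max_query_groups):
    counts = {}
    for row in rows:
        k = key(row)
        if k not in counts and max_query_groups \
                and len(counts) >= max_query_groups:
            raise MemoryError("max query groups exceeded")
        counts[k] = counts.get(k, 0) + 1
    return counts

rows = ["dns", "http", "dns", "ssl", "http"]
print(group_counts(rows, key=str, max_query_groups=5))
# -> {'dns': 2, 'http': 2, 'ssl': 1}
```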

This is a decoder-only setting that affects access to the packet database. When it is set to a value greater than 0, the decoder attempts to throttle packet reads when it detects packet contention on the packet database. Higher numbers provide more throttling. Changes take effect immediately.

cache.dir, cache.size

All NetWitness Platform Core services maintain a small file cache of raw content extracted from the device. These parameters control the location (cache.dir) and size (cache.size) of this cache.
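A size-bounded cache of this kind is typically maintained by evicting the oldest entries once the total exceeds the limit. The sketch below is an illustrative model of that idea; the entry names, sizes, and eviction policy are assumptions for the example, not the service's actual behavior.

```python
# Illustrative LRU-style eviction for a size-bounded file cache, in the
# spirit of cache.dir / cache.size. Not product code.
from collections import OrderedDict

def evict_to_fit(entries, cache_size):
    """entries maps name -> size in bytes, oldest first; drop the oldest
    entries until the total fits within cache_size."""
    cache = OrderedDict(entries)
    while cache and sum(cache.values()) > cache_size:
        cache.popitem(last=False)      # evict the oldest entry
    return cache

cache = evict_to_fit({"a.raw": 400, "b.raw": 300, "c.raw": 500},
                     cache_size=900)
print(list(cache))  # -> ['b.raw', 'c.raw']
```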


parallel.values

This setting is available in 10.5 and later versions.

This setting allows SDK-values operations to be executed in parallel. If it is set to 0, parallel execution is disabled. If it is set to a value greater than 0, it represents the number of threads created for each SDK-values operation. The maximum value is the number of logical CPUs available when the process started.

Setting a higher value for parallel.values is useful when there are small numbers of simultaneous users, since it will allow for more complex Investigations to be executed more quickly. If there are many simultaneous users, it is better to use a low value here, since there will be many independent SDK-values operations executed simultaneously.
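The two rules above, zero disables parallelism and the effective value is capped at the logical CPU count, can be sketched as follows. The function name is an assumption for illustration.

```python
# Illustrative model of the parallel.values rule: 0 means serial execution,
# and any positive value is capped at the number of logical CPUs seen at
# process startup.
import os

def effective_parallel_values(configured, logical_cpus=None):
    logical_cpus = logical_cpus or os.cpu_count()
    if configured <= 0:
        return 1                       # 0 disables parallel execution
    return min(configured, logical_cpus)

print(effective_parallel_values(8, logical_cpus=4))  # capped at 4 threads
print(effective_parallel_values(0, logical_cpus=4))  # serial execution
```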


parallel.query

This setting is available in 10.5 and later versions.

This configuration is similar to the parallel.values setting in that the maximum value is the number of logical CPUs. Setting parallel.query to a specific value should take into account the number of simultaneous users to maximize CPU utilization without consistently exceeding available resources.

Setting a higher value for parallel.query is useful when there are small numbers of simultaneous users and queries, since it will allow more complex queries to be executed more quickly. If there are many simultaneous users and queries, it is better to use a low value, since there will be many independent SDK-query operations executed simultaneously.

Query operations are limited by the meta database read rate, so setting parallel.query to a value higher than 4 is unlikely to produce dramatically better results than the default value of 0. The best number to use for parallel.query will depend on the type of storage attached. Experiment with different values of parallel.query to determine the best results for your storage system.
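The experiment suggested above amounts to timing a representative workload at several candidate settings and keeping the fastest. This sketch uses a trivial stand-in workload and generic thread pools; it is an illustration of the measurement loop, not a way to drive the service itself.

```python
# Illustrative benchmark loop for choosing a parallel.query candidate:
# time the same workload at several parallelism levels and keep the fastest.
# The workload below is a trivial stand-in, not a real Core query.
from concurrent.futures import ThreadPoolExecutor
import time

def run_workload(threads):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=max(threads, 1)) as pool:
        list(pool.map(lambda _: sum(range(10_000)), range(8)))
    return time.perf_counter() - start

timings = {n: run_workload(n) for n in (0, 2, 4)}   # candidate settings
best = min(timings, key=timings.get)
print("fastest candidate:", best)
```

On real storage, repeat each measurement several times and compare medians, since single runs are noisy.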