What is the command to see how full the drives in a volume are? For example, we have 7 SSDs in our Log Concentrator and we just expanded to another DAC by extending the LVM /var/netwitness/concentrator/index volume. I'd like to know whether the new drives are being written to and how full each drive in the RAID5 pool is. I can't seem to find the right command; looking at the physical drive info just shows 186GB (SSD) but nothing about how full the actual drives are.
Also, has anyone extended Concentrator DACs before and seen little to no performance gain? I'm curious whether this is because the original 7 SSDs are saturated and only the new 7 SSDs are being written to, and the FIFO'ing of the data doesn't occur until much later since there is no free space left to write new index data to. Ultimately, our Last Hour and Last 24 Hour queries always hit the original 7 SSDs and are never distributed across the 14 SSDs in the 2 Concentrator DACs. I'm trying to figure out a way to get even distribution without an index reset.
I am pretty sure the nwmakearray script is what you are looking for. I believe it should auto-expand everything correctly, and it will actually expand the index by creating a new LVM. Normally, if you are extending to another DAC and it is index0, it should write every other file to each device.
You're not going to find that information by querying the MegaRAID. Note that it's not up to the RAID controller how the blocks are allocated; the RAID controller has no knowledge of which blocks are used or unused by the filesystem.
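To see usage at the filesystem and LVM layers instead, something along these lines should work (the mount point is the one from this thread; adjust names to your layout):

```shell
# Filesystem-level usage for the index mount point
df -h /var/netwitness/concentrator/index

# LVM view: which physical volumes (RAID devices) back the LV,
# and how much of each PV is allocated vs. free
pvs -o pv_name,vg_name,pv_size,pv_free
lvs -o lv_name,vg_name,lv_size,devices

# Per-device I/O activity, to confirm the new RAID device is
# actually receiving writes (5-second sampling interval)
iostat -xd 5
```

If `pvs` shows the new PV still mostly free, or `iostat` shows no write activity on the new device, that confirms XFS is not yet touching the new blocks.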
If you extend a filesystem by growing an existing LV, the new blocks are linearly appended to the blocks already allocated for the filesystem.
Thus, the XFS filesystem now sees a large block device with the first half mapped to one RAID volume and the second half mapped to the other. It is completely at the discretion of XFS to determine how the blocks are used across the volume, but generally speaking XFS will not move data in existing blocks, so it's very likely it will only write new data to the newly available blocks on the new RAID device.
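For reference, the linear extension described above typically looks like this (the device and VG names here are hypothetical placeholders):

```shell
# Add the new RAID device as a physical volume and grow the VG
pvcreate /dev/sdb                    # hypothetical new RAID device
vgextend netwitness_vg00 /dev/sdb    # hypothetical VG name

# Grow the index LV into the new PV; the new extents are appended
# linearly after the existing ones
lvextend -l +100%FREE /dev/netwitness_vg00/index

# Grow the mounted XFS filesystem to fill the enlarged LV
xfs_growfs /var/netwitness/concentrator/index
```

Because the extents are appended rather than interleaved, nothing about this procedure causes XFS to spread existing or new data across both devices.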
To get the desired behavior of round-robin writes across the RAID devices, you have two options:
1. As Sean suggested: rebuild the index as two separate LVs.
2. If you want to keep the entire index on a single filesystem, you can rebuild the index LV with a non-contiguous allocation policy and get RAID-0 style behavior from LVM.
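Option 2 might be sketched as follows, assuming two backing PVs and a hypothetical VG name; one common way to get RAID-0-style behavior from LVM is a striped LV. Note this destroys the existing LV contents, so move the data off first:

```shell
# Recreate the index LV striped across both RAID devices so LVM
# alternates extents between them (RAID-0 style distribution)
lvcreate --stripes 2 --stripesize 256k -l 100%FREE \
    -n index netwitness_vg00 /dev/sda /dev/sdb

# Make a fresh XFS filesystem and mount it at the index path
mkfs.xfs /dev/netwitness_vg00/index
mount /dev/netwitness_vg00/index /var/netwitness/concentrator/index
```

The stripe size is a tunable; the value above is only an illustrative default, not a NetWitness recommendation.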
Regarding your existing index data:
There's no way to reorganize the large, monolithic index filesystem without deleting the existing filesystem.
To avoid an index reset, you can move your index data off to the metadb filesystem, make the desired changes, and then move the index data back into place. Since you just added the DAC, the metadb filesystem will have more than enough free space.
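With the concentrator service stopped, the shuffle described above might look like the following (mount points are from this thread; the temporary directory name and service name are assumptions that may differ on your version):

```shell
# Stop the concentrator so the index is quiescent
systemctl stop nwconcentrator   # service name may vary by release

# Park the index data on the metadb filesystem, which has ample
# free space now that the second DAC is attached
mkdir /var/netwitness/concentrator/metadb/index-tmp
cp -a /var/netwitness/concentrator/index/. \
      /var/netwitness/concentrator/metadb/index-tmp/

# ...rebuild the index LV/filesystem(s) as desired and remount...

# Move the data back, clean up, and restart the service
cp -a /var/netwitness/concentrator/metadb/index-tmp/. \
      /var/netwitness/concentrator/index/
rm -rf /var/netwitness/concentrator/metadb/index-tmp
systemctl start nwconcentrator
```

Verify the copy completed (e.g. compare `du -s` on source and destination) before deleting anything.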
I highly recommend the approach of having a separate filesystem per index RAID device. It gives the concentrator the most information about how to distribute the index data across the hardware.
Adding on to what Sean said, you may also want to take a look at this guide. It describes how to properly add an additional DAC to an existing appliance.
If you are having continued problems, please contact support.
Robert, older versions of the nwmakearray script created a single large, monolithic index volume, much like what nkasu has reported. This was required by SA 10.2 and earlier, which did not support indexing across multiple directories.
Thus, the guide you linked to needs to be updated to reflect the new ability to spread the index out across multiple filesystems, which is possible (and recommended) since SA 10.3.