RSA Identity Governance & Lifecycle Blog > Authors > Russell Waliszewski

The log artifact is a new feature in RSA Identity Governance and Lifecycle Version 7.1.1 that aids in gathering log information when a support ticket is created.  It is an administrator-only feature that gathers the most widely used log files for troubleshooting problems in the application, thereby reducing the time to identify the root cause of an issue.  It works on a node-by-node basis and supports our different architectures, such as Hard Appliance, Software Bundle with remote database, and Clusters, along with the different application servers we support.  The page can be found under the Admin > Diagnostics menu.





It is located on the Log Artifact tab.


The following is a description of the different areas of the page:


  1. Node Name - This tells you which machine you are currently logged into.  This is important because the utility gathers only the log files that are local to that node.  If you are in a clustered environment and need logs from all the nodes in the cluster, you will have to log into each node individually and gather its logs.  The information after the ':' is the node type for the machine.  There are two types: System Operations Node (SON) and General.  In a cluster, log files should be collected from all nodes; at a minimum, they should be collected from the SON.
  2. Date Range - This limits the amount of information that is extracted from the log files.  For instance, if the problem is two days old you would want to limit the log files to the last few days.  The default of 30 days will typically be sufficient.
  3. Create File Password (Optional) - This allows you to password protect the zip file.
  4. Generate Zip File - This button starts the gathering of all the logs and zips them into a single file.
  5. Current Status - After you initiate the gathering process, this area of the page displays the status of what is being gathered.  It is not refreshed automatically, so to see any new status you will need to refresh the page manually.  Once the process is complete, it displays the file that is available from this node.
  6. Download - This button is enabled once the gathering/zipping process is complete.  It allows you to download the most recently generated file.
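Conceptually, the Date Range option works by selecting only log files whose timestamps fall within the chosen window. The following Python sketch illustrates the idea of filtering by modification time; the function name and logic are illustrative assumptions, not the product's actual implementation:

```python
import os
import time

def files_modified_within(directory, days):
    """Return paths of files in `directory` modified within the last `days` days.

    Illustrative only: this just demonstrates the idea of limiting the
    gathered logs to a recent date range, as the Date Range option does.
    """
    cutoff = time.time() - days * 24 * 60 * 60
    selected = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) >= cutoff:
            selected.append(path)
    return sorted(selected)
```

With the default of 30 days, a log file last written two months ago would simply be excluded from the gathered set.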




While the process is in progress, the Current Status area will show the current step.



Once completed, the zip file will be available for download.


Upon completion of the process, the status section will show the pertinent information about what was chosen for this invocation.  As detailed in the picture, we store the zipped file on the server, and it stays there until the next invocation of the process.  To save the file locally, press the Download button.




  1. When gathering log files for nodes in a cluster, the Administrator must log into each node and run the process.  Only the files local to that server are gathered.
  2. Warning messages from the tool do not indicate a problem with the feature.  Not all environments/configurations contain the same log files.  The Warning messages are primarily for Engineering.
  3. Gathering the logs for a remote AFX server is still a manual process.
  4. Gathering the logs for a remote database is also a manual process.  A Warning message is displayed with this configuration because the database logs are not local.
  5. Some log files may reside in different directories depending on a customer's configuration.  We make a best attempt to find them in some well-known locations.  If we cannot find them, we display a Warning message to that effect.
  6. The process does not gather an AWR (Automatic Workload Repository) report.
  7. The process does not maintain previous ZIP files.  At the start of a new generation the previous file is removed; this reduces the chance of running out of disk space.
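The overall generate-and-zip behavior described above, including points 2, 5, and 7, can be sketched as follows. This is a hypothetical illustration, not the product's code, and it omits the optional password protection:

```python
import os
import zipfile

def generate_log_artifact(log_paths, archive_path):
    """Zip the given log files into a single artifact.

    Hypothetical sketch of the behavior described above: the previous
    archive is removed first (point 7), and logs that are absent in this
    configuration are skipped with a warning (points 2 and 5).
    """
    if os.path.exists(archive_path):
        os.remove(archive_path)  # keep only one artifact on disk (point 7)
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in log_paths:
            if os.path.isfile(path):
                zf.write(path, arcname=os.path.basename(path))
            else:
                # Warnings here do not indicate a problem with the feature
                print("WARNING: log file not found, skipping: %s" % path)
    return archive_path
```

Because the previous archive is deleted at the start of each run, be sure to download an artifact before generating a new one.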


See the following video, which demonstrates the use of the feature.


Currently we document some minimum database server resource requirements, which can be found here:


These are minimum requirements and do not mean that your system will always perform optimally, as there are several factors that can contribute to the performance of your system. These factors range from the number of Applications you monitor, to the Change Requests you create and process, to the number of Certification cycles you go through. These are just some examples of the factors that determine the resource requirements of the system. We can give you a more accurate sizing guideline with a detailed analysis of the data you will maintain/collect and the features used in the system. For more information, contact your local Professional Services group or Edwin Mullie to go through a sizing exercise.

We do not recommend running the database on a virtual machine. If you plan to run the application on a virtual machine, make sure you understand the basics of virtualization. Virtual machines share resources with all the other virtual machines running on the same host server. Depending on your virtual infrastructure, a virtual machine may not actually be able to allocate all the resources it was created with. This overcommitting of resources, such as memory, will cause severe performance problems for the virtual machine and all the applications running on it. Allocating resources and monitoring a virtual server requires in-depth knowledge so that all virtual machines on that server can efficiently service the applications they host. The requirements for a VM running our application server can be found here:

Have you had a collection job fail with a status of "Aborted (Circuit Breaker)" and all you did was kick off the collection again with "Ignore Circuit Breaker"?

The Circuit Breaker is there to protect you. In the past customers have had issues with their source data being incorrect and then that incorrect data being pulled into the system. For example, some collections are collected from CSV files that the customer builds with some manual or automatic process from various systems within their organization. If this aggregation process fails, or does not properly create the CSV files, then our collectors will gather the incorrect data and attempt to process it, typically resulting in cases where we think a significant number of objects or relationships have been deleted, which in turn can trigger all kinds of actions. The Circuit Breaker can prevent that. It raises a red flag that a particular collection has too much change, and gives the administrator a chance to review the raw data before we take it on board.

The Circuit Breaker is designed to stop a collection process that exceeds a percentage of change. That change could be something New, Missing, or Changed, and it pertains to objects and direct entitlements. Our out-of-the-box (OOTB) percentage is 5%, but that can be adjusted. The percentage can be changed system wide or on a per-collector basis, as well as per type of change that occurs. So if you know that your LDAP server does not have a lot of activity, you can keep the OOTB percentage. If you have a self-service HR system where users are constantly modifying their information, you may want to increase the percentage for that collector.
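Conceptually, the Circuit Breaker compares the incoming collection against the current state and trips when the proportion of change exceeds the threshold. A simplified Python illustration of that idea follows; the function name is an assumption, and the real product also weighs changed objects and direct entitlements separately, which this sketch omits:

```python
def exceeds_change_threshold(existing_ids, collected_ids, threshold=0.05):
    """Return True when the proportion of new plus missing objects exceeds
    `threshold` (0.05 mirrors the 5% out-of-the-box setting described above).

    Simplified illustration only: per-change-type and per-collector
    thresholds from the product are not modeled here.
    """
    existing, collected = set(existing_ids), set(collected_ids)
    if not existing:
        return False  # nothing to compare against on a first collection
    new = len(collected - existing)        # objects we have never seen
    missing = len(existing - collected)    # objects that disappeared
    return (new + missing) / len(existing) > threshold
```

For example, a CSV feed that silently dropped 10% of its accounts would trip this check, giving the administrator a chance to inspect the raw data before it is processed.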


In the Collector's Guide, the section "Configuring a Data Processing Interruption Threshold" gives more information on how to adjust the setting.