There are three things you can do to help maintain the performance of the database.  Many SQL statements are influenced by the amount of data in the database, so removing data that is not actively needed by the system or the business helps maintain consistent database performance.


More information can be found in this webinar, which includes a video discussion of these topics:

 

Part 1: Data Purging

Introduced in 7.0.1

Data Purging removes data that is no longer required for any processing by the system.

The run will appear on the Monitoring page with a log of the information purged.  It will also indicate whether it was able to complete all the steps of the purge.

 

Action: Confirm this job is scheduled

This can be done as follows:

  • Log in as a user with Admin privileges,
  • Go to the Admin / System menu,
  • Go to the Data Management tab,
  • Confirm Purging is enabled, as per the screenshot below.

 

Once the job is running, confirm that it can complete all steps before hitting the time limit.  It may take multiple runs to complete all steps on older databases.

 

Part 2: Archive Purge

Introduced in 7.1.0

This is a process that is manually initiated and should be planned by the business so that data can be restored in the event of an audit.  It removes objects from the system which have reached End of Life within the timeframe specified.

 

An archive run will appear on the Monitoring page with a log of the steps required for it to complete.

 

Once the archive has completed successfully, the archived data will no longer appear in the UI, but it may still be in the database.  The Data Purging process will then remove the data during its weekly run.

 

How to apply Archiving:

  • Log in as a user with Admin privileges,
  • Go to the Admin / System menu,
  • Go to the Data Management tab,
  • Press the "+Create" button as shown in the image below.

 

Part 3: Database Maintenance

Purging data from the database causes Oracle tables and indexes to become fragmented.

The Oracle database has a feature called the Segment Advisor.  It can be used to schedule a job within the database that analyzes the tables and indexes and identifies which ones would benefit most from a rebuild, which coalesces the data.
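
For reference, the Segment Advisor can also be invoked directly with Oracle's DBMS_ADVISOR PL/SQL package.  Below is a minimal sketch that runs it against a single table; the schema and table names are placeholders, not RSA-specific values.

-- Minimal Segment Advisor run against one table (placeholder names).
DECLARE
  l_task_id   NUMBER;
  l_task_name VARCHAR2(100) := 'manual_seg_advisor_example';
  l_object_id NUMBER;
BEGIN
  DBMS_ADVISOR.CREATE_TASK(
    advisor_name => 'Segment Advisor',
    task_id      => l_task_id,
    task_name    => l_task_name);
  DBMS_ADVISOR.CREATE_OBJECT(
    task_name    => l_task_name,
    object_type  => 'TABLE',
    attr1        => 'MYSCHEMA',        -- placeholder schema
    attr2        => 'MY_LARGE_TABLE',  -- placeholder table
    attr3        => NULL,
    attr4        => NULL,
    attr5        => NULL,
    object_id    => l_object_id);
  DBMS_ADVISOR.SET_TASK_PARAMETER(
    task_name    => l_task_name,
    parameter    => 'RECOMMEND_ALL',
    value        => 'TRUE');
  DBMS_ADVISOR.EXECUTE_TASK(task_name => l_task_name);
END;
/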

 

We have provided simplified access to the tool, which is documented in:

  • "Database Setup and Management Guide",
    • Chapter 3. "Maintaining the Database",
      • Section "Database Segment Maintenance".

 

Action: Confirm with the DBA group maintaining the database what activity is needed.

 

For additional information see the blog: Maintaining the database for optimal performance 

More information can be found in this webinar, which includes a video discussion of these topics:

 

Optimal performance of our database requires periodic maintenance of database tables and indexes.  RSA not only supports this but encourages it, as it helps maintain acceptable performance levels.

 

The tables and indexes that need to be maintained depend on:

  1. What features of the application are used,
  2. The amount of information maintained,
  3. How often data changes,
  4. How much data changes. 

 

Maintaining these tables can be accomplished in three primary ways.


#1: Manual

In this method the customer, typically DBA staff, will identify tables or indexes that would benefit from maintenance.

In this scenario the DBA would then run specific commands such as (but not limited to):

ALTER TABLE <TABLE NAME> SHRINK SPACE CASCADE;

This will "defragment" the table and potentially free up space used by the table to the tablespace it resides in.

 

ALTER INDEX <INDEX NAME> REBUILD;

This will rebuild the index, removing any empty leaf nodes that occur as data is deleted and updated.
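
If the index cannot be taken offline for the rebuild, Oracle Enterprise Edition supports an online rebuild, which avoids blocking DML on the underlying table while it runs:

-- Online variant (Enterprise Edition feature).
ALTER INDEX <INDEX NAME> REBUILD ONLINE;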

 

#2: Oracle Segment Advisor

The Oracle database has an internal utility called the Segment Advisor which can help reclaim unused space.  

Link here: Managing Space for Schema Objects 

 

This utility helps identify tables and indexes that could benefit from maintenance, and it can perform the operations as well.

 

The operations are executions of the same statements identified in the manual method.
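
After an advisor run completes, its findings can be reviewed from the standard advisor views.  A minimal example, assuming a Segment Advisor task named 'manual_seg_advisor_example' (a placeholder name):

-- Review what the Segment Advisor found for a given task.
SELECT message, more_info
FROM   dba_advisor_findings
WHERE  task_name = 'manual_seg_advisor_example';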

 

#3: RSA Supplied Package

The RSA supplied package is simply an interface to the Oracle Segment Advisor; we want to leverage Oracle's expertise as the best way to identify improvements in the database for optimal performance.

 

Documentation on how to utilize this package can be found in the:

  • "Database Setup and Management Guide",
    • Chapter 3. "Maintaining the Database",
      • Section "Database Segment Maintenance".

 

Regardless of the type of deployment you have (Appliance, RSA Supplied Database, Remote Database), all three options are available.

 

Which one you use should depend on your internal database expertise and comfort level.  As with any database maintenance, it is always best to have scheduled downtime.  This would involve shutting down the RSA Identity Governance and Lifecycle system to avoid any resource contention problems.

One of the strengths of the RSA Identity Governance and Lifecycle offering is the ability to model groupings of user access using our local role management solution. This provides the capability of combining different types of entitlements (access) that are collected from various endpoints.  These local roles, whether they are defined as business, technical or global roles, allow you to customize the access that is required for different jobs in an organization.  They become the building blocks of access by allowing you to combine various local roles to fully define the access needed for your user population.

 

A local role is different from a collected role.  A collected role, like other collected information, is access obtained from an endpoint which maintains that information externally from our system.  Like any collected item, a collected global role is suitable to use as an entitlement in a local role.  A potential problem can occur when a user attempts to add a local role as an entitlement to a collected role.  The endpoint that maintains the collected role definition doesn’t know anything about our internal (local) roles, which contain entitlements from a number of other types and sources, including other local roles and local entitlements.  This would make provisioning of the information difficult, if not impossible, and extremely confusing.

 

At this time, there are areas of the product which inadvertently allow adding local roles to collected roles.  This is not intentional, and we plan to remove that capability in future releases.

The Log Artifact is a new feature in RSA Identity Governance and Lifecycle version 7.1.1 that aids in gathering log information in the event that a support ticket is created.  It is an administrator-only feature that can gather the most widely used log files when troubleshooting problems in the application, thereby reducing the time to identify the root cause of an issue.  It is designed to work on a node-by-node basis and works for different architectures such as Hard Appliance, Software Bundle with remote database, and Clusters, along with the different application servers we support.  The page can be found under the Admin > Diagnostics menu.

 
It is located on the Log Artifact tab.

 

The following is a description of the different areas of the page:

 

  1. Node Name - This tells you which machine you are currently logged into.  This is important because the utility gathers only the log files that are local to that node.  If you are in a clustered environment and need logs from all the nodes in the cluster, you will have to log into each one individually and gather from each node.  The information after the ‘:’ is the node type for the machine.  There are two types: Systems Operation Node (SON) and General.  If this is a cluster, log files should be collected from all nodes; at a minimum they should be collected from the SON.
  2. Date Range - This can limit the amount of information extracted from the log files.  For instance, if the problem is two days old you may want to limit the logs to the last few days.  The default of 30 days will typically be sufficient.
  3. Create File Password (Optional) - This allows you to password protect the zip file.
  4. Generate Zip File - This button starts the gathering of all the logs and the zipping of them into a single file.
  5. Current Status - After initiating the gathering process, this area of the page displays the status of what is being gathered.  It is not refreshed automatically, so to see any new status you will need to refresh the page manually.  Once complete, it displays the file that is available from this node.
  6. Download - This button is enabled once the gathering/zipping process is complete.  It allows you to download the file from the most recent run.

 

While the process is in progress, the Current Status will show the current step.

 

Once completed, the zip file will be available for download.

 

Upon completion of the process, the status section will show the pertinent information about what was chosen for this invocation.  As detailed in the picture, we store the zipped file on the server, and it will stay there until the next invocation of the process.  To store the file locally, just press the Download button.

 

Limitations/Restrictions:

  1. When gathering log files for nodes in a cluster, the Administrator must log into each node and run the process.  We only gather the files local to that server.
  2. Warning messages from the tool do not indicate a problem with the feature.  Not all environments/configurations will contain the same log files.  The Warning messages are primarily for Engineering.
  3. The gathering of the logs for a remote AFX server is still a manual process.
  4. The gathering of logs for a remote database is also a manual process.  We will display a Warning message with this configuration, as the database logs will not be local.
  5. Some log files may reside in different directories depending on a customer’s configuration.  We will make a best attempt to find them in some well-known locations.  If we can’t find them we will display a Warning message to that effect.
  6. The process does not gather an AWR (Automatic Workload Repository) report; see the note after this list.
  7. The process does not maintain previous zip files.  At the start of a new generation the previous file will be removed; this reduces the chance of running out of disk space.
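
Since an AWR report is not collected, a DBA can generate one manually when Support requests it.  For example, from SQL*Plus using the Oracle-supplied script (note that AWR use requires the Oracle Diagnostics Pack license):

-- Prompts for the report format, snapshot range, and output file name.
@?/rdbms/admin/awrrpt.sql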

 

See the following video which demonstrates the use of the feature.

 

Currently we document some minimum database server resource requirements which can be found here: https://community.rsa.com/docs/DOC-86132

 

These are minimum requirements and do not mean that your system will always perform optimally, as there are several factors that can contribute to the performance of your system. These factors range from the number of applications you monitor, to the change requests you create and process, to the number of certification cycles you go through. These are just some of the factors that determine the resource requirements of the system. We can give you a more accurate sizing guideline with a detailed analysis of the data you will maintain/collect and the features used in the system. For more information, contact your local Professional Services group or Edwin Mullie to go through a sizing exercise.


We do not recommend running the database on a virtual machine. If you plan to run the application on a virtual machine, make sure you understand the basics of virtualization. Virtual machines share resources with all the other virtual machines running on the same host server. Depending on your virtual infrastructure, a virtual machine may not actually be able to allocate all the resources it was created with. This overcommitting of resources, such as memory, will cause severe performance problems for the virtual machine and all the applications running on it. The allocation of resources and the monitoring of a virtual server require in-depth knowledge so that all virtual machines on that server can service the applications they are hosting efficiently. The requirements for a VM running our application server can be found here: https://community.rsa.com/docs/DOC-86133

Have you had a collection job fail with a status of "Aborted (Circuit Breaker)" and all you did was kick off the collection again with "Ignore Circuit Breaker"?


The Circuit Breaker is there to protect you. In the past customers have had issues with their source data being incorrect and then that incorrect data being pulled into the system. For example, some collections are collected from CSV files that the customer builds with some manual or automatic process from various systems within their organization. If this aggregation process fails, or does not properly create the CSV files, then our collectors will gather the incorrect data and attempt to process it, typically resulting in cases where we think a significant number of objects or relationships have been deleted, which in turn can trigger all kinds of actions. The Circuit Breaker can prevent that. It raises a red flag that a particular collection has too much change, and gives the administrator a chance to review the raw data before we take it on board.


The Circuit Breaker is designed to stop a collection process that exceeds a percentage of change. That change could be something New, Missing or Changed, and it pertains to objects and direct entitlements. Our out-of-the-box (OOTB) percentage change is 5%, but that can be adjusted. We allow the percentage to be changed system-wide, on a per-collector basis, and by the type of change that occurs. So if you know that your LDAP server does not have a lot of activity, you can keep the OOTB percentage. If you have a self-service HR system where users are constantly making modifications to their information, you may want to increase the percentage for that collector. The sketch below illustrates the idea.
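
As a rough illustration of the rule only (this is not the product's actual implementation, and collection_run_stats is a hypothetical table), the breaker trips when the combined change exceeds the configured threshold:

-- Hypothetical sketch of the default 5% change check.
SELECT CASE
         WHEN (new_cnt + missing_cnt + changed_cnt) > 0.05 * previous_total_cnt
         THEN 'Aborted (Circuit Breaker)'
         ELSE 'OK'
       END AS outcome
FROM   collection_run_stats
WHERE  run_id = :current_run_id;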

 

In the Collector's Guide there is a section on "Configuring a Data Processing Interruption Threshold" which will give you more information on how to adjust the setting.
