OVA 7.1.1 Performance Tuning: How Can I Tune for Better Performance?
We are testing the 7.1.1 OVA; collectors are taking an hour each to run, versus 10 to 15 minutes per collector on the hardware appliance. The OVA has 32 GB of memory.
My first question: why is the OVA allocating 65% of its memory to the database? There is no database on the OVA.
The other question: has anyone set up an OVA that performs as well as or better than an R620, the lower-end appliance? What did you tune?
Any tuning tips are welcome.
- Community Thread
- Forum Thread
- Identity G&L
- Identity Governance & Lifecycle
- Installation & Upgrade
- RSA Identity
- RSA Identity G&L
- RSA Identity Governance & Lifecycle
- RSA Identity Governance and Lifecycle
- RSA IGL
Well, there are a lot of variables in this comparison. I would recommend going over the following points one by one to verify which ones differ.
- The sizing of both the application server OVA and its remote database versus what you had on the hardware appliance. Here are some suggestions you can try:
- To find the number of CPUs you can use: nproc --all
- To find the memory you can use: free -m
- How the hypervisor allocates resources to the VM: in most cases hypervisors share underlying resources between VMs rather than dedicating the full physical resources to your VM.
- The memory settings of both application server (Heap and MetaSpace) and database (SGA, PGA, Memory Target).
- To find the database values you could run the following as avuser. I would expect memory_target and memory_max_target to be 0, since we do not recommend letting Oracle manage its memory automatically; then compare the rest of the values.
select name, value from gv$parameter WHERE name in ('pga_aggregate_target','pga_aggregate_limit','sga_target','sga_max_size','cpu_count','memory_max_target','memory_target');
- To find the heap sizes you can try something like: ps -ef | grep wildfly | grep -e Xmx -e Xms -e MaxMetaspaceSize
- Which parts of the collectors are taking longer than usual (a slow collection task would point towards the application server itself, while slow pre- and post-processing tasks point towards the database).
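The sizing checks above can be pulled together into one small script. This is only a hedged sketch: the gv$parameter values are fed in as sample text, since on a real system you would pipe in the actual output of the SQL query above.

```shell
#!/bin/sh
# Record host sizing for a side-by-side OVA vs appliance comparison.
cpus=$(nproc --all)                          # total CPU count
mem_mb=$(free -m | awk '/^Mem:/{print $2}')  # total RAM in MB
echo "CPUs: $cpus, Memory: ${mem_mb} MB"

# Flag automatic memory management if it is enabled: the recommendation
# is that memory_target and memory_max_target stay at 0. The printf is
# sample gv$parameter output; replace it with your real query results.
printf 'memory_target 0\nmemory_max_target 0\nsga_target 8589934592\n' |
  awk '$1 ~ /^memory(_max)?_target$/ && $2 != 0 \
         {print "WARN: " $1 " = " $2 " (expected 0)"; bad=1}
       END {exit bad}' && echo "AMM disabled as recommended"
```

Run it on both the OVA and the appliance and compare the two outputs line by line.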
- Sounds like your hardware appliance has been running for quite some time, while your OVA is new. The first collection run is always much slower than subsequent runs, because our collectors process only delta changes after the initial load. I would suggest running the same collectors on the OVA at least twice to get a valid comparison.
To answer your memory question, though: our aveksa_server startup script checks whether a /u01 directory exists to determine if Oracle is installed, specifically in the lines below:
# 65% to Oracle if exists
[ -d /u01 ] && ORACLE_MEM=$(($APP_MEM * 65 / 100)) || ORACLE_MEM=0
I haven't installed the OVA myself, but check whether you have a /u01 directory. If it exists on your OVA, try deleting the directory completely and then restarting the services to see if that makes a difference. I would also suggest logging this as a case with RSA Support so they can confirm it is not a problem with our OVA.
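To make the arithmetic concrete, here is a minimal sketch of that allocation rule, with the /u01 check simulated by a variable so the result is reproducible; APP_MEM in MB is an illustrative assumption, as the real script derives it from the host:

```shell
#!/bin/sh
APP_MEM=32768          # example: a 32 GB OVA, expressed in MB
u01_exists=true        # simulate a /u01 directory being present

if $u01_exists; then
  ORACLE_MEM=$((APP_MEM * 65 / 100))   # 65% carved out for Oracle
else
  ORACLE_MEM=0                          # no /u01, nothing reserved
fi
echo "Oracle allocation: ${ORACLE_MEM} MB"   # prints 21299 MB
```

So on a 32 GB OVA, a stray /u01 directory would silently reserve roughly 21 GB for a database that is not even installed, which matches the 65% you observed.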
Hmm, that would be slightly different, then. This step only updates the relevant Oracle database table statistics after the collection; if it takes a long time, I would involve a DBA to see why statistics gathering is so slow.
You can also generate an ASR from the UI and have a look at the internal table sizes (ref: 000030327 - Artifacts to gather in RSA Identity Governance & Lifecycle). This can give you an idea of the large tables; you can then see whether they require purging (New Feature: Database Purge) or whether you would benefit from the new data archiving features (New Feature: Data Archiving).
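Once you have table sizes from the ASR (or a DBA report), a quick sort surfaces the purge candidates. A small sketch; the table names and sizes below are purely illustrative, not real IGL tables:

```shell
#!/bin/sh
# Sort a captured "table_name size_mb" report, largest first, and keep
# the top three. Replace the printf with your real ASR/DBA figures.
printf 'AUDIT_HISTORY 51200\nCHANGE_REQUESTS 2048\nCOLLECTION_RUNS 10240\n' \
  | sort -k2,2nr | head -3
```

The tables at the top of that list are the first places to evaluate for purging or archiving.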