Database Restore Procedures?
I'm interested in the best practice for backing out of an upgrade that's been done via packaging. My idea would be to back up the instance database and then install the package. The backout would be to restore the database. Clearly this requires that no users are active on the system during the upgrade and testing.
I suspect that there may be some gotchas with the restore. Is there any documentation about this process on this site - or can anyone share any experiences?
There isn't any truly safe rollback when it comes to packaging, and there is no documentation on this scenario, because packaging shouldn't put the application in such a state that a database restore is needed.
With that said.
Backing up the database beforehand and restoring it if the package install goes wrong for some strange reason might itself cause problems, because data is rolled back to a prior state. This is especially sensitive in a production environment. There are jobs that run in the background, like calculations and notifications; then you have scheduled jobs like data feeds. Lastly there is indexing: you'll have to do a full index rebuild so there is no search corruption.
Ideally you would do the following prior to backing up the database:
- Restrict all users from accessing the framework.
- If 3rd party applications are updating data via the API, that would have to be stopped.
- All data feeds need to be set to inactive.
- Stop all Archer services.
The above would have to stay in place until you're done checking out the installed application; in case you have to restore the database.
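The freeze-then-backup steps above can be sketched as a small script. This is only an illustration: the service names, instance database name, and backup path below are placeholder assumptions, not values from the post, and the script builds the commands rather than executing them, so you can review them against your own environment first.

```python
# Sketch of the pre-backup freeze: stop the (assumed) Archer Windows
# services, then take a full SQL Server backup of the instance database.
# All names and paths here are hypothetical -- verify them locally.

ARCHER_SERVICES = [                 # assumed service names
    "ArcherTech.JobFramework",
    "ArcherTech.Queuing",
    "ArcherTech.ConfigService",
]
INSTANCE_DB = "ArcherInstance"      # hypothetical database name
BACKUP_FILE = r"D:\Backups\ArcherInstance_preupgrade.bak"

def freeze_commands(services, db, bak):
    """Build the commands an operator would run, in order:
    stop each service, then back up the database with checksums."""
    cmds = [f'net stop "{s}"' for s in services]
    cmds.append(
        'sqlcmd -Q "BACKUP DATABASE [{0}] '
        "TO DISK = N'{1}' WITH CHECKSUM, INIT\"".format(db, bak)
    )
    return cmds

for c in freeze_commands(ARCHER_SERVICES, INSTANCE_DB, BACKUP_FILE):
    print(c)
```

Reversing the list (restore, then restart services) gives you the backout half of the plan; keeping the freeze in place until validation is done is what makes the restore point trustworthy.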
With all the packaging I've done, I have yet to see a package install hosing an application to the point where the only recovery was a database restore.
Thanks for your helpful reply. My concern is that the business want a guaranteed backout plan within defined timescales. I hear what you're saying about packaging problems never requiring database restore. However if we have problems post install (and these may not be Archer problems) then the clock is ticking to get the solution up and running. Ad-hoc tweaks may well solve the problems but management won't accept that as a backout plan.
Could a viable solution be just to have a backup package of the application(s) and related add-ons that are being upgraded, within the environment to which you are applying the upgrade? Or to take it a step further, install that backup package in another environment.
There will then be something a little more tangible that may sate the desires of management.
I have encountered similar insistence from management about database backups. But as David said, this route would most likely cause more problems and hassle.
There are definitely no guarantees in software development/roll-out of any product.
Like I mentioned in my initial post, there isn't any documentation for rolling back a package install if it goes bad; short of contacting Support to see if they can work out the issues first, a database restore would have to be done.
With restoring the database, there are no guarantees that data will be the same as it was prior to the backup; data could be missing, or something even worse. This could be catastrophic in production. An application going bad via packaging is better than data loss or corruption.
Worst case scenario is that you do application migration the old 4.x way and manually create/update all objects in the application(s), reports, workspaces/dashboards, roles, groups, etc. This would drastically increase deployment time and be prone to the same issues, or even worse ones, than deploying via packaging.
To reduce problems with packaging, it's best that all your environments are pretty much the same. Some companies periodically restore the production environment back to QA/UAT and Dev to keep all the environments close to identical, because there are times that changes are made in production and never made in the lower environments.
Thanks for the reply. I did think about making a copy of the solution in the prod environment. The trouble is that this leaves traces in the cross referenced application and Vendor Management has lots of cross references.
I’ve already ported the application to another environment and set up an Archer to Archer datafeed to bring across the data just before upgrading. This way I have a copy of the old solution. Whilst this is useful it doesn’t help me restore prod if it goes badly!
I agree that packaging is so much better than doing it by hand – it’s come a long way.
One of my concerns is failed mappings producing field names with a (1) after them. This presumably happens when field names are the same but their IDs are different. I'd have thought that a field would be mapped even if the ID was different, as long as the name was the same. Does the alias come into play? Can you explain the matching logic, or point me to some documentation? I've read the user guide but it doesn't go into much detail.
Not sure on the sequence for mapping fields. I think it looks for a matching GUID first, then by name or alias. Support would be able to shed more light on this and what causes it to append (n) to the field name.
When you install a package, the system matches components using their GUIDs. Any component from the source that has a GUID that doesn't match one from the target environment is considered a new component and will be created in the target environment. If the name already exists in the target environment, the new item has to have a unique name and will have (1), (2), (3), etc. appended to it.
The problem often occurs in a values list. You may add a new value to a field in production and add the same value to the same field in development. Even though the two names are the same, they have different GUIDs and are seen as different values during package installation.
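A minimal sketch of that matching behaviour, GUID first and then name uniquification with an (n) suffix, might look like the following. This is an illustration of the logic as described above, not Archer's actual implementation, and the component dictionaries and GUID strings are made up for the example.

```python
def install_components(source, target):
    """Sketch of package-install matching: a source component whose
    GUID exists in the target is treated as a match (updated in place);
    anything else is created as new, and if its name collides with an
    existing target name it gets a (1), (2), ... suffix."""
    target_guids = {c["guid"] for c in target}
    target_names = {c["name"] for c in target}
    created = []
    for comp in source:
        if comp["guid"] in target_guids:
            continue                      # GUID match: not a new component
        name, n = comp["name"], 0         # new component: make name unique
        while name in target_names:
            n += 1
            name = f'{comp["name"]} ({n})'
        target_names.add(name)
        created.append({"guid": comp["guid"], "name": name})
    return created

# The values-list scenario: "High" was added separately in prod and dev,
# so the GUIDs differ even though the names are identical.
prod = [{"guid": "g-prod-high", "name": "High"}]
pkg  = [{"guid": "g-dev-high",  "name": "High"}]
print(install_components(pkg, prod))
```

Run against the values-list scenario, the dev copy of "High" fails the GUID match, collides on name, and lands in the target as "High (1)", which is exactly the symptom described.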
In version 5.2, Advanced Package Mapping was introduced to help deal with this issue. After the components are mapped based on GUIDs, you can see all the unmapped components and manually map them to the same component in the target environment. There is an option to automatically map the unmapped components based on names to simplify and shorten the mapping process.
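The "map by name" option can be pictured as a second pass over whatever GUID matching left behind. The function below is only a sketch of that idea under the assumptions already stated (components as small dictionaries with made-up GUIDs), not the product's real mapping code.

```python
def auto_map_by_name(unmapped_source, unmapped_target):
    """Sketch of Advanced Package Mapping's name-based auto-map:
    after GUID matching, pair each leftover source component with a
    target component of the same name so it updates in place instead
    of being created as a duplicate with an (n) suffix."""
    by_name = {c["name"]: c for c in unmapped_target}
    mapping, still_unmapped = {}, []
    for comp in unmapped_source:
        match = by_name.get(comp["name"])
        if match:
            mapping[comp["guid"]] = match["guid"]   # source GUID -> target GUID
        else:
            still_unmapped.append(comp)
    return mapping, still_unmapped

# The prod/dev "High" value from before: different GUIDs, same name,
# so the auto-map pairs them instead of creating "High (1)".
mapping, left = auto_map_by_name(
    [{"guid": "g-dev-high", "name": "High"}],
    [{"guid": "g-prod-high", "name": "High"}],
)
print(mapping, left)
```

Anything that still comes back in the unmapped list is what you would review and map by hand in the Advanced Package Mapping screen.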