On failover, saveBRM runs before the dbroot is exchanged. On an OAM parent failure this means saveBRM can run before the brm_saves_journal file exists on the new primary module, which can cause load_brm to hang.
Reproduce by setting up a multi-node GlusterFS installation and performing a large table import. After the import completes, kill PM1 and wait for PM2 to take over the primary role. The logs will show the save_brm command run first, then dbroot1 moved to PM2, and then load_brm called.
The fix is to move dbroot1 first and then run saveBRM; this should allow load_brm to run successfully.
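A minimal sketch of the reordering, with the three failover steps stubbed out as placeholder shell functions (the real logic lives in the OAM/ProcMon failover path, not a shell script; the function names here are hypothetical):

```shell
# Placeholder stubs standing in for the real failover steps.
move_dbroot()  { echo "move dbroot1 to new primary"; }
run_save_brm() { echo "save_brm"; }
run_load_brm() { echo "load_brm"; }

# Buggy order: save_brm ran before dbroot1 (and its brm_saves_journal)
# was attached to the new primary, so load_brm could hang.
#
# Corrected order:
move_dbroot     # dbroot1 (with brm_saves_journal) now on the new primary
run_save_brm    # BRM state saved against the attached dbroot
run_load_brm    # loads cleanly instead of hanging
```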