After a distributed port group is removed, the MSA may be unable to display the list of networks. The workaround is to restart the MSA service. (DRBC-7063)
Changes to per-network routing tables are not propagated to already deployed appliances. (DRBC-7586)
If the DRVA connection to the replication log disk is disrupted (for example, an iSCSI network failure or a temporary datastore disconnection), protected VMs may become unresponsive until an internal network timeout is reached. The unresponsive state is transient. (DRBC-7046)
If the network between the DRVA and the object cloud is disconnected for a prolonged period, replication may not restart after the network is restored. To resume replication, restart the DRVA appliances manually. (JSDRAVS-111)
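The manual restart can be done from the vSphere Client, or scripted against vCenter. The following is a minimal sketch using pyVmomi; the vCenter address, credentials, and DRVA VM names are placeholders for your environment, and the script assumes VMware Tools is running in the appliances so a clean guest reboot is possible.

```python
# Sketch: restart DRVA appliance VMs through vCenter (names/credentials are assumptions).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

DRVA_VM_NAMES = {"drva-01", "drva-02"}          # assumed appliance VM names

ctx = ssl._create_unverified_context()           # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name in DRVA_VM_NAMES and vm.runtime.powerState == "poweredOn":
            vm.RebootGuest()                     # clean reboot via VMware Tools
finally:
    Disconnect(si)
```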
If the network between the host(s) and the DRVA becomes unstable (connection attempts fail intermittently), the IOFilter daemon may hang. To resume operations, migrate VMs off the affected host(s) and restart the host(s). (DRBC-7296)
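A sketch of this workaround through vCenter is shown below. It assumes DRS is enabled in fully automated mode, so entering maintenance mode vMotions the running VMs off the host; otherwise the maintenance-mode task will wait until VMs are migrated manually. The host and vCenter names and credentials are placeholders.

```python
# Sketch: evacuate an affected host and reboot it (assumes DRS handles the vMotions).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi-01.example.com")
    # Maintenance mode lets DRS migrate powered-on VMs off the host first.
    WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
    WaitForTask(host.RebootHost_Task(force=False))
    # Exit maintenance mode once the host is back online (manual or scripted follow-up).
finally:
    Disconnect(si)
```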
It is not possible to configure a proxy server for external communication from the stateless appliances. However, the MSA can be configured manually to reach external sites through a proxy. (DRBC-7794)
If the network between the hosts and the DRVA is momentarily disrupted, VMs protected in WB mode may experience slowdowns. This is related to the default network configuration on the hosts. The condition is temporary and resolves itself after some time. (DRBC-7082)
Under some conditions, the MSA may show an alert that a DRVA is not accessible, even though the respective DRVA VMs are online and report their IP addresses through VMware Tools. The workaround is to restart the MSA service. (DRBC-7950)
If the network communication between the MSA and vCenter is unstable during test failover (TFO), the MSA might not be able to clean up the test VMs after the TFO completes. The workaround is to delete the leftover test VMs manually. (DRBC-8177)
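The leftover test VMs can be removed in the vSphere Client, or scripted as in the sketch below. The "-tfo" naming suffix, vCenter address, and credentials are assumptions; verify the actual test VM names in your environment before destroying anything.

```python
# Sketch: remove leftover TFO test VMs (naming pattern and credentials are assumptions).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    leftovers = [vm for vm in view.view if vm.name.endswith("-tfo")]
    for vm in leftovers:
        if vm.runtime.powerState == "poweredOn":
            WaitForTask(vm.PowerOffVM_Task())    # VM must be powered off before deletion
        WaitForTask(vm.Destroy_Task())           # removes the VM and its files from disk
finally:
    Disconnect(si)
```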