bridge: add guest-side reconnect loop for live migration#2698
shreyanshjain7174 wants to merge 1 commit into microsoft:main from
Conversation
| } |
| const commandPort uint32 = 0x40000000 |
| // Reconnect loop: on each iteration we create a fresh bridge+mux, dial the |
In general, an exponential backoff is the right answer. But in this case, the VM is frozen in time, and only wakes up when the host shim is ready. The connection should be immediately available. I think I'd rather see this as a very tight loop, personally.
Agreed — the VM is frozen and wakes up with the host ready, so the vsock should be available right away. I'll switch to a tight fixed-interval retry (e.g. 100ms) instead of exponential backoff.
| logrus.Info("bridge connected, serving") |
| bo.Reset() |
| serveErr := b.ListenAndServe(bridgeIn, bridgeOut) |
Why can't you just reset the isQuitPending and call ListenAndServe again? Wouldn't that "just work"?
It almost works, but there's a subtle issue with handler goroutines. The handler dispatch at line 356 spawns go func(r *Request) { ... b.responseChan <- br }(req) — this goroutine captures b and sends to b.responseChan, which is a struct field. If a handler is still in-flight when ListenAndServe returns (say a slow CreateContainer or ExecProcess), and we call ListenAndServe again on the same bridge, the new call overwrites b.responseChan = make(chan ...) while the old handler is about to send to it. That's a data race on the struct field — the old goroutine reads b.responseChan concurrently with the new ListenAndServe writing it.
In practice this window is very small (handlers finish fast), so it wouldn't show up in normal LM testing. But under load — say a CreateContainer request arrives right as the vsock drops during migration — the handler goroutine could be mid-flight when we re-enter ListenAndServe.
Recreating Bridge means the old handlers hold a reference to the old (now-dead) bridge with its own channels, and the new bridge has completely separate state. No shared mutable field.
That said, if you think the simplicity of reuse outweighs this edge case, we could make it work by not closing responseChan in the defers and adding a short drain period before re-entering. Happy to go either way.
You're right — I looked at the host side and no mutating RPCs (CreateContainer, ExecProcess, etc.) are in-flight when migration starts. The only long-lived handler goroutine during migration is waitOnProcessV2, which is blocked on select { case exitCode := <-exitCodeChan } — it doesn't touch responseChan until the process actually exits, and by then the notification goes through Publisher.
Simplified to reuse the same Bridge. ListenAndServe already creates fresh channels on each call, so re-entering it on the same struct works. Also switched from exponential backoff to a tight 100ms retry as discussed. Pushed and tested with LM — both nodes 100%.
dbc66f1 to 05c7170
During live migration the vsock connection between the host and the GCS breaks when the VM moves to the destination node. The GCS bridge drops and cannot recover, leaving the guest unable to communicate with the new host.

This adds a reconnect loop in cmd/gcs/main.go that re-dials the bridge after a connection loss. On each iteration a fresh Bridge and Mux are created while the Host state (containers, processes) persists across reconnections.

A Publisher abstraction is added to bridge/publisher.go so that container wait goroutines spawned during CreateContainer can route exit notifications through the current bridge. When the bridge is down between reconnect iterations, notifications are dropped with a warning — the host-side shim re-queries container state after reconnecting.

The defer ordering in ListenAndServe is fixed so that quitChan closes before responseChan becomes invalid, and responseChan is buffered to prevent PublishNotification from panicking on a dead bridge.

Tested with Invoke-FullLmTestCycle on a two-node Hyper-V live migration setup (Node_1 -> Node_2). Migration completes at 100% and container exec works on the destination node after migration.

Signed-off-by: Shreyansh Sancheti <shsancheti@microsoft.com>
05c7170 to 5fafdf4
Fixes #2669
Problem
During live migration the vsock connection between the host and the GCS (Guest Compute Service) breaks when the UVM moves to the destination node. The bridge inside the GCS drops and cannot recover — `ListenAndServe` returns with an I/O error, and the GCS has no way to re-establish communication with the new host.

What this does
Wraps the bridge serve call in a reconnect loop in `cmd/gcs/main.go`. When the vsock connection drops, the GCS re-dials the host and calls `ListenAndServe` again on the same Bridge. `ListenAndServe` already creates fresh channels (`responseChan`, `quitChan`) on each call, so the Bridge can be reused across reconnections without resetting any state.

The `Host` (containers, processes, cgroups) persists across reconnections since it lives outside the Bridge.

A `Publisher` is added so that container wait goroutines — spawned during `CreateContainer` and blocked on `c.Wait()` — can route exit notifications through whichever bridge is currently active. During the reconnect gap the notification is dropped, which is safe because the host-side shim re-queries container state after reconnecting.

Design
No mutating RPCs (`CreateContainer`, `ExecProcess`, etc.) are in-flight when migration starts — the LM orchestrator ensures all container setup is complete before initiating migration. The only long-lived handler goroutine during migration is `waitOnProcessV2`, which is blocked on `select { case exitCode := <-exitCodeChan }` and doesn't touch `responseChan` until the process exits (through the Publisher). This means the Bridge can be safely reused across `ListenAndServe` calls without risk of handler goroutines racing on channel state.

During live migration the VM is frozen and only wakes up when the destination host shim is ready, so the vsock port should be immediately available. The reconnect loop uses a tight 100ms retry interval rather than exponential backoff.
The defer ordering in `ListenAndServe` is fixed so `quitChan` closes before `responseChan` becomes invalid, and `responseChan` is buffered to prevent `PublishNotification` from blocking on a dead bridge.

Changes

- `cmd/gcs/main.go`
- `internal/guest/bridge/bridge.go`: `Publisher` field, `ShutdownRequested()`, fixed defer ordering, buffered `responseChan`, priority select guard in `PublishNotification`
- `internal/guest/bridge/bridge_v2.go`: notifications routed through `Publisher.Publish()`
- `internal/guest/bridge/publisher.go`
- `internal/guest/bridge/publisher_test.go`

Testing
Tested on a two-node Hyper-V live migration setup using the TwoNodeInfra test module:

- `Invoke-FullLmTestCycle -Verbose` — deploys LM agents, creates a UVM with an LCOW container on Node_1, migrates to Node_2, verifies 100% completion on both nodes. Container `lcow-test` migrated with pod sandbox intact.
- `crictl exec` — created an LCOW pod with our custom GCS (deployed via `rootfs.vhd`), started a container, exec'd `cat /tmp/test.txt` to verify bridge communication works after reconnect.
- `go build`, `go vet`, `gofmt` clean.