Multi-subnet setup with VPN – System VMs unreachable and agent not connecting #12909
@TheKunalSen
I am testing a multi-network setup in Apache CloudStack and running into issues when hosts are placed on different subnets.
Current Setup:
Management server and NFS server are deployed in a 192.168.1.0/24 subnet
Initially, I had KVM hosts in the same .1 network → everything worked seamlessly
I then tried to add a new host in a different public subnet (192.168.2.0/24)
What I Did:
Since there was no L3 switch or routing between the two subnets, I configured a site-to-site VPN using StrongSwan between the .1 and .2 networks.
VPN connectivity is working
I can reach the .2 host from the management and NFS servers
However, I had to manually add routes on all machines:
ip route add <subnet> via <vpn_gateway_ip>
After this, the new host in .2 was successfully added to CloudStack
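For reference, the manual routing step above can be made reviewable and idempotent with a small script. The subnet and gateway values below are placeholders I chose for illustration, not values from my setup; adjust them per machine.

```shell
#!/bin/sh
# Sketch of the manual route step, assuming 192.168.2.0/24 is the remote
# subnet and 192.168.1.254 is the local VPN endpoint (both hypothetical).
REMOTE_SUBNET="192.168.2.0/24"
VPN_GW="192.168.1.254"

# Build the command first so it can be logged/reviewed before running.
route_cmd="ip route add $REMOTE_SUBNET via $VPN_GW"
echo "$route_cmd"

# Apply only when the route is not already present (requires root):
# ip route show "$REMOTE_SUBNET" | grep -q via || $route_cmd
```

Note that routes added this way do not survive a reboot; they would need to be persisted in the distribution's network configuration.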
Problem:
Although the host is added successfully:
System VMs (SSVM/CPVM) are created on the .2 host and appear to be running
But:
Agent state is not Up
System VMs are not reachable
Cannot ping system VMs from management server or host
Cannot ping outside host from system VMs
Even within the .2 network, system VM IPs are not reachable
As a result:
No user VMs are getting deployed
Console access is not working
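In case it helps anyone diagnose this, these are the checks I would expect to run on the .2 host; they assume a standard KVM agent install (service name `cloudstack-agent`, agent log under `/var/log/cloudstack/agent/`, management server reachable on 8250/tcp) and a hypothetical management server IP of 192.168.1.10.

```shell
# On the KVM host in .2: is the agent service up, and what does its log say
# about connecting back to the management server?
systemctl status cloudstack-agent --no-pager
tail -n 50 /var/log/cloudstack/agent/agent.log

# Can the host reach the management server's agent port over the VPN?
# (Assumes a netcat binary is installed.)
nc -zv 192.168.1.10 8250

# System VMs can normally be entered from their own KVM host over the
# link-local address (shown in the UI) on port 3922:
# ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<link-local-ip>
```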
Observation:
System VMs are getting IPs in the .1 network (same as management server)
However, they are running on a host in .2 network
Those .1 system VM IPs are not reachable from anywhere, not even from within the .1 network
This seems to create a routing / network isolation issue
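One way to confirm where those .1 addresses come from is to inspect the pod's management IP range, which is what system VMs draw their private IPs from. A sketch using CloudMonkey (`cmk`), assuming it is already configured against the management server:

```shell
# List each pod's management IP range. If every pod hands out 192.168.1.x
# addresses, system VMs on the .2 host will get private IPs that are only
# reachable if the VPN actually carries that traffic.
cmk list pods filter=name,gateway,netmask,startip,endip
```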