
22 Jun 2011

vCloud Director Cell Firewall Settings – Cisco ASA

In a vCloud Director environment, vCD cells are usually placed in a DMZ network. As a best practice, multi-cell environments also use a load balancer placed in front of the vCD cells.

Access to and from the vCD cells should be restricted not only on the public side, but also internally. For instance, the vCD cells need to communicate with a database VLAN where the database server resides and a management VLAN where services such as vCenter live.

When configuring multiple VLANs, access lists are placed between the VLANs to permit only the required communication. For example, you may have an access list that allows TCP port 1521 (Oracle) from your vCD cells to your database server, as sketched below.
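
For illustration, a rough sketch of such rules on an ASA is shown below. The access-list and interface names and the destination addresses (a database server at 10.20.20.10 and vCenter at 10.30.30.10, assumed to be reached over HTTPS) are placeholders; substitute your own values:

access-list dmz_in remark vCD cells to the Oracle listener on the database VLAN
access-list dmz_in extended permit tcp 10.10.10.0 255.255.255.0 host 10.20.20.10 eq 1521
access-list dmz_in remark vCD cells to vCenter on the management VLAN (HTTPS)
access-list dmz_in extended permit tcp 10.10.10.0 255.255.255.0 host 10.30.30.10 eq 443
access-group dmz_in in interface DMZ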

Another issue that may come up is idle TCP connections between your vCD cells on one VLAN and your ESXi hosts on another VLAN being torn down by the firewall's idle timeout. When this happens, vCloud Director will email messages such as:

"The Cloud Director Server cannot communicate with the Cloud Director agent on host "hostname". When the agent starts responding to the Cloud Director Server, Cloud Director Server will send an email alert.

If you are using a Cisco ASA, this issue can be fixed easily with a feature called Dead Connection Detection (DCD), which probes the endpoints of idle connections and keeps those connections open as long as both sides respond.

The following config will allow you to do this:

1. Create an access-list that matches the IP addresses or subnet of your vCD cells:

access-list vcd_dcd extended permit ip host 10.10.10.10 any
access-list vcd_dcd extended permit ip host 10.10.10.11 any

or

access-list vcd_dcd extended permit ip 10.10.10.0 255.255.255.0 any

These access lists match your vCD cells at 10.10.10.10 and .11, or the 10.10.10.0/24 subnet. Note that you can also make the access-list more specific by defining the destination, which would be your ESXi hosts or their subnet. For example:

access-list vcd_dcd extended permit ip host 10.10.10.10 host 10.11.11.10
access-list vcd_dcd extended permit ip host 10.10.10.10 host 10.11.11.11

or

access-list vcd_dcd extended permit ip 10.10.10.0 255.255.255.0 10.11.11.0 255.255.255.0

2. Next, create a class-map that references the access-list:

class-map vcd_keepalive_class
match access-list vcd_dcd

3. Create a policy-map that defines your idle timeout and DCD settings. In the example below, matching connections get a two-hour idle timeout; when that timeout expires, instead of silently dropping the connection, the ASA probes the endpoints, retrying every 10 minutes, and only removes the connection after three unanswered probes:

policy-map vcd_keepalive_policy
class vcd_keepalive_class
set connection timeout idle 2:00:00 dcd 0:10:00 3

4. Finally, apply the policy to the interface where your vCD cells reside with a service-policy:

service-policy vcd_keepalive_policy interface INTNAME

* Note that you should replace "INTNAME" with the ASA interface (nameif) name.
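
To verify that the policy is attached and being hit, you can check the service policy on that interface and the connection table (exact output varies by ASA version):

show service-policy interface INTNAME
show conn detail

The service-policy output should show vcd_keepalive_class with its set connection timeout settings, and long-lived connections from the vCD cells should now remain in the connection table instead of timing out.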

For reference, this Cisco article covers DCD in detail:

Configuring Connection Limits and Timeouts

Comments (2)
  1. Thank you so much for pointing out these Service Policy maps. Saved me a ton of work.

    • Erik, no problem. I should mention that when I spoke to the vCloud Director engineers, they told me this would be fixed in 1.5. As a test, I removed the maps when I upgraded a few installations to 1.5, and so far they have not been needed. I have not encountered a cell losing its connection yet.

