virtual grind: thoughts from the virtual world

29Jul/14

vCloud Director Does Not Show Storage Policies

I have been experiencing this a lot lately with vCloud Director versions 5.x. What happens is that you add tags in the web client and then apply them to the appropriate datastores. After performing a "Refresh Storage Policies" in vCloud Director, your newly created policies do not show up.

I have found this to be a bug that does not clear unless you empty certain tables in your vCloud Director database. The script below simply empties the inventory tables; when the cell services are restarted, they are re-synced from vCenter.

I will say that I always gracefully stop all cells, back up my vCloud database, and then run this script. Please note that this script is not intrusive to any static data.

Script:

delete from task;
update jobs set status = 3 where status = 1;
update last_jobs set status = 3 where status = 1;
delete from busy_object;
delete from QRTZ_SCHEDULER_STATE;
delete from QRTZ_FIRED_TRIGGERS;
delete from QRTZ_PAUSED_TRIGGER_GRPS;
delete from QRTZ_CALENDARS;
delete from QRTZ_TRIGGER_LISTENERS;
delete from QRTZ_BLOB_TRIGGERS;
delete from QRTZ_CRON_TRIGGERS;
delete from QRTZ_SIMPLE_TRIGGERS;
delete from QRTZ_TRIGGERS;
delete from QRTZ_JOB_LISTENERS;
delete from QRTZ_JOB_DETAILS;
delete from compute_resource_inv;
delete from custom_field_manager_inv;
delete from cluster_compute_resource_inv;
delete from datacenter_inv;
delete from datacenter_network_inv;
delete from datastore_inv;
delete from dv_portgroup_inv;
delete from dv_switch_inv;
delete from folder_inv;
delete from managed_server_inv;
delete from managed_server_datastore_inv;
delete from managed_server_network_inv;
delete from network_inv;
delete from resource_pool_inv;
delete from storage_pod_inv;
delete from task_inv;
delete from vm_inv;
delete from property_map;
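
For context, here is a sketch of the workflow I wrap around that script. The cell-management-tool path and service name are from a typical vCD 5.x cell (older releases use /opt/vmware/cloud-director), and the database commands are only placeholders, since how you back up and run SQL depends on whether you are on SQL Server or Oracle:

# Quiesce and gracefully stop each cell
/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --quiesce true
/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --shutdown
service vmware-vcd stop

# Back up the vCloud database with your normal tooling, e.g. for SQL Server:
# sqlcmd -S dbserver -Q "BACKUP DATABASE vcloud TO DISK = N'vcloud.bak'"

# Run the cleanup script against the vCloud database, e.g.:
# sqlcmd -S dbserver -d vcloud -i clear_inventory.sql

# Start the cells again; the inventory tables are re-populated from vCenter on startup
service vmware-vcd start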

3Apr/13

Using PowerCLI to Answer VM Questions

I recently have been testing some vendors' storage solutions and fast provisioning in vCloud Director. During the testing, I created a load simulator to mimic 1000 virtual machines inflating disks, testing write patterns, etc. In any case, during the testing, I was able to completely obliterate the vendor's write cache and write IOPS, causing datastore issues. This also caused 1000 virtual machines to stall because the datastores filled up and a question was placed on each VM.

In order to quickly answer these 1000 questions, this PowerCLI example worked like a charm:

Get-VM LoadTest* | Get-VMQuestion | Set-VMQuestion -Option "Cancel"

In this example, I wanted to answer "Cancel" on all VMs named LoadTest*.
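
If you want to see what you are answering before firing "Cancel" at everything, a read-only check along these lines works (property names are as I recall them from the VMQuestion object PowerCLI returns; adjust if your version exposes them differently):

# List pending questions, their text, and the available options
Get-VM LoadTest* | Get-VMQuestion | Select-Object VM, Text, Options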

19Feb/12

Manually Remove vCloud Director Agent From Hosts

I recently ran into an issue in a lab where I had to manually remove the vCD agent from an ESXi host. The command to do this manually on an ESXi 5 host is:

esxcli software vib remove -n vcloud-agent

Note that this is vCloud Director 1.5.
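
Before pulling the VIB, it is worth confirming it is actually installed and, ideally, putting the host in maintenance mode. A quick sketch using standard ESXi 5.x esxcli commands:

# Check whether the vCloud agent VIB is present
esxcli software vib list | grep -i vcloud

# Optionally enter maintenance mode before removing it
esxcli system maintenanceMode set --enable true

# Remove the agent and verify it is gone
esxcli software vib remove -n vcloud-agent
esxcli software vib list | grep -i vcloud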

22Jun/11

vCloud Director Cell Firewall Settings – Cisco ASA

In a vCloud Director environment, vCD cells are usually placed in a DMZ network. Based on best practices, a load balancer is also used in multi-cell environments, which is placed in front of the vCD cells.

Access to/from the vCD cells should be restricted not only from the public side, but also internally. For instance, vCD cells need to communicate with a database vlan where the database server lives and a management vlan where services such as vCenter reside.

When configuring multiple vlans, certain access lists are placed between the vlans for communication. An example of this would be an access list that allows your vCD cells to communicate with the database vlan. For example, you may have an access list that allows tcp port 1521 (Oracle) from your vCD cells to your database server.
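
As a rough sketch, with a hypothetical cell at 10.10.10.10 and a hypothetical database server at 10.20.20.5, that rule could look like:

access-list vcd_to_db extended permit tcp host 10.10.10.10 host 10.20.20.5 eq 1521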

Another issue that may come up is keepalives for tcp streams between your vCD cells on one vlan and your esxi hosts on another vlan. vCloud Director will also email messages such as:

"The Cloud Director Server cannot communicate with the Cloud Director agent on host "hostname". When the agent starts responding to the Cloud Director Server, Cloud Director Server will send an email alert.

If you are using a Cisco ASA environment, this issue can be fixed easily with a feature called Dead Connection Detection (DCD).

The following config will allow you to do this:

1. Create an access-list that allows the ip addresses or subnet of your vCD cells:

access-list vcd_dcd extended permit ip host 10.10.10.10 any
access-list vcd_dcd extended permit ip host 10.10.10.11 any

or

access-list vcd_dcd extended permit ip 10.10.10.0 255.255.255.0 any

These access lists would allow your vCD cells on 10.10.10.10 and .11 or 10.10.10.0/24. Note that you can also make this access-list more specific by defining the destination, which would be your esxi hosts or subnet. An example of this would be:

access-list vcd_dcd extended permit ip host 10.10.10.10 host 10.11.11.10
access-list vcd_dcd extended permit ip host 10.10.10.10 host 10.11.11.11

or

access-list vcd_dcd extended permit ip 10.10.10.0 255.255.255.0 10.11.11.0 255.255.255.0

2. Next, you need to make a class-map:

class-map vcd_keepalive_class
match access-list vcd_dcd

3. Create a policy-map that defines your timeout and DCD settings. Here, when the two-hour idle timeout expires, DCD probes both endpoints (10-minute retry interval, 3 retries) before the connection is torn down:

policy-map vcd_keepalive_policy
class vcd_keepalive_class
set connection timeout idle 2:00:00 dcd 0:10:00 3

4. Finally, create a service policy for the interface where your vCD cells reside:

service-policy vcd_keepalive_policy interface INTNAME

* Note that you should replace "INTNAME" with the ASA interface (nameif) name.
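
To confirm the policy is applied and that matching connections are picking up the new timeout/DCD settings, these standard ASA show commands are a reasonable sanity check (again, substitute your own interface name and cell address):

show service-policy interface INTNAME
show conn address 10.10.10.10 detail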

For reference, this Cisco article covers DCD in detail:

Configuring Connection Limits and Timeouts

14May/11

Video – Using VMware vCloud Connector

A coworker, Jack Bailey, recently made a great video demonstrating VMware's vCloud Connector with iland vCloud Services. The video covers connecting to a vCloud Service Provider from a local (private) vSphere environment and shows the functionality of the product.

Link:
vCloud Connector Video

31Mar/11

VMware vCloud Partners

VMware has recently posted a new site for vCloud Partners. This site contains a lot of information on vCloud Certified partners that provide services around vCD, as well as other features.

Check it out here.

2Mar/11

vCloud Director 1.0.1 Upgrade Fails with “cpio: chown failed – Operation not permitted”

VMware requires that the vCloud Director installation .bin file be executed as root. This file is run to perform a new installation or an upgrade of an existing vCloud Director cell. I recently ran into an issue with NFS's "root squashing" feature, which prevents the upgrade from completing.

As the NFS Sourceforge page states:

Default NFS server behavior is to prevent root on client machines from having privileged access to exported files. Servers do this by mapping the "root" user to some unprivileged user (usually the user "nobody") on the server side. This is known as root squashing. Most servers, including the Linux NFS server, provide an export option to disable this behaviour and allow root on selected clients to enjoy full root privileges on exported file systems.

Unfortunately, an NFS client has no way to determine that a server is squashing root. Thus the Linux client uses NFS Version 3 ACCESS operations when an application is running on a client as root. If an application runs as a normal user, a client uses its own authentication checking, and doesn't bother to contact the server.

If you are not using the "no_root_squash" option on your NFS server's exports, you will receive the error "cpio: chown failed - Operation not permitted". This option is needed because the /opt/vmware/cloud-director/data/transfer directory on the vCloud Director cell is actually mounted from your NFS server.
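
A quick way to confirm that the transfer directory really is an NFS mount (and which export it points at) is to check the mount table on the cell; the path below is the default from my install, so adjust it if yours differs:

mount | grep /opt/vmware/cloud-director/data/transfer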

Once you enable the no_root_squash option on your NFS exports, such as:

/export/dir hostname(rw,no_root_squash)

...you will be able to write to the directory as root and the upgrade will complete.
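
On a Linux NFS server, re-export after editing /etc/exports and then test root write access from the cell before re-running the installer. A minimal sketch, using the default transfer path from above:

# On the NFS server: re-read /etc/exports
exportfs -ra

# On the vCloud Director cell: confirm root can now write to the transfer mount
touch /opt/vmware/cloud-director/data/transfer/root-write-test && rm /opt/vmware/cloud-director/data/transfer/root-write-test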