In this post I will install a Viptela vManage appliance for use with the vagrant-libvirt provider.

For reference the following software will be used in this post.

  • Viptela vManage - 16.3.2
  • Vagrant - 2.0.1
  • vagrant-libvirt - 0.4.0
  • Ubuntu - 16.04


Viptela does not host any boxes on Vagrant Cloud so we will need to build one. Go to the Viptela support portal and download the 16.3.2 vManage KVM image for new deployments.


You will either need to be an existing customer/partner or speak to your Cisco/Viptela account manager to gain access to the Viptela support portal.


Create and change into a directory for the vManage files. I like to keep my custom Vagrant boxes under ~/vagrant/boxes/.


mkdir -p ~/vagrant/boxes/viptela
cd ~/vagrant/boxes/viptela

Copy the .qcow2 file downloaded earlier to the ~/vagrant/boxes/viptela directory.


cp ~/Downloads/viptela-vmanage-genericx86-64.qcow2 .

The vManage requires two volumes: the one that you downloaded is used for the vManage software, and a second storage volume is used for the database. Create a 100G volume with the qemu-img command.


qemu-img create -f qcow2 vmanage-disk2-base.qcow2 100G

# output
Formatting 'vmanage-disk2-base.qcow2', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

Note: The volume will be 'thin provisioned' and will not actually consume 100G of space until data is written to it.
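If you want to see what thin provisioning means in practice, here is a quick aside (not part of the build steps) using an ordinary sparse file; the sparse-demo.img name is just for illustration:

```shell
# Create a file with a 100G apparent size but no allocated blocks.
truncate -s 100G sparse-demo.img

# The apparent size is 100G...
ls -lh sparse-demo.img

# ...but almost no real disk space is used.
du -h sparse-demo.img
```

The same applies to the qcow2 volume; qemu-img info will report both the virtual size and the (much smaller) actual disk size.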

Use the below virt-install command to build the vManage VM. Assign 2 vCPUs and 8G of RAM, otherwise the VM will take forever to boot.


virt-install \
    --name vmanage-base \
    --os-type linux \
    --os-variant ubuntu14.04 \
    --cpu host \
    --vcpus=2 \
    --hvm \
    --arch=x86_64 \
    --ram 8192 \
    --disk path=viptela-vmanage-genericx86-64.qcow2,size=16,device=disk,bus=ide,format=qcow2 \
    --disk path=vmanage-disk2-base.qcow2,size=16,device=disk,bus=ide,format=qcow2 \
    --network=network:vagrant-libvirt,model=virtio \
    --graphics none

A virtual console will be connected to the VM and you will see the output below. It will take a minute or so for the VM to boot; once it does, login with the username admin and password admin.


Starting install...
Creating domain...                                                                                                                                                                          |    0 B  00:00:01
Connected to domain vmanage-base
Escape character is ^]
viptela 16.3.2
vmanage login:

Once logged in, select the 100G disk you created earlier to install the database on. You should only see one option: hdb. Select 1 for hdb and y for the next two prompts.


Welcome to Viptela CLI
admin connected from using console on vmanage
Available storage devices:
hdb	100GB
1) hdb

Select storage device to use: 1
Would you like to format hdb? (y/n): y
kernel.hung_task_panic = 0
mke2fs 1.42.9 (28-Dec-2013)
/dev/hdb is entire device, not just one partition!
Proceed anyway? (y,n) y

When the database install is complete the vManage will reboot. A sample output of the database install is below.


Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
  4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

vmanage# The system is going down for reboot NOW!

Once the vManage has rebooted, login again with the username/password admin, enter configuration mode with the config command, and apply the following bootstrap configuration.


  user vagrant
   password $6$yW1ez6tP1rqv15nA$2uA0TXEIuCuGnPa4bUVCSN4o5b.wU/WMzBa4kjYvBz4B/EbbJY3he9CMaY0UWPi0joUqUxYC.F6xbYHQQcq2o/
   group    netadmin
  vpn 0
   no int eth0
  vpn 512
   int eth0
    ip dhcp-client
    no shutdown
  commit and-quit

The bootstrap config creates the vagrant user with the password vagrant, but we still need to load the vagrant SSH key.
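The hash in the bootstrap config is a standard SHA-512 crypt hash for the password vagrant. If you would rather use your own password, a compatible hash can be generated on your workstation; this assumes OpenSSL 1.1.1 or newer is installed:

```shell
# Generate a SHA-512 crypt ($6$...) hash; the output can replace the
# password hash in the bootstrap configuration.
openssl passwd -6 vagrant
```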

Login to the vshell with the vshell command.


# viptela_cli
vmanage# vshell

# Now in vshell

Switch to the vagrant user.


su vagrant
password: <vagrant>

# log back into vshell
vmanage# vshell

Create the .ssh directory.


mkdir /home/vagrant/.ssh && cd /home/vagrant/.ssh

Use vim to add the vagrant public key to the authorized_keys file.


ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
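SSH is strict about file permissions, so if key-based login fails later it is worth locking them down; this is a precaution of mine rather than a step from the Viptela documentation:

```shell
# Ensure the files exist, then lock down permissions; sshd silently
# ignores an authorized_keys file that is group/world writable.
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```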

Exit back to the viptela_cli and use the show int | tab command to get the IP address assigned to the eth0 interface.


vmanage:/home/vagrant/.ssh$ exit

vmanage# show int | tab

# output
                                        IF      IF                                                               TCP
             AF                         ADMIN   OPER    ENCAP  PORT                              SPEED           MSS
VPN  IFNAME  TYPE  IP ADDRESS           STATUS  STATUS  TYPE   TYPE  MTU   HWADDR                MBPS   DUPLEX   ADJUST  UPTIME  RX PACKETS  TX PACKETS
-------------------------------------------------------------------------------------------------------------------------------------------------------
512  eth0    ipv4                       Up      Up      null   mgmt  -     52:54:00:4b:b8:a4     -      -        -       -       447         102

From another terminal, confirm that you can ssh to the vManage with the Vagrant insecure_private_key.


ssh vagrant@<eth0-ip> -i ~/.vagrant.d/insecure_private_key

# Success
Welcome to Viptela CLI
vagrant connected from using ssh on vmanage

If you connected successfully, power off the vManage in preparation for the Vagrant box creation.


vmanage# poweroff
Are you sure you want to power off the system? [yes,NO] yes

To exit the console, press CTRL+] (the ^] escape character shown when the console connected).

Box Creation

The maintainers of the vagrant-libvirt plugin have a script, create_box.sh, that can be used to convert .qcow2 images to a Vagrant box. Download the conversion script from the vagrant-libvirt repository.


curl -O

Create a metadata.json file.


echo '{"provider":"libvirt","format":"qcow2","virtual_size":16}' > metadata.json

Use the script to make a Vagrant box from the qcow2 image.


bash create_box.sh viptela-vmanage-genericx86-64.qcow2

# output
==> Creating box, tarring and gzipping
Total bytes written: 1005465600 (959MiB, 38MiB/s)
==> created
==> You can now add the box:
==>   'vagrant box add --name viptela-vmanage-genericx86-64'


Currently the vagrant-libvirt provider only supports packaging a single storage device, as described in this GitHub issue. This leads to a caveat when working with the vManage image; I will explain how to work around it later in the post.

Create a metadata file called vmanage.json so that the box is added with the correct version number.


{
  "name": "viptela/vmanage",
  "description": "Viptela vManage",
  "versions": [
    {
      "version": "16.3.2",
      "providers": [
        {
          "name": "libvirt",
          "url": "file:///home/bradmin/vagrant/boxes/viptela/"
        }
      ]
    }
  ]
}

Add the box to Vagrant using the vmanage.json file.


vagrant box add vmanage.json

# output
==> box: Loading metadata for box 'vmanage.json'
    box: URL: file:///home/bradmin/vagrant/boxes/viptela/vmanage.json
==> box: Adding box 'viptela/vmanage' (v16.3.2) for provider: libvirt
    box: Unpacking necessary files from: file:///home/bradmin/vagrant/boxes/viptela/
==> box: Successfully added box 'viptela/vmanage' (v16.3.2) for 'libvirt'!

Confirm the vManage box was added successfully.


vagrant box list

# output
CumulusCommunity/cumulus-vx          (libvirt, 3.4.1)
<snip>
viptela/vmanage                      (libvirt, 16.3.2)
vyos/current                         (libvirt, 999.201712301253)

Storage Pool

To work around the limitation of packaging one HDD with the Vagrant box, we will need to create a storage pool for the database volume.

The caveat, and it's a pretty big one, is that we will need to copy the database volume to the pool EACH time we want to build the vManage to ensure we have a fresh base volume. Additionally, the volume will need to be deleted manually once the vManage Vagrant box is destroyed.

There may be a better way to do this, but I have not yet found it. If I do, I will update this post; if you have a better way, please feel free to reach out to me on Twitter.

OK, got all that? Good. Now let's create a directory to use for the storage pool. I will create a pool named viptela-storage in the ~/vagrant/boxes/viptela/storage directory.


mkdir ~/vagrant/boxes/viptela/storage

Create an XML file, viptela-storage.xml, with the following contents to define the storage pool. Note: You will need to adjust the directory to your specific location.


<pool type='dir'>
  <name>viptela-storage</name>
  <capacity unit='bytes'>950504235008</capacity>
  <allocation unit='bytes'>680520564736</allocation>
  <available unit='bytes'>269983670272</available>
  <target>
    <path>/home/bradmin/vagrant/boxes/viptela/storage</path>
  </target>
</pool>

Use the virsh pool-create command to create the storage pool.


virsh pool-create viptela-storage.xml

# output
Pool viptela-storage created from viptela-storage.xml


Use this Vagrantfile to test out the new vManage Vagrant box.


# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "viptela/vmanage"

  # Turn off shared folders
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

  # Do not try to insert new SSH key
  config.ssh.insert_key = false

  # Give VM time to boot
  config.vm.boot_timeout = 180

  # Provider-specific configuration
  config.vm.provider :libvirt do |domain|
    domain.memory = 8192
    domain.cpus = 2
    domain.storage :file, :size => "100G", :path => "vmanage-disk2.qcow2", :allow_existing => true, :type => "qcow2", :bus => "ide", :device => "hdb"
    domain.storage_pool_name = "viptela-storage"
  end


  # Additional Interfaces
  config.vm.network "private_network", ip: "", auto_config: false
  config.vm.network "private_network", ip: "", auto_config: false
end



Prior to building the box, copy the database volume to the viptela-storage pool.


cp vmanage-disk2-base.qcow2 storage/vmanage-disk2.qcow2

Once the volume is copied to the storage pool directory, you need to refresh the pool. This just re-reads the directory to discover the existing contents, but it is a required step.


virsh pool-refresh viptela-storage

# output
Pool viptela-storage refreshed
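If you rebuild often, the copy and refresh can be wrapped in one helper; refresh_db is a hypothetical convenience function of my own, not part of vagrant-libvirt:

```shell
# refresh_db BASEDIR -- copy a fresh database volume into the storage pool
# directory and tell libvirt to re-scan it. BASEDIR is the directory that
# holds vmanage-disk2-base.qcow2 and the storage/ pool directory.
refresh_db() {
    cp "$1/vmanage-disk2-base.qcow2" "$1/storage/vmanage-disk2.qcow2"
    virsh pool-refresh viptela-storage
}
```

Run refresh_db ~/vagrant/boxes/viptela before each vagrant up.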

Let's vagrant up and make sure we can connect to the vManage.


vagrant up

# output
Bringing machine 'default' up with 'libvirt' provider...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              viptela_default
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              2
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Memory:            8192M
==> default:  -- Management MAC:
==> default:  -- Loader:
==> default:  -- Base box:          viptela/vmanage
==> default:  -- Storage pool:      viptela-storage
==> default:  -- Image:             /home/bradmin/vagrant/boxes/viptela/storage/viptela_default.img (19G)
==> default:  -- Volume Cache:      default
==> default:  -- Kernel:
==> default:  -- Initrd:
==> default:  -- Graphics Type:     vnc
==> default:  -- Graphics Port:     5900
==> default:  -- Graphics IP:
==> default:  -- Graphics Password: Not defined
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        9216
==> default:  -- Sound Type:
==> default:  -- Keymap:            en-us
==> default:  -- TPM Path:
==> default:  -- Disks:         hdb(qcow2,100G)
==> default:  -- Disk(hdb):     /home/bradmin/vagrant/boxes/viptela/storage/vmanage-disk2.qcow2 (Remove only manually) Not created - using existed.
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
==> default: Waiting for SSH to become available...
==> default: Configuring and enabling network interfaces...

The guest operating system of the machine could not be detected!
Vagrant requires this knowledge to perform specific tasks such
as mounting shared folders and configuring networks. Please add
the ability to detect this guest operating system to Vagrant
by creating a plugin or reporting a bug.


The error above relates to guest OS detection. The box built correctly and you can login and use it as normal; however, Vagrant could not detect the type of OS the box uses. Vagrant needs to know the OS type for provisioning purposes, and since we are not doing any provisioning with Vagrant we can ignore it. I could not find a way to disable the guest detection, and previous methods I have used with the config.vm.guest Vagrantfile parameter proved fruitless. I suppose this is a good reason to knuckle down with Ruby and build a guest plugin.

Now with that out of the way, let's test SSH to the vManage.


# from host shell
vagrant ssh

# now in vManage shell
Welcome to Viptela CLI
vagrant connected from using ssh on vmanage
vmanage# show version

Awesome, we are able to connect. Now let's clean up the box.


# from host shell
vagrant destroy -f

# output
==> default: Removing domain...

Don't forget to delete the database volume. Use the virsh vol-delete command to do this.


virsh vol-delete vmanage-disk2.qcow2 --pool viptela-storage

# output
Vol vmanage-disk2.qcow2 deleted

You can delete the storage pool with the virsh pool-destroy command.


virsh pool-destroy viptela-storage

# output
Pool viptela-storage destroyed


The Cisco/Viptela SD-WAN solution is an excellent product, and now you have a disposable vManage Vagrant box for all your testing and lab purposes.

vagrant cisco viptela libvirt