Intro

In this post I will cover how to create a Juniper vMX Vagrant box for use with the vagrant-libvirt provider. There are a fair few steps, so strap in, Alice; we are going on a trip down the rabbit hole.

Prior to installing the vMX there are some required steps to prep the KVM host, which I will not cover. These steps are covered in the Juniper documentation here.

This post assumes a working installation of Vagrant with the vagrant-libvirt plugin already installed. You can follow this post to get the vagrant-libvirt plugin installed.

For reference the following software will be used in this post.

  • Juniper vMX - 18.2R1.9
  • Vagrant - 2.1.0
  • vagrant-libvirt - 0.0.43

vMX Overview

The vMX is made up of two VMs: one for the control plane (the VCP) and one for the forwarding plane (the VFP). Although there are two separate VMs, logically they act as a single device. Conceptually, a vMX looks like the diagram below.

[Diagram: vMX conceptual architecture (vmx.svg)]

The first two interfaces of each VM are reserved: the first is for management and the second is the communication channel between the two VMs. The VCP requires only these two interfaces; the VFP supports an additional 10 interfaces for data plane traffic.

Download

Navigate to the Juniper software download page and download the KVM image.

Note
You will likely need either a valid Juniper support contract and/or request access to download the vMX image.

Install

Create and change into a directory for the vMX files. I like to keep my custom Vagrant boxes under ~/vagrant/boxes/ .

cmd
mkdir -p ~/vagrant/boxes/juniper/vmx
cd ~/vagrant/boxes/juniper/vmx

Copy the vmx-bundle-18.2R1.9.tgz file downloaded earlier to the ~/vagrant/boxes/juniper/vmx/ directory.

cmd
cp ~/Downloads/vmx-bundle-18.2R1.9.tgz .

Extract the vmx-bundle-18.2R1.9.tgz file.

cmd
tar -xvf vmx-bundle-18.2R1.9.tgz
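
If you want to check what the bundle contains before extracting, tar can list the archive without unpacking it.

cmd
tar -tzf vmx-bundle-18.2R1.9.tgz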

The vMX requires two VMs: one for the control plane and one for the forwarding plane. Copy the required files from the extracted directory to the ~/vagrant/boxes/juniper/vmx/ directory.

cmd
cp vmx/images/junos-vmx-x86-64-18.2R1.9.qcow2 vcp-hda.qcow2
cp vmx/images/vmxhdd.img vcp-hdb.qcow2
cp vmx/images/metadata-usb-re.img vcp-hdc.img
cp vmx/images/vFPC-20180605.img vfp-hda.img

Create a network for the link between the two VMs. Create a file called vmx-int.xml with the following contents.

file
<network>
  <name>vmx-int</name>
  <bridge name='vmx-int' stp='on' delay='0'/>
  <mac address='52:54:00:0c:9f:5c'/>
</network>

Now add and start the vmx-int network.

cmd
virsh net-define vmx-int.xml
virsh net-start vmx-int
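
If you want the vmx-int network to come back up after a host reboot, you can also mark it for autostart; virsh net-list confirms it is active.

cmd
virsh net-autostart vmx-int
virsh net-list --all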

Open two terminal windows (be sure to navigate to the ~/vagrant/boxes/juniper/vmx/ directory in each) and launch both of the vMX VMs.

cmd
# VCP

virt-install \
    --name vmx-vcp \
    --memory 1024 \
    --vcpus=1 \
    --import \
    --disk path=vcp-hda.qcow2,size=16,bus=ide,format=qcow2 \
    --disk path=vcp-hdb.qcow2,size=1,bus=ide,format=qcow2 \
    --disk path=vcp-hdc.img,size=1,bus=ide,format=raw \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vmx-int,model=virtio \
    --graphics none
cmd
# VFP

virt-install \
    --name vmx-vfp \
    --memory 4096 \
    --vcpus=3 \
    --import \
    --disk path=vfp-hda.img,size=16,bus=ide,format=raw \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vmx-int,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --network=network:vagrant-libvirt,model=virtio \
    --graphics none

You will be automatically connected to the virtual console on both VMs.
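
If you drop out of a console session, you can reattach at any time with virsh console, using the domain names from the virt-install commands above.

cmd
virsh console vmx-vcp
virsh console vmx-vfp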

VCP Configuration

Log in to the VCP with the username root; there is no password.

cmd
# VCP

FreeBSD/amd64 (Amnesiac) (ttyu0)

login: root

--- JUNOS 18.2R1.9 Kernel 64-bit  JNPR-11.0-20180614.6c3f819_buil
root@:~ #
root@:~ # cli
root> configure

Paste in the initial bootstrap configuration to allow SSH and set the root password to Juniper .

cmd
# VCP

delete chassis auto-image-upgrade
set system services ssh root-login allow
set system root-authentication plain-text-password

New password: <Juniper>
Retype new password: <Juniper>

commit and-quit

There is a limit to how many characters can be pasted into the terminal over the console, so grab the fxp0 IP address and SSH to the guest to finish the configuration.

cmd
# VCP

show interfaces terse | match fxp0.0

fxp0.0                  up    up   inet     192.168.121.151/24

From another terminal, SSH to the IP and paste in the rest of the bootstrap configuration.

cmd
# VCP

ssh root@192.168.121.151
Password: <Juniper>
Last login: Thu Aug 30 13:11:05 2018
--- JUNOS 18.2R1.9 Kernel 64-bit  JNPR-11.0-20180614.6c3f819_buil
root@:~ # cli
root> configure
Entering configuration mode

[edit]
root#
cmd
# VCP

set system login user vagrant uid 2000
set system login user vagrant class super-user
set system login user vagrant authentication encrypted-password "$6$lUoxFIpU$ZiC3/b2UNpGvasntcvkNV4WVDJXBeZ.VqE6yPezbaGOos17vM9BhlTH.LBJb9DZE8fCoBtH0RWNhrbaYpZqxJ/"
set system login user vagrant authentication ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
set system root-authentication encrypted-password "$1$nq.N1UsY$JxA/ESAj3KuXseXE597gg0"
set system root-authentication ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
set system host-name vmx
set system services ssh root-login allow
set system services netconf ssh
commit and-quit

Exit out and confirm you can SSH in with the vagrant user and the Vagrant insecure_private_key.

cmd
# VCP

ssh vagrant@192.168.121.151 -i ~/.vagrant.d/insecure_private_key
--- JUNOS 18.2R1.9 Kernel 64-bit  JNPR-11.0-20180614.6c3f819_buil
vagrant@vmx>

That's it for the VCP configuration; now confirm that the VFP is connected to the VCP. In the output below, slot 0 is Online, indicating the connection to the VFP is good.

cmd
# VCP

show chassis fpc
                     Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap     Buffer
  0  Online           Testing  10         0       24     21     12    511        31          0
  1  Empty
  2  Empty
  3  Empty
  4  Empty
  5  Empty
  6  Empty
  7  Empty
  8  Empty
  9  Empty
 10  Empty
 11  Empty

If that's all well and good, power off the VCP.

cmd
# VCP

request system power-off

Power Off the system ? [yes,no] (no) yes

To exit the console, use one of these key combinations (assuming an English keyboard).

  • CTRL + ]
  • CTRL + 5
  • Press and hold CTRL and SHIFT while pressing 6 then ]

VFP Configuration

Log in to the VFP with the username/password root and prepare the VM for use with Vagrant.

First up edit the SSH config.

cmd
# VFP

sed -i 's/PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/' /etc/ssh/sshd_config
service sshd restart
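
Before moving on, a quick grep confirms the sed edits took effect (a sanity check, not a required step).

cmd
grep -E '^(PermitRootLogin|PasswordAuthentication|PubkeyAuthentication)' /etc/ssh/sshd_config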

Again, due to the character limit when pasting over the console, grab the ext interface IP address and SSH to the guest.

cmd
# VFP

ip addr | grep 192.168.121
    inet 192.168.121.107/24 brd 192.168.121.255 scope global ext
cmd
# VFP

ssh root@192.168.121.107
root@192.168.121.107's password: <root>
Last login: Thu Aug 30 13:26:03 2018
root@localhost:~#

The VFP uses the Wind River Linux OS, and most of the file system is read only. I could not figure out how to add users, so I will utilize the root user to log in to this host with Vagrant. I also could not write to the /root directory, which is the root user's default home directory, so I will change it to /etc/root . I will also utilize the Vagrant insecure_private_key with the root user.

cmd
# VFP

sed -i 's/root:x:0:0:root:\/root:\/bin\/sh/root:x:0:0:root:\/etc\/root:\/bin\/sh/' /etc/passwd
mkdir -p /etc/root/.ssh
echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key" >> /etc/root/.ssh/authorized_keys

The sshd service is always disabled when the VFP boots. Add a cron task to restart the sshd service every minute; this ensures sshd comes up within a minute of the VFP booting.

cmd
# VFP

crontab -e

# Add this line

* * * * * /bin/service sshd restart
Note
If you are connected to the box via SSH when the sshd service restarts, you will be disconnected. This is not a big deal, as you do not need to connect to the VFP during normal use of Vagrant.
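
As an aside, the same cron entry can be added non-interactively, which is handy if you script this prep step. This is a minimal sketch; it assumes any existing root crontab entries should be preserved.

cmd
(crontab -l 2>/dev/null; echo "* * * * * /bin/service sshd restart") | crontab -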

Exit out and confirm you can SSH in as the root user with the Vagrant insecure_private_key.

cmd
# VFP

ssh root@192.168.121.107 -i ~/.vagrant.d/insecure_private_key

Last login: Thu Aug 30 13:29:22 2018 from 192.168.121.1
root@localhost:~#

If you logged in successfully, shut down the VM.

cmd
# VFP

/sbin/shutdown -h now

To exit the console, use one of these key combinations (assuming an English keyboard).

  • CTRL + ]
  • CTRL + 5
  • Press and hold CTRL and SHIFT while pressing 6 then ]

VCP Box Creation

The good folks who maintain the vagrant-libvirt plugin have a script that can be used to convert qcow2 images to a Vagrant box. Download the create_box.sh conversion script.

cmd
curl -O https://raw.githubusercontent.com/vagrant-libvirt/vagrant-libvirt/master/tools/create_box.sh

Create a file called metadata.json with the following contents for the VCP box.

cmd
echo '{"provider":"libvirt","format":"qcow2","virtual_size":28}' > metadata.json

Create a VCP vagrant box with the create_box.sh script.

cmd
bash create_box.sh vcp-hda.qcow2

# output

{28}
==> Creating box, tarring and gzipping
./metadata.json
./Vagrantfile
./box.img
Total bytes written: 1356738560 (1.3GiB, 21MiB/s)
==> vcp-hda.box created
==> You can now add the box:
==>   'vagrant box add vcp-hda.box --name vcp-hda'

Create a metadata file called vcp.json so that the box is added with the correct version number.

cmd
cat << EOF > vcp.json
{
  "name": "juniper/vmx-vcp",
  "description": "Juniper vMX-VCP",
  "versions": [
    {
      "version": "18.2R1.9",
      "providers": [
        {
          "name": "libvirt",
          "url": "file:///home/bradmin/vagrant/boxes/juniper/vmx/vcp-hda.box"
        }
      ]
    }
  ]
}
EOF

Add the VCP box to Vagrant.

cmd
vagrant box add vcp.json

# output

==> box: Loading metadata for box 'vcp.json'
    box: URL: file:///home/bradmin/vagrant/boxes/juniper/vmx/vcp.json
==> box: Adding box 'juniper/vmx-vcp' (v18.2R1.9) for provider: libvirt
    box: Unpacking necessary files from: file:///home/bradmin/vagrant/boxes/juniper/vmx/vcp-hda.box
==> box: Successfully added box 'juniper/vmx-vcp' (v18.2R1.9) for 'libvirt'!

Confirm the VCP box was added successfully.

cmd
vagrant box list

# output

CumulusCommunity/cumulus-vx          (libvirt, 3.4.2)
arista/veos                          (libvirt, 4.20.1F)
extreme/xos                   (libvirt, 22.4.1.4)
juniper/vmx-vcp               (libvirt, 18.2R1.9)

The VCP also requires the hdb and hdc drives from the initial VCP build, but the vagrant-libvirt provider has a limitation: it can only package one storage volume with a box. Create a directory to store these volumes; Vagrant will copy them into place prior to building a box, using the triggers feature.

cmd
sudo mkdir /opt/vagrant/storage
sudo chmod 0775 /opt/vagrant/storage/
sudo chown root:libvirtd /opt/vagrant/storage/

mv vcp-hdb.qcow2 /opt/vagrant/storage/vmx-vcp-hdb-18.2R1.9-base.qcow2
mv vcp-hdc.img /opt/vagrant/storage/vmx-vcp-hdc-18.2R1.9-base.img
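
A quick listing confirms the base volumes are in place.

cmd
ls -l /opt/vagrant/storage/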

VFP Box Creation

Create a file called metadata.json with the following contents for the VFP box.

cmd
echo '{"provider":"libvirt","format":"qcow2","virtual_size":3}' > metadata.json

Convert vfp-hda.img to the qcow2 format.

cmd
qemu-img convert -f raw -O qcow2 vfp-hda.img vfp-hda.qcow2

Create a VFP vagrant box with the create_box.sh script.

cmd
bash create_box.sh vfp-hda.qcow2

# output

{3}
==> Creating box, tarring and gzipping
./metadata.json
./Vagrantfile
./box.img
Total bytes written: 333250560 (318MiB, 17MiB/s)
==> vfp-hda.box created
==> You can now add the box:
==>   'vagrant box add vfp-hda.box --name vfp-hda'

Create a metadata file called vfp.json so that the box is added with the correct version number.

cmd
cat << EOF > vfp.json
{
  "name": "juniper/vmx-vfp",
  "description": "Juniper vMX-VFP",
  "versions": [
    {
      "version": "18.2R1.9",
      "providers": [
        {
          "name": "libvirt",
          "url": "file:///home/bradmin/vagrant/boxes/juniper/vmx/vfp-hda.box"
        }
      ]
    }
  ]
}
EOF

Add the VFP box to Vagrant.

cmd
vagrant box add vfp.json

# output

==> box: Loading metadata for box 'vfp.json'
    box: URL: file:///home/bradmin/vagrant/boxes/juniper/vmx/vfp.json
==> box: Adding box 'juniper/vmx-vfp' (v18.2R1.9) for provider: libvirt
    box: Unpacking necessary files from: file:///home/bradmin/vagrant/boxes/juniper/vmx/vfp-hda.box
==> box: Successfully added box 'juniper/vmx-vfp' (v18.2R1.9) for 'libvirt'!

Confirm the VFP box was added successfully.

cmd
vagrant box list

# output

CumulusCommunity/cumulus-vx          (libvirt, 3.4.2)
arista/veos                          (libvirt, 4.20.1F)
extreme/xos                   (libvirt, 22.4.1.4)
juniper/vmx-vcp               (libvirt, 18.2R1.9)
juniper/vmx-vfp               (libvirt, 18.2R1.9)

Testing

Now that the boxes are installed, create a Vagrantfile and confirm you can build VMs with Vagrant.

Create a test directory.

cmd
mkdir ~/vmx-test && cd ~/vmx-test

Add a Vagrantfile to the test directory with the following contents.

file
# -*- mode: ruby -*-
# vi: set ft=ruby :

def get_mac(oui="28:b7:ad")
  "Generate a MAC address"
  nic = (1..3).map{"%0.2x"%rand(256)}.join(":")
  return "#{oui}:#{nic}"
end

cwd = Dir.pwd.split("/").last
username = ENV['USER']
domain_prefix = "#{username}_#{cwd}"

Vagrant.configure("2") do |config|

  config.vm.define "vmx-vcp-01" do |node|
    guest_name = "vmx-vcp-01"
    node.vm.box = "juniper/vmx-vcp"
    node.vm.box_version = "18.2R1.9"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 1
      domain.memory = 1024
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 1
      domain.storage :file, :path => "#{username}-#{guest_name}-vmx-vcp-hdb-18.2R1.9-base.qcow2", :size => "196870144", :type => "qcow2", :bus => "ide", :device => "hdb", :allow_existing => true
      domain.storage :file, :path => "#{username}-#{guest_name}-vmx-vcp-hdc-18.2R1.9-base.img", :size => "16777216", :type => "raw", :bus => "ide", :device => "hdc", :allow_existing => true
    end
    add_volumes = [
      "virsh vol-create-as disk1 #{username}-#{guest_name}-vmx-vcp-hdb-18.2R1.9-base.qcow2 196870144",
      "sleep 1",
      "virsh vol-upload --pool disk1 #{username}-#{guest_name}-vmx-vcp-hdb-18.2R1.9-base.qcow2 /opt/vagrant/storage/vmx-vcp-hdb-18.2R1.9-base.qcow2",
      "sleep 1",
      "virsh vol-create-as disk1 #{username}-#{guest_name}-vmx-vcp-hdc-18.2R1.9-base.img 16777216",
      "sleep 1",
      "virsh vol-upload --pool disk1 #{username}-#{guest_name}-vmx-vcp-hdc-18.2R1.9-base.img /opt/vagrant/storage/vmx-vcp-hdc-18.2R1.9-base.img",
      "sleep 1"
    ]
    add_volumes.each do |i|
      node.trigger.before :up do |trigger|
        trigger.name = "add-volumes"
        trigger.info = "Adding Volumes"
        trigger.run = {inline: i}
      end
    end

    delete_volumes = [
      "virsh vol-delete #{username}-#{guest_name}-vmx-vcp-hdb-18.2R1.9-base.qcow2 disk1",
      "virsh vol-delete #{username}-#{guest_name}-vmx-vcp-hdc-18.2R1.9-base.img disk1"
    ]
    delete_volumes.each do |i|
      node.trigger.after :destroy do |trigger|
        trigger.name = "remove-volumes"
        trigger.info = "Removing Volumes"
        trigger.run = {inline: i}
      end
    end

    node.vm.network :private_network,
      # vmx-vcp-1-int1 <--> vmx-vfp-1-int1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.255.255.1",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.255.255.2",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "internal",
      auto_config: false

  end
  config.vm.define "vmx-vfp-01" do |node|
    guest_name = "vmx-vfp-01"
    node.vm.box = "juniper/vmx-vfp"
    node.vm.box_version = "18.2R1.9"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false
    node.ssh.username = "root"

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 3
      domain.memory = 4096
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 11
    end

    node.vm.network :private_network,
      # vmx-vfp-1-int1 <--> vmx-vcp-1-int1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.255.255.2",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.255.255.1",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "internal",
      auto_config: false

  end
end

The two triggers in the Vagrantfile copy the hdb and hdc base storage volumes required by the VCP prior to build, and delete those storage volumes when the VCP is destroyed.

Now vagrant up and confirm you can log in to the VCP.

cmd
vagrant up

# output

Bringing machine 'vmx-vcp-1' up with 'libvirt' provider...
Bringing machine 'vmx-vfp-1' up with 'libvirt' provider...
==> vmx-vcp-1: Running triggers before up ...
==> vmx-vfp-1: Checking if box 'juniper/vmx-vfp' is up to date...
==> vmx-vcp-1: Running trigger: add-volumes...
==> vmx-vcp-1: Adding Volumes
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: virsh vol-create-as disk1 vmx-vcp-1-vmx-vcp-hdb-18.2R1.9.img 1G
    vmx-vcp-1: Vol vmx-vcp-1-vmx-vcp-hdb-18.2R1.9.img created
==> vmx-vcp-1: Running trigger: add-volumes...
==> vmx-vcp-1: Adding Volumes
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: sleep 1
==> vmx-vfp-1: Uploading base box image as volume into libvirt storage...
==> vmx-vcp-1: Running trigger: add-volumes...
==> vmx-vcp-1: Adding Volumes
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: virsh vol-upload --pool disk1 vmx-vcp-1-vmx-vcp-hdb-18.2R1.9.img /opt/vagrant/storage/vmx-vcp-hdb-18.2R1.9-base.img
Progress: 36%
==> vmx-vcp-1: Running trigger: add-volumes...
==> vmx-vcp-1: Adding Volumes
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: sleep 1
==> vmx-vfp-1: Creating image (snapshot of base box volume).
==> vmx-vcp-1: Running trigger: add-volumes...
==> vmx-vcp-1: Adding Volumes
==> vmx-vfp-1: Creating domain with the following settings...
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: virsh vol-create-as disk1 vmx-vcp-1-vmx-vcp-hdc-18.2R1.9.img 1G
==> vmx-vfp-1:  -- Name:              vmx-test_vmx-vfp-1
==> vmx-vfp-1:  -- Domain type:       kvm
==> vmx-vfp-1:  -- Cpus:              3
==> vmx-vfp-1:
==> vmx-vfp-1:  -- Feature:           acpi
==> vmx-vfp-1:  -- Feature:           apic
==> vmx-vfp-1:  -- Feature:           pae
==> vmx-vfp-1:  -- Memory:            4096M
==> vmx-vfp-1:  -- Management MAC:
==> vmx-vfp-1:  -- Loader:
==> vmx-vfp-1:  -- Base box:          juniper/vmx-vfp
==> vmx-vfp-1:  -- Storage pool:      disk1
==> vmx-vfp-1:  -- Image:             /disk1/libvirt/images/vmx-test_vmx-vfp-1.img (3G)
==> vmx-vfp-1:  -- Volume Cache:      default
==> vmx-vfp-1:  -- Kernel:
==> vmx-vfp-1:  -- Initrd:
==> vmx-vfp-1:  -- Graphics Type:     vnc
==> vmx-vfp-1:  -- Graphics Port:     -1
==> vmx-vfp-1:  -- Graphics IP:       127.0.0.1
==> vmx-vfp-1:  -- Graphics Password: Not defined
==> vmx-vfp-1:  -- Video Type:        cirrus
==> vmx-vfp-1:  -- Video VRAM:        9216
==> vmx-vfp-1:  -- Sound Type:
==> vmx-vfp-1:  -- Keymap:            en-us
==> vmx-vfp-1:  -- TPM Path:
==> vmx-vfp-1:  -- INPUT:             type=mouse, bus=ps2
==> vmx-vfp-1: Creating shared folders metadata...
==> vmx-vfp-1: Starting domain.
    vmx-vcp-1: Vol vmx-vcp-1-vmx-vcp-hdc-18.2R1.9.img created
==> vmx-vfp-1: Waiting for domain to get an IP address...
==> vmx-vcp-1: Running trigger: add-volumes...
==> vmx-vcp-1: Adding Volumes
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: sleep 1
==> vmx-vcp-1: Running trigger: add-volumes...
==> vmx-vcp-1: Adding Volumes
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: virsh vol-upload --pool disk1 vmx-vcp-1-vmx-vcp-hdc-18.2R1.9.img /opt/vagrant/storage/vmx-vcp-hdc-18.2R1.9-base.img

==> vmx-vcp-1: Checking if box 'juniper/vmx-vcp' is up to date...
==> vmx-vcp-1: Uploading base box image as volume into libvirt storage...
==> vmx-vcp-1: Creating image (snapshot of base box volume).
==> vmx-vfp-1: Waiting for SSH to become available...
==> vmx-vcp-1: Creating domain with the following settings...
==> vmx-vcp-1:  -- Name:              vmx-test_vmx-vcp-1
==> vmx-vcp-1:  -- Domain type:       kvm
==> vmx-vcp-1:  -- Cpus:              1
==> vmx-vcp-1:
==> vmx-vcp-1:  -- Feature:           acpi
==> vmx-vcp-1:  -- Feature:           apic
==> vmx-vcp-1:  -- Feature:           pae
==> vmx-vcp-1:  -- Memory:            1024M
==> vmx-vcp-1:  -- Management MAC:
==> vmx-vcp-1:  -- Loader:
==> vmx-vcp-1:  -- Base box:          juniper/vmx-vcp
==> vmx-vcp-1:  -- Storage pool:      disk1
==> vmx-vcp-1:  -- Image:             /disk1/libvirt/images/vmx-test_vmx-vcp-1.img (28G)
==> vmx-vcp-1:  -- Volume Cache:      default
==> vmx-vcp-1:  -- Kernel:
==> vmx-vcp-1:  -- Initrd:
==> vmx-vcp-1:  -- Graphics Type:     vnc
==> vmx-vcp-1:  -- Graphics Port:     -1
==> vmx-vcp-1:  -- Graphics IP:       127.0.0.1
==> vmx-vcp-1:  -- Graphics Password: Not defined
==> vmx-vcp-1:  -- Video Type:        cirrus
==> vmx-vcp-1:  -- Video VRAM:        9216
==> vmx-vcp-1:  -- Sound Type:
==> vmx-vcp-1:  -- Keymap:            en-us
==> vmx-vcp-1:  -- TPM Path:
==> vmx-vcp-1:  -- Disks:         hdb(qcow2,1G), hdc(raw,1G)
==> vmx-vcp-1:  -- Disk(hdb):     /disk1/libvirt/images/vmx-vcp-1-vmx-vcp-hdb-18.2R1.9.img (Remove only manually) Not created - using existed.
==> vmx-vcp-1:  -- Disk(hdc):     /disk1/libvirt/images/vmx-vcp-1-vmx-vcp-hdc-18.2R1.9.img (Remove only manually) Not created - using existed.
==> vmx-vcp-1:  -- INPUT:             type=mouse, bus=ps2
==> vmx-vcp-1: Creating shared folders metadata...
==> vmx-vcp-1: Starting domain.
==> vmx-vcp-1: Waiting for domain to get an IP address...
==> vmx-vfp-1: Configuring and enabling network interfaces...
==> vmx-vcp-1: Waiting for SSH to become available...
==> vmx-vcp-1: Configuring and enabling network interfaces...

Confirm you can log in to the VCP box.

cmd
# Host terminal

vagrant ssh vmx-vcp-1
Last login: Thu Aug 30 13:18:44 2018 from 192.168.121.1
--- JUNOS 18.2R1.9 Kernel 64-bit  JNPR-11.0-20180614.6c3f819_buil
vagrant@vmx>

# Now in vagrant box terminal

vagrant@vmx> show version
Hostname: vmx
Model: vmx
Junos: 18.2R1.9
JUNOS OS Kernel 64-bit  [20180614.6c3f819_builder_stable_11]
JUNOS OS libs [20180614.6c3f819_builder_stable_11]
JUNOS OS runtime [20180614.6c3f819_builder_stable_11]
JUNOS OS time zone information [20180614.6c3f819_builder_stable_11]
JUNOS network stack and utilities [20180628.003405_builder_junos_182_r1]
JUNOS libs [20180628.003405_builder_junos_182_r1]
JUNOS OS libs compat32 [20180614.6c3f819_builder_stable_11]
.
. <snip>
.
JUNOS AppId Services [20180628.003405_builder_junos_182_r1]
JUNOS Services Application Level Gateways [20180628.003405_builder_junos_182_r1]
JUNOS Services AACL Container package [20180628.003405_builder_junos_182_r1]
JUNOS Extension Toolkit [20180628.003405_builder_junos_182_r1]
JUNOS J-Insight [20180628.003405_builder_junos_182_r1]
JUNOS Online Documentation [20180628.003405_builder_junos_182_r1]
JUNOS jail runtime [20180614.6c3f819_builder_stable_11]
Note
At this point you can add a trial license if you desire. The trial license can be found here and the process for installing it can be found here.

We have confirmed we can build and log in to the box with Vagrant. Lastly, let's clean up a bit.

cmd
# back in host shell

vagrant destroy -f

# output

==> vmx-vfp-1: Removing domain...
==> vmx-vcp-1: Removing domain...
==> vmx-vcp-1: Running triggers after destroy ...
==> vmx-vcp-1: Running trigger: remove-volumes...
==> vmx-vcp-1: Removing Volumes
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: virsh vol-delete vmx-vcp-1-vmx-vcp-hdb-18.2R1.9.img disk1
    vmx-vcp-1: Vol vmx-vcp-1-vmx-vcp-hdb-18.2R1.9.img deleted
==> vmx-vcp-1: Running trigger: remove-volumes...
==> vmx-vcp-1: Removing Volumes
    vmx-vcp-1: Running local: Inline script
    vmx-vcp-1: virsh vol-delete vmx-vcp-1-vmx-vcp-hdc-18.2R1.9.img disk1
    vmx-vcp-1: Vol vmx-vcp-1-vmx-vcp-hdc-18.2R1.9.img deleted

Remove the original KVM images.

cmd
rm -f ~/vagrant/boxes/juniper/vmx/*.{img,qcow2}
rm -rf ~/vagrant/boxes/juniper/vmx/vmx

Undefine the VMs used to build the boxes.

cmd
virsh undefine vmx-vcp
virsh undefine vmx-vfp

# output

Domain vmx-vcp has been undefined

Domain vmx-vfp has been undefined

Remove the vmx-int network.

cmd
virsh net-destroy vmx-int
virsh net-undefine vmx-int

Outro

The Juniper vMX is great for use in a lab to test out Juniper-related features. It requires two VMs, but I think that is worth the extra effort to be able to run a vMX for routing-based labs instead of a vSRX in packet mode.