Intro

In this post I will cover how to create Juniper vQFX 18.4R1.8 Vagrant boxes for use with the vagrant-libvirt provider. This is an update to a previous post. Thankfully, Juniper has released newer versions of the vQFX Vagrant boxes for the VirtualBox provider that can be used to create libvirt boxes in a much simpler fashion than before.

With other Juniper VMs such as the vSRX and vMX, there are some required steps to prep the KVM host; these may be required for the vQFX as well. I did not find any documentation directly stating this, but if you have issues this could be a good place to look.
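
As a quick sanity check of the host, the following generic KVM/libvirt checks (not vQFX-specific requirements) can be run:

cmd
# Confirm the CPU supports hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo

# Confirm the KVM kernel modules are loaded
lsmod | grep kvm

# Ask libvirt to validate the host configuration
virt-host-validate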

This post assumes a working installation of Vagrant with the vagrant-libvirt plugin already installed. You can follow this post to get the vagrant-libvirt plugin installed.
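
If you only need the plugin itself, it can usually be installed and verified as follows (build dependencies vary by distribution):

cmd
vagrant plugin install vagrant-libvirt

# Confirm the plugin is installed
vagrant plugin list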

For reference the following software will be used in this post.

  • Juniper vQFX - 18.4R1.8
  • Vagrant - 2.2.4
  • vagrant-libvirt - 0.0.45

vQFX Overview

The vQFX is made up of two VMs: one for the routing engine (RE) that runs Junos, and one for the packet forwarding engine (PFE) that runs Wind River Linux. Although there are two separate VMs, logically they act as a single device. Conceptually, a vQFX looks like the below diagram.

[Figure: vqfx.svg — conceptual diagram of the vQFX, showing the RE and PFE VMs and their interconnects]

The first two interfaces on each VM are reserved: the first is for management and the second is a communication channel between the two VMs. Additionally, the third interface on the RE is unusable. The PFE requires only the two reserved interfaces, while the RE supports an additional 12 interfaces for data plane traffic (these appear in Junos as xe-0/0/0 through xe-0/0/11).

Download

Juniper provides ready-built Vagrant boxes for use with the VirtualBox provider. We will use these images to create libvirt Vagrant boxes.

Navigate to the Juniper software download page and under the Application Package section download both the PFE: Vagrant Package for Virtualbox (VQFX10K PFE) and RE: Vagrant Package for Virtualbox (VQFX10K Routing Engine) boxes.

Note
You will likely need to request access to download these images, which may involve raising a case. AFAIK a valid support contract is not required to get access to these images.

Install

Create and change into a directory for the vQFX files. I like to keep my custom Vagrant boxes under ~/vagrant/boxes/.

cmd
mkdir -p ~/vagrant/boxes/juniper/vqfx
cd ~/vagrant/boxes/juniper/vqfx

Copy the PFE and RE files downloaded earlier to the ~/vagrant/boxes/juniper/vqfx/ directory.

cmd
cp ~/Downloads/vqfx-18.4R1.8-* .

Create directories to extract both the PFE and RE images.

cmd
mkdir {pfe,re}

PFE Install

Extract the HDD from the PFE Vagrant box.

cmd
tar -xvf vqfx-18.4R1.8-pfe-virtualbox.gz -C pfe

The PFE is already prepared as a Vagrant box, so the only steps required to get it working with libvirt are to convert the disk to a qcow2 image and package it up as a libvirt box.

cmd
qemu-img convert -f vmdk -O qcow2 pfe/packer-virtualbox-ovf-1520878605-disk001.vmdk pfe.qcow2
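
Optionally, verify the conversion; the reported virtual size should line up with the virtual_size field used in the metadata.json file created below.

cmd
qemu-img info pfe.qcow2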

The good folks who maintain the vagrant-libvirt plugin provide a script that can be used to convert qcow2 images to a Vagrant box. Download the libvirt conversion script.

cmd
curl -O https://raw.githubusercontent.com/vagrant-libvirt/vagrant-libvirt/master/tools/create_box.sh

Create a file called metadata.json with the following contents for the PFE box.

cmd
echo '{"provider":"libvirt","format":"qcow2","virtual_size":6}' > metadata.json

Create a PFE vagrant box with the create_box.sh script.

cmd
bash create_box.sh pfe.qcow2

# output

==> Creating box, tarring and gzipping
./metadata.json
./Vagrantfile
./box.img
Total bytes written: 1356738560 (1.3GiB, 21MiB/s)
==> pfe.box created
==> You can now add the box:
==>   'vagrant box add pfe.box --name pfe'

Create a metadata file called pfe.json so that the box is added with the correct version number. Adjust the file:// URL to match your own path.

cmd
cat << EOF > pfe.json
{
  "name": "juniper/vqfx-pfe",
  "description": "Juniper vQFX Packet Forwarding Engine",
  "versions": [
    {
      "version": "18.4R1.8",
      "providers": [
        {
          "name": "libvirt",
          "url": "file:///home/bradmin/vagrant/boxes/juniper/vqfx/pfe.box"
        }
      ]
    }
  ]
}
EOF

Add the PFE box to Vagrant.

cmd
vagrant box add pfe.json

# output

==> box: Loading metadata for box 'pfe.json'
    box: URL: file:///home/bradmin/vagrant/boxes/juniper/vqfx/pfe.json
==> box: Adding box 'juniper/vqfx-pfe' (v18.4R1.8) for provider: libvirt
    box: Unpacking necessary files from: file:///home/bradmin/vagrant/boxes/juniper/vqfx/pfe.box
==> box: Successfully added box 'juniper/vqfx-pfe' (v18.4R1.8) for 'libvirt'!

Confirm the PFE box was added successfully.

cmd
vagrant box list

# output

CumulusCommunity/cumulus-vx   (libvirt, 3.4.2)
arista/veos                   (libvirt, 4.20.1F)
extreme/xos                   (libvirt, 22.4.1.4)
juniper/vmx-vcp               (libvirt, 19.1R1.6)
juniper/vmx-vfp               (libvirt, 19.1R1.6)
juniper/vqfx-pfe              (libvirt, 18.4R1.8)

RE Install

As with the PFE, the RE is already prepared as a Vagrant box, so all that is required to get it working with libvirt is to extract the HDD, convert the disk to a qcow2 image, and package it up as a libvirt box.
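
Extract the HDD from the RE Vagrant box (this assumes the RE download follows the same naming pattern as the PFE box):

cmd
tar -xvf vqfx-18.4R1.8-re-virtualbox.gz -C re

Convert the extracted disk to a qcow2 image.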

cmd
qemu-img convert -f vmdk -O qcow2 re/packer-virtualbox-ovf-1547241286-disk001.vmdk re.qcow2

Create a file called metadata.json with the following contents for the RE box.

cmd
echo '{"provider":"libvirt","format":"qcow2","virtual_size":4}' > metadata.json

Create an RE vagrant box with the create_box.sh script.

cmd
bash create_box.sh re.qcow2

# output

==> Creating box, tarring and gzipping
./metadata.json
./Vagrantfile
./box.img
Total bytes written: 1356738560 (1.3GiB, 21MiB/s)
==> re.box created
==> You can now add the box:
==>   'vagrant box add re.box --name re'

Create a metadata file called re.json so that the box is added with the correct version number.

cmd
cat << EOF > re.json
{
  "name": "juniper/vqfx-re",
  "description": "Juniper vQFX Routing Engine",
  "versions": [
    {
      "version": "18.4R1.8",
      "providers": [
        {
          "name": "libvirt",
          "url": "file:///home/bradmin/vagrant/boxes/juniper/vqfx/re.box"
        }
      ]
    }
  ]
}
EOF

Add the RE box to Vagrant.

cmd
vagrant box add re.json

# output

==> box: Loading metadata for box 're.json'
    box: URL: file:///home/bradmin/vagrant/boxes/juniper/vqfx/re.json
==> box: Adding box 'juniper/vqfx-re' (v18.4R1.8) for provider: libvirt
    box: Unpacking necessary files from: file:///home/bradmin/vagrant/boxes/juniper/vqfx/re.box
==> box: Successfully added box 'juniper/vqfx-re' (v18.4R1.8) for 'libvirt'!

Confirm the RE box was added successfully.

cmd
vagrant box list

# output

CumulusCommunity/cumulus-vx   (libvirt, 3.4.2)
arista/veos                   (libvirt, 4.20.1F)
extreme/xos                   (libvirt, 22.4.1.4)
juniper/vmx-vcp               (libvirt, 19.1R1.6)
juniper/vmx-vfp               (libvirt, 19.1R1.6)
juniper/vqfx-pfe              (libvirt, 18.4R1.8)
juniper/vqfx-re               (libvirt, 18.4R1.8)

Testing

Now that the boxes are installed, create a Vagrantfile and confirm you can build VMs with Vagrant.

Create a test directory.

cmd
mkdir ~/vqfx-test && cd ~/vqfx-test

Add a Vagrantfile to the test directory with the following contents.

file
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Created: 2019-09-01--07-54-13

def get_mac(oui="28:b7:ad")
  "Generate a MAC address"
  nic = (1..3).map{"%0.2x"%rand(256)}.join(":")
  return "#{oui}:#{nic}"
end

cwd = Dir.pwd.split("/").last
username = ENV['USER']
domain_prefix = "#{username}_#{cwd}"
domain_uuid = "9e7c5277-f332-5311-a344-20dc47a99a5b"

Vagrant.require_version ">= 2.1.0"
Vagrant.configure("2") do |config|

  config.vm.define "sw1" do |node|
    guest_name = "sw1"
    node.vm.box = "juniper/vqfx-re"
    node.vm.box_version = "18.4R1.8"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 1
      domain.memory = 1024
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 14
      domain.nic_model_type = "e1000"
    end

    node.vm.network :private_network,
      # sw1-internal-1 <--> sw1-pfe-internal-1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.1",
      :libvirt__tunnel_local_port => 11001,
      :libvirt__tunnel_ip => "127.197.249.2",
      :libvirt__tunnel_port => 11001,
      :libvirt__iface_name => "sw1-internal-1-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw1-reserved-1 <--> blackhole-666
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.1",
      :libvirt__tunnel_local_port => 12001,
      :libvirt__tunnel_ip => "127.6.6.6",
      :libvirt__tunnel_port => 12666,
      :libvirt__iface_name => "sw1-reserved-1-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw1-xe-0/0/0 <--> sw2-xe-0/0/0
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.1",
      :libvirt__tunnel_local_port => 10000,
      :libvirt__tunnel_ip => "127.197.249.3",
      :libvirt__tunnel_port => 10000,
      :libvirt__iface_name => "sw1-xe-0/0/0-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw1-xe-0/0/1 <--> sw2-xe-0/0/1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.1",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.197.249.3",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "sw1-xe-0/0/1-#{domain_uuid}",
      auto_config: false

  end
  config.vm.define "sw1-pfe" do |node|
    guest_name = "sw1-pfe"
    node.vm.box = "juniper/vqfx-pfe"
    node.vm.box_version = "18.4R1.8"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 1
      domain.memory = 2048
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 1
      domain.nic_model_type = "e1000"
    end

    node.vm.network :private_network,
      # sw1-pfe-internal-1 <--> sw1-internal-1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.2",
      :libvirt__tunnel_local_port => 11001,
      :libvirt__tunnel_ip => "127.197.249.1",
      :libvirt__tunnel_port => 11001,
      :libvirt__iface_name => "sw1-pfe-internal-1-#{domain_uuid}",
      auto_config: false

  end
  config.vm.define "sw2" do |node|
    guest_name = "sw2"
    node.vm.box = "juniper/vqfx-re"
    node.vm.box_version = "18.4R1.8"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 1
      domain.memory = 1024
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 14
      domain.nic_model_type = "e1000"
    end

    node.vm.network :private_network,
      # sw2-internal-1 <--> sw2-pfe-internal-1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.3",
      :libvirt__tunnel_local_port => 11001,
      :libvirt__tunnel_ip => "127.197.249.4",
      :libvirt__tunnel_port => 11001,
      :libvirt__iface_name => "sw2-internal-1-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw2-reserved-1 <--> blackhole-666
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.3",
      :libvirt__tunnel_local_port => 12001,
      :libvirt__tunnel_ip => "127.6.6.6",
      :libvirt__tunnel_port => 12666,
      :libvirt__iface_name => "sw2-reserved-1-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw2-xe-0/0/0 <--> sw1-xe-0/0/0
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.3",
      :libvirt__tunnel_local_port => 10000,
      :libvirt__tunnel_ip => "127.197.249.1",
      :libvirt__tunnel_port => 10000,
      :libvirt__iface_name => "sw2-xe-0/0/0-#{domain_uuid}",
      auto_config: false

    node.vm.network :private_network,
      # sw2-xe-0/0/1 <--> sw1-xe-0/0/1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.3",
      :libvirt__tunnel_local_port => 10001,
      :libvirt__tunnel_ip => "127.197.249.1",
      :libvirt__tunnel_port => 10001,
      :libvirt__iface_name => "sw2-xe-0/0/1-#{domain_uuid}",
      auto_config: false

  end
  config.vm.define "sw2-pfe" do |node|
    guest_name = "sw2-pfe"
    node.vm.box = "juniper/vqfx-pfe"
    node.vm.box_version = "18.4R1.8"
    node.vm.guest = :tinycore
    node.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

    node.ssh.insert_key = false

    node.vm.provider :libvirt do |domain|
      domain.default_prefix = "#{domain_prefix}"
      domain.cpus = 1
      domain.memory = 2048
      domain.disk_bus = "ide"
      domain.nic_adapter_count = 1
      domain.nic_model_type = "e1000"
    end

    node.vm.network :private_network,
      # sw2-pfe-internal-1 <--> sw2-internal-1
      :mac => "#{get_mac()}",
      :libvirt__tunnel_type => "udp",
      :libvirt__tunnel_local_ip => "127.197.249.4",
      :libvirt__tunnel_local_port => 11001,
      :libvirt__tunnel_ip => "127.197.249.3",
      :libvirt__tunnel_port => 11001,
      :libvirt__iface_name => "sw2-pfe-internal-1-#{domain_uuid}",
      auto_config: false

  end

end

The private networks in this Vagrantfile are point-to-point UDP tunnels between loopback addresses, which avoids creating host bridges for the inter-VM links. Now run vagrant up and confirm you can log in to the RE.

cmd
vagrant up

# output

Bringing machine 'sw1' up with 'libvirt' provider...
Bringing machine 'sw1-pfe' up with 'libvirt' provider...
Bringing machine 'sw2' up with 'libvirt' provider...
Bringing machine 'sw2-pfe' up with 'libvirt' provider...
==> sw1-pfe: Checking if box 'juniper/vqfx-pfe' version '18.4R1.8' is up to date...
==> sw1: Checking if box 'juniper/vqfx-re' version '18.4R1.8' is up to date...
==> sw2-pfe: Checking if box 'juniper/vqfx-pfe' version '18.4R1.8' is up to date...
==> sw2: Checking if box 'juniper/vqfx-re' version '18.4R1.8' is up to date...
==> sw2-pfe: Creating image (snapshot of base box volume).
==> sw2-pfe: Creating domain with the following settings...
==> sw2-pfe:  -- Name:              bradmin_vqfx-test_sw2-pfe
==> sw2-pfe:  -- Domain type:       kvm
==> sw2-pfe:  -- Cpus:              1
==> sw2-pfe:  -- Feature:           acpi
==> sw2-pfe:  -- Feature:           apic
==> sw2-pfe:  -- Feature:           pae
==> sw2-pfe:  -- Memory:            2048M
==> sw2-pfe:  -- Management MAC:
==> sw2-pfe:  -- Loader:
==> sw2-pfe:  -- Nvram:
==> sw2-pfe:  -- Base box:          juniper/vqfx-pfe
==> sw2-pfe:  -- Storage pool:      default
==> sw2-pfe:  -- Image:             /var/lib/libvirt/images/bradmin_vqfx-test_sw2-pfe.img (6G)
==> sw2-pfe:  -- Volume Cache:      default
==> sw2-pfe:  -- Kernel:
==> sw2-pfe:  -- Initrd:
==> sw2-pfe:  -- Graphics Type:     vnc
==> sw2-pfe:  -- Graphics Port:     -1
==> sw2-pfe:  -- Graphics IP:       127.0.0.1
==> sw2-pfe:  -- Graphics Password: Not defined
==> sw2-pfe:  -- Video Type:        cirrus
==> sw2-pfe:  -- Video VRAM:        9216
==> sw2-pfe:  -- Sound Type:
==> sw2-pfe:  -- Keymap:            en-us
==> sw2-pfe:  -- TPM Path:
==> sw2-pfe:  -- INPUT:             type=mouse, bus=ps2
==> sw2-pfe: Creating shared folders metadata...
==> sw2-pfe: Starting domain.
==> sw2-pfe: Waiting for domain to get an IP address...
==> sw1-pfe: Creating image (snapshot of base box volume).
==> sw1-pfe: Creating domain with the following settings...
==> sw1-pfe:  -- Name:              bradmin_vqfx-test_sw1-pfe
==> sw2: Creating image (snapshot of base box volume).
==> sw1: Creating image (snapshot of base box volume).
==> sw1-pfe:  -- Domain type:       kvm
==> sw2: Creating domain with the following settings...
==> sw1-pfe:  -- Cpus:              1
==> sw2:  -- Name:              bradmin_vqfx-test_sw2
==> sw1-pfe:  -- Feature:           acpi
==> sw2:  -- Domain type:       kvm
==> sw1-pfe:  -- Feature:           apic
==> sw1: Creating domain with the following settings...
==> sw1-pfe:  -- Feature:           pae
==> sw1:  -- Name:              bradmin_vqfx-test_sw1
==> sw2:  -- Cpus:              1
==> sw1-pfe:  -- Memory:            2048M
==> sw1:  -- Domain type:       kvm
==> sw2:  -- Feature:           acpi
==> sw2:  -- Feature:           apic
==> sw1-pfe:  -- Management MAC:
==> sw1:  -- Cpus:              1
==> sw1-pfe:  -- Loader:
==> sw1-pfe:  -- Nvram:
==> sw1:  -- Feature:           acpi
==> sw1-pfe:  -- Base box:          juniper/vqfx-pfe
==> sw2:  -- Feature:           pae
==> sw1:  -- Feature:           apic
==> sw1-pfe:  -- Storage pool:      default
==> sw1-pfe:  -- Image:             /var/lib/libvirt/images/bradmin_vqfx-test_sw1-pfe.img (6G)
==> sw1:  -- Feature:           pae
==> sw2:  -- Memory:            1024M
==> sw1-pfe:  -- Volume Cache:      default
==> sw1:  -- Memory:            1024M
==> sw2:  -- Management MAC:
==> sw1-pfe:  -- Kernel:
==> sw2:  -- Loader:
==> sw1:  -- Management MAC:
==> sw1-pfe:  -- Initrd:
==> sw1-pfe:  -- Graphics Type:     vnc
==> sw1:  -- Loader:
==> sw1-pfe:  -- Graphics Port:     -1
==> sw2:  -- Nvram:
==> sw2:  -- Base box:          juniper/vqfx-re
==> sw1-pfe:  -- Graphics IP:       127.0.0.1
==> sw1-pfe:  -- Graphics Password: Not defined
==> sw1:  -- Nvram:
==> sw2:  -- Storage pool:      default
==> sw1-pfe:  -- Video Type:        cirrus
==> sw1:  -- Base box:          juniper/vqfx-re
==> sw2:  -- Image:             /var/lib/libvirt/images/bradmin_vqfx-test_sw2.img (4G)
==> sw1:  -- Storage pool:      default
==> sw1-pfe:  -- Video VRAM:        9216
==> sw2:  -- Volume Cache:      default
==> sw1:  -- Image:             /var/lib/libvirt/images/bradmin_vqfx-test_sw1.img (4G)
==> sw2:  -- Kernel:
==> sw1-pfe:  -- Sound Type:
==> sw1:  -- Volume Cache:      default
==> sw2:  -- Initrd:
==> sw1-pfe:  -- Keymap:            en-us
==> sw2:  -- Graphics Type:     vnc
==> sw1:  -- Kernel:
==> sw2:  -- Graphics Port:     -1
==> sw1-pfe:  -- TPM Path:
==> sw1-pfe:  -- INPUT:             type=mouse, bus=ps2
==> sw2:  -- Graphics IP:       127.0.0.1
==> sw1:  -- Initrd:
==> sw2:  -- Graphics Password: Not defined
==> sw1:  -- Graphics Type:     vnc
==> sw2:  -- Video Type:        cirrus
==> sw1:  -- Graphics Port:     -1
==> sw1-pfe: Creating shared folders metadata...
==> sw2:  -- Video VRAM:        9216
==> sw1-pfe: Starting domain.
==> sw1:  -- Graphics IP:       127.0.0.1
==> sw2:  -- Sound Type:
==> sw1:  -- Graphics Password: Not defined
==> sw2:  -- Keymap:            en-us
==> sw1:  -- Video Type:        cirrus
==> sw2:  -- TPM Path:
==> sw1:  -- Video VRAM:        9216
==> sw1-pfe: Waiting for domain to get an IP address...
==> sw1:  -- Sound Type:
==> sw2:  -- INPUT:             type=mouse, bus=ps2
==> sw1:  -- Keymap:            en-us
==> sw1:  -- TPM Path:
==> sw1:  -- INPUT:             type=mouse, bus=ps2
==> sw2: Creating shared folders metadata...
==> sw2: Starting domain.
==> sw1: Creating shared folders metadata...
==> sw1: Starting domain.
==> sw2: Waiting for domain to get an IP address...
==> sw1: Waiting for domain to get an IP address...
==> sw2-pfe: Waiting for SSH to become available...
==> sw1-pfe: Waiting for SSH to become available...
==> sw1-pfe: Configuring and enabling network interfaces...
==> sw2-pfe: Configuring and enabling network interfaces...
==> sw1: Waiting for SSH to become available...
==> sw2: Waiting for SSH to become available...
==> sw1: Configuring and enabling network interfaces...
==> sw2: Configuring and enabling network interfaces...
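
At this point there should be four libvirt domains running, two per logical switch. The output below is illustrative; the names follow the default_prefix set in the Vagrantfile.

cmd
virsh list

# output

 Id   Name                        State
-----------------------------------------
 1    bradmin_vqfx-test_sw1       running
 2    bradmin_vqfx-test_sw1-pfe   running
 3    bradmin_vqfx-test_sw2       running
 4    bradmin_vqfx-test_sw2-pfe   running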

Confirm you can log in to the sw1 RE box.

cmd
# Host terminal

vagrant ssh sw1
--- JUNOS 18.4R1.8 built 2018-12-17 03:30:15 UTC
{master:0}
vagrant@vqfx>

# Now in vagrant box terminal

vagrant@vqfx> show version
fpc0:
--------------------------------------------------------------------------
Hostname: vqfx-re
Model: vqfx-10000
Junos: 18.4R1.8 limited
JUNOS Base OS boot [18.4R1.8]
JUNOS Base OS Software Suite [18.4R1.8]
JUNOS Crypto Software Suite [18.4R1.8]
JUNOS Online Documentation [18.4R1.8]
JUNOS Kernel Software Suite [18.4R1.8]
JUNOS Packet Forwarding Engine Support (qfx-10-f) [18.4R1.8]
JUNOS Routing Software Suite [18.4R1.8]
JUNOS jsd [i386-18.4R1.8-jet-1]
JUNOS SDN Software Suite [18.4R1.8]
JUNOS Enterprise Software Suite [18.4R1.8]
JUNOS Web Management [18.4R1.8]
JUNOS py-base-i386 [18.4R1.8]
JUNOS py-extensions-i386 [18.4R1.8]

Confirm that the RE is connected to the PFE.

cmd
vagrant@vqfx> show chassis fpc
                     Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap     Buffer
  0  Online           Testing   1         0        0      0      0    1024        0         85
  1  Empty
  2  Empty
  3  Empty
  4  Empty
  5  Empty
  6  Empty
  7  Empty
  8  Empty
  9  Empty

{master:0}
Note
It can take a couple of minutes for the PFE and RE to establish a connection to each other. Once that is established, the revenue ports (xe-0/0/X) will become available.
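
You can re-run the check from the host without an interactive session until slot 0 shows Online:

cmd
vagrant ssh sw1 -c "show chassis fpc"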

We have confirmed we can build and log in to the box with Vagrant. Lastly, let's clean up a bit.

Destroy the running lab.

cmd
vagrant destroy -f

# output

==> sw2-pfe: Removing domain...
==> sw2: Removing domain...
==> sw1-pfe: Removing domain...
==> sw1: Removing domain...

Remove the original downloaded images and the intermediate build files.

cmd
rm -rf ~/vagrant/boxes/juniper/vqfx/{pfe,re}
rm -f ~/vagrant/boxes/juniper/vqfx/*.qcow2
rm -f ~/vagrant/boxes/juniper/vqfx/*.gz

As a final note, the vQFX has a lot of interface config (et-/xe-0/0/0 through 71) defined by default. The below config, entered from configuration mode, can be used to delete all of the interface config after boot. In practice you would use a provisioner to apply this config automatically.

cmd
wildcard range delete interfaces et-0/0/[0-71]
wildcard range delete interfaces xe-0/0/[0-71]
wildcard range delete interfaces xe-0/0/[0-71]:[0-3]

Outro

The Juniper vQFX is a community-driven project; it's great that Juniper offers virtual flavors of their devices for testing and training purposes. As with the vMX, the vQFX requires two VMs, but I think it's worth the effort to be able to test Juniper switching features.