Tuesday, November 23, 2021

New Community: Compass L

Do you know Compass L yet...? This community offers a great opportunity to interact with other users, developers and architects of Linux and KVM on IBM Z!

For further information, see the flyer below, or head over right away and join the community here.


Monday, November 22, 2021

RHEL 8.5 AV Released

RHEL 8.5 Advanced Virtualization (AV) is out! See the official announcement and the release notes.

KVM is supported via Advanced Virtualization, and provides

  • QEMU v6.0, supporting virtio-fs on IBM Z
  • libvirt v7.6

Furthermore, RHEL 8.5 AV now supports persisting mediated devices.
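For illustration, one way to do this is with the mdevctl tool, assuming it is available on your system; a minimal sketch with a hypothetical UUID and a hypothetical vfio-ccw parent subchannel:

  # define the mediated device persistently so it is recreated after reboot
  # (UUID and parent subchannel 0.0.0120 are hypothetical examples;
  #  vfio_ccw-io is the mdev type for subchannel passthrough)
  $ mdevctl define --uuid 669d9b23-fe1b-4ecb-be08-a2fabca99b71 \
        --parent 0.0.0120 --type vfio_ccw-io
  # create the device now
  $ mdevctl start --uuid 669d9b23-fe1b-4ecb-be08-a2fabca99b71
  # list the defined mediated devices
  $ mdevctl list --defined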

For a detailed list of Linux on Z-specific changes, also see this blog entry at Red Hat.

IBM-specific documentation for Red Hat Enterprise Linux 8.5 is available at IBM Documentation here (in particular: Device Drivers, Features and Commands on Red Hat Enterprise Linux 8.5).
See here for how to enable AV in RHEL 8 installs.
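For reference, enabling AV typically amounts to switching the virt module stream; a minimal sketch, assuming the Advanced Virtualization stream is named virt:av (check the instructions linked above for the exact stream name in your release):

  # switch from the default virt stream to the Advanced Virtualization stream
  $ dnf module reset virt -y
  $ dnf module enable virt:av -y
  # install (or update to) the AV versions of the virtualization packages
  $ dnf install qemu-kvm libvirt virt-install -y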

Monday, October 18, 2021

Ubuntu 21.10 released

Ubuntu Server 21.10 is out!
It ships

  • Linux kernel 5.13 (including, among others, the features described here and here)
  • QEMU v6.0
  • libvirt v7.6
See the release notes here, and the blog entry at Canonical with Z-specific highlights here.

Wednesday, October 13, 2021

Documentation Update: KVM Virtual Server Management

Intended for KVM virtual server administrators, the "Linux on Z and LinuxONE - KVM Virtual Server Management" book illustrates how to set up, configure, and operate Linux on KVM instances and their virtual devices running on the KVM host and IBM Z hardware.

This major update includes libvirt commands and XML elements for managing the lifecycle of VFIO mediated devices, performance tuning tips, and a simplified method for configuring a virtual server for IBM Secure Execution for Linux.
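For illustration, attaching an already-created mediated device to a virtual server uses a libvirt hostdev element of roughly the following shape (the UUID is a hypothetical example; see the book for the authoritative syntax and the lifecycle commands around it):

  <hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
    <source>
      <address uuid='669d9b23-fe1b-4ecb-be08-a2fabca99b71'/>
    </source>
  </hostdev>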

Friday, July 30, 2021

qeth Devices: Promiscuous Modes, Live Guest Migration, and more

qeth devices, namely OSA-Express and HiperSockets, offer a vast array of functionality that is easy to get lost in. This entry illustrates some of the most commonly sought capabilities, while trying not to overwhelm the reader with too much background information.

TLDR:

  • IBM z14: For KVM, always use separate OSA devices on source and target for live guest migration (LGM); for OVS, use a primary bridgeport with OSA, and VNIC characteristics with HiperSockets.
  • IBM z15: For KVM with OVS, use VNIC characteristics for any qeth device; for KVM with MacVTap, use VNIC characteristics if you want to use the same device on source and target system in LGM scenarios.

 

Bridgeport Mode

Initially, the only way to enable promiscuous mode on OSA-Express adapters and HiperSockets was through the so-called bridgeport mode. Bridgeport mode distinguishes between ports as follows:

  • Primary bridgeport: The primary bridgeport receives all traffic for destination addresses unknown to the device. In other words: if the device receives data for a destination unknown to it, the data is not dropped but forwarded to the current primary bridgeport. This further implies that as soon as an operating system registers a MAC address with the device, traffic destined for that MAC address becomes "invisible" to the bridgeport.
    Note: Only a single operating system can use the primary bridgeport on an adapter at any time.
  • Secondary bridgeport: Whenever the operating system that currently holds the primary bridgeport gives it up, one of the secondary bridgeports becomes the new primary. An arbitrary number of operating systems can register as secondary bridgeports.

Bridgeport mode is available in Layer 2 mode only. Furthermore, HiperSockets devices need to be defined as external-bridged in IOCDS.
Use attributes in the device's sysfs directory as follows:

  • bridge_role: Set the desired role (none, primary, or secondary), and query the current one.
  • bridge_state: Query the current state (active or inactive)

Bridgeport mode effectively provides a promiscuous mode. Note, however, that in addition to enabling the primary bridgeport role on the device, promiscuous mode still has to be enabled on the respective interface!
All in all, typical usage of this feature looks as follows:

  $ echo primary >/sys/devices/qeth/0.0.bd00/bridge_role

  # verify that we got primary bridgeport, not secondary, and are active:
  $ cat /sys/devices/qeth/0.0.bd00/bridge_state
  active
  $ cat /sys/devices/qeth/0.0.bd00/bridge_role
  primary

  # enable promiscuous mode on the interface
  $ ip link set <interface> promisc on

The downside of this approach is that only a single operating system per device can enable the primary bridgeport mode, which limits scalability. Therefore, something better, with more functionality, was introduced to the platform.


VNIC Characteristics

Introduced with IBM z14/LinuxONE II for HiperSockets, and IBM z15/LinuxONE III for OSA, the VNIC characteristics feature provides a promiscuous mode for multiple operating systems attached to the same device, as well as additional functionality that can be very handy, especially with KVM.
The VNIC characteristics can be controlled through a number of attributes located in an extra subdirectory called vnicc in the device's sysfs directory.

Let us focus on two main functionalities.

Promiscuous Mode

Technically, the VNIC characteristics do not provide a traditional promiscuous mode (just as bridgeport mode does not in the literal sense), but rather emulate a self-learning switch. However, for users looking for a promiscuous mode that can be used in conjunction with a Linux bridge or an Open vSwitch, the end result is the same.

To activate, set the attributes as follows:

  echo 1 > /sys/devices/qeth/0.0.bd00/vnicc/flooding
  echo 1 > /sys/devices/qeth/0.0.bd00/vnicc/mcast_flooding
  echo 1 > /sys/devices/qeth/0.0.bd00/vnicc/learning

Again, in addition to enabling the feature on the device, promiscuous mode still has to be enabled on the respective interface:

  ip link set <interface> promisc on
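Once promiscuous mode is in place, the interface can be attached to a virtual switch. A minimal Open vSwitch sketch (the bridge name br0 is a hypothetical example):

  # create an Open vSwitch bridge and add the qeth interface as an uplink port
  ovs-vsctl add-br br0
  ovs-vsctl add-port br0 <interface>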


KVM Live Guest Migration

Administrators have two choices for providing connectivity to virtual servers running in KVM:

  • Via Open vSwitch: Requires a promiscuous mode, see above. Virtual servers migrated between the two Open vSwitches will have uninterrupted connectivity thanks to the devices being configured in promiscuous mode, provided that the networking architecture is set up accordingly. The two Open vSwitches may or may not share the same networking device.
  • Via MAC Address Takeover: This is only required in case both the source and the target KVM host share the same device and use MacVTap to connect to it. While the traffic will still run through the same device, some handshaking has to take place to make sure that the MAC address is configured correctly and traffic is forwarded to the target KVM host once migration has completed. This has to be authorized; otherwise, an attacker could divert traffic elsewhere.

Luckily, the VNIC characteristics feature offers functionality for MAC address takeover, too. To enable it, set the VNIC characteristics as follows:

On the source KVM host:

  echo 1 > /sys/devices/qeth/0.0.bd00/vnicc/takeover_learning

On the target KVM host:

  echo 1 > /sys/devices/qeth/0.0.bd00/vnicc/takeover_setvmac


Final Words

Note that bridgeport mode and VNIC characteristics are mutually exclusive! As soon as, for example, a single VNIC characteristics-related attribute is activated, bridgeport-related functionality becomes unavailable until that VNIC characteristic is disabled again.

Furthermore, check your Linux distribution's tools on how to persist the changes outlined above. On many distributions, chzdev (shipped with the s390-tools package) does the job, but not (yet) on all.
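For example, with chzdev the VNIC settings from above might be persisted roughly as follows. This is a sketch only, assuming chzdev accepts the sysfs attribute names directly and using hypothetical device IDs; check "man chzdev" and your distribution's documentation:

  # enable the qeth device persistently and store the VNIC attributes
  # (device IDs are hypothetical; attribute syntax may vary by s390-tools version)
  chzdev -e qeth 0.0.bd00:0.0.bd01:0.0.bd02 vnicc/learning=1 vnicc/flooding=1 vnicc/mcast_flooding=1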

This article only provides a brief overview: both promiscuous mode and the VNIC characteristics feature offer much more than what is covered here, which merely aims to provide enough information to get readers started with the most common use cases. For a deeper understanding, check the respective sections in the Device Drivers, Features, and Commands book.

Wednesday, June 30, 2021

Webinar: 2021 Linux on IBM Z and LinuxONE Technical Client Workshop

Join us for the 2021 Linux on IBM Z and LinuxONE Virtual Client Workshop!

Abstract

Get the latest news about the Linux exploitation and advantages of the IBM Z and LinuxONE platform in this technical workshop. Presented by our developers and solution architects, the training focuses on the latest news and technical information for Linux on IBM Z, LinuxONE, z/VM, and KVM, such as Red Hat OpenShift Container Platform, Red Hat OpenShift Container Storage, Security, Performance, Networking and Virtualization. You will have the chance to interact directly with IBM developers and solution experts during the event, especially in the interactive workgroup sessions, which will be held on the last day.

This workshop is free of charge.

Agenda Highlights
  • What's New on RHOCP on IBM Z & LinuxONE 
  • Hybrid Cloud and why RHOCP on IBM Z & LinuxONE can enable highest flexibility
  • Introduction of Red Hat OpenShift Container Storage
  • Red Hat OpenShift Container Platform on IBM Z & LinuxONE: CPU Consumption Demystified
  • Cloud Ready Development: profit from multi-architecture capabilities and several features in RHOCP on IBM Z
  • FUJITSU Enterprise Postgres: Finally! An OCP-certified Database for Linux on IBM Z and LinuxONE that exploits our hardware capabilities
  • Reduce your IT costs with IBM LinuxONE
  • How IBM Cloud Paks drive business value and lower IT costs
  • z/VM Platform Update
  • Linux and KVM on IBM Z and LinuxONE - What's New
  • kdump - Recommendations for Linux on IBM Z and LinuxONE
  • Elasticsearch on IBM Z - Performance Experiences, Hints and Tips
  • Crypto Update
  • Fully homomorphic encryption Introduction and Update
  • Putting SMC-Dv2 to work
  • Java on IBM Z - News, Updates, and other Pulp Fiction
  • Various workgroup sessions

Schedules & Registration

Americas, Europe, Middle East & Africa
July 12-16, every day 8:30 - 11:30 AM EST / 14:30 - 17:30 CET
Register here.

Asia Pacific
July 27-29, 2021, every day 8:30 - 11:30 AM CET / 2:30 - 5:30 PM Singapore time
Register here.

Friday, June 25, 2021

SLES 15 SP3 Released

SUSE Linux Enterprise Server 15 SP3 is out! See the official announcement and the release notes. It provides

  • QEMU v5.2, supporting virtio-fs on IBM Z
  • libvirt v7.1
For a detailed list of IBM Z and LinuxONE-specific (non-KVM) features see here.

Monday, May 31, 2021

RHEL 8.4 Released

RHEL 8.4 is out! See the official announcement and the release notes.

KVM is supported via Advanced Virtualization, and provides

  • QEMU v5.2, supporting virtio-fs on IBM Z
  • libvirt v7.0

Furthermore, RHEL 8.4 now supports graphical installation for guest installs. Just add the --input, --graphics, and --video arguments shown below to your virt-install command line for an RHEL 8.4 install on an RHEL 8.4 KVM host:

    virt-install --input keyboard,bus=virtio --input mouse,bus=virtio \
      --graphics vnc --video virtio \
      --disk size=8 --memory 2048 --name rhel84 \
      --cdrom /var/lib/libvirt/images/RHEL-8.4.0-20210503.1-s390x-dvd1.iso

And the installation will enter the fancy graphical installer.

Make sure to have package virt-viewer installed on the host, and X forwarding enabled (option -X for ssh).
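To reconnect to the graphical console later on, virt-viewer can be pointed at the guest directly; a small sketch, using the guest name from the example above:

  # open the graphical console of the running guest (over the X-forwarded session)
  $ virt-viewer rhel84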

This new support also allows graphical installs started in cockpit.

Tuesday, April 6, 2021

Webinar: Red Hat OpenShift for IBM Z and LinuxONE on RHEL 8.3 KVM

Join us for our webinar on Wednesday, April 21, 11:00 AM - 12:00 PM EST!

Abstract

Red Hat OpenShift is available on RHEL 8.3 KVM starting with Red Hat OpenShift version 4.7 on IBM Z and LinuxONE. We discuss the deployment of a Red Hat OpenShift Cluster on RHEL KVM from a high-level perspective, including supported configurations and requirements, especially the available network and storage options.
Furthermore, we explain the installation steps of Red Hat OpenShift 4.7 on RHEL KVM in detail, including best practices and a short excursion on cluster debugging.

Speakers

  • Dr. Wolfgang Voesch, Iteration Manager - OpenShift on IBM Z and LinuxONE
  • Holger Wolf, Product Owner - OpenShift on Linux on IBM Z and LinuxONE

Registration

Register here. You can check the system requirements here.
After registering, you will receive a confirmation email containing information about joining the webinar.

Replay & Archive

All sessions are recorded. For the archive as well as a replay and handout of this session and all previous webinars see here.

Wednesday, March 24, 2021

Installing Red Hat OpenShift on KVM on Z

While there is no documentation on how to install Red Hat OCP on Linux on Z with a static IP under KVM today, the instructions here will get you almost there. However, there are a few parts within the section Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines that require attention. Here is an updated version that will get you through:
 
4. You can use an empty QCOW2 image: Using the prepared one will also work, but it will be overwritten anyway.

5. Start the guest with the following modified command-line:
  $ virt-install --noautoconsole \
      --boot kernel=/bootkvm/rhcos-4.7.0-s390x-live-kernel-s390x,initrd=/bootkvm/rhcos-4.7.0-s390x-live-initramfs.s390x.img,kernel_args='rd.neednet=1 dfltcc=off coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url=https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.7/4.7.0/rhcos-4.7.0-s390x-live-rootfs.s390x.img coreos.inst.ignition_url=http://192.168.5.106:8080/ignition/bootstrap.ign ip=192.168.5.11::192.168.5.1:24:bootstrap-0.pok-241-macvtap-mars.com::none nameserver=9.1.1.1' \
      --connect qemu:///system \
      --name bootstrap-0 \
      --memory 16384 \
      --vcpus 8 \
      --disk /home/libvirt/images/bootstrap-0.qcow2 \
      --accelerate \
      --import \
      --network network=macvtap-mv1 \
      --qemu-commandline="-drive if=none,id=ignition,format=raw,file=/bootkvm/bootstrap.ign,readonly=on -device virtio-blk,serial=ignition,drive=ignition"

Note the following changes:

  • Use the live installer kernel, initrd (you can get them from the Red Hat mirror) and parmline (which you need to create yourself once for each guest) in the --boot parameter. This is basically like installing on z/VM, and will write the image to your QCOW2 image with the correct static IP configuration. Keep in mind that the ignition file needs to be provided by an HTTP(S) server for this method to work.
  • dfltcc=off is required for IBM z15 and LinuxONE III

6. To restart the guest later on, you will need to change the guest definition to boot from the QCOW2 image.
When the kernel parameters are passed into the installer, the domain XML will look like this once the guest is installed and running:
  <os>
    <type arch='s390x' machine='s390-ccw-virtio-rhel8.2.0'>hvm</type>
    <kernel>/bootkvm/rhcos-4.7.0-s390x-live-kernel-s390x</kernel>
    <initrd>/bootkvm/rhcos-4.7.0-s390x-live-initramfs.s390x.img</initrd>
    <cmdline>rd.neednet=1 dfltcc=off coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url=https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/4.7/4.7.0/rhcos-4.7.0-s390x-live-rootfs.s390x.img coreos.inst.ignition_url=http://192.168.5.106:8080/ignition/worker.ign ip=192.168.5.49::192.168.5.1:24:worker-1.pok-241-macvtap-mars.com::none nameserver=1.1.1.1</cmdline>
    <boot dev='hd'/>
  </os>

However, this domain XML still points at the installation media, hence a reboot will not work (it will merely restart the installation).
Remove the <kernel>, <initrd>, <cmdline> elements, so that all that is left is the following:
  <os>
    <type arch='s390x' machine='s390-ccw-virtio-rhel8.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>

With this, the guest will start successfully.
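One way to apply this change is with virsh; a short sketch, using the bootstrap guest from the example above:

  # edit the domain XML in place (remove <kernel>, <initrd>, <cmdline>), then restart
  $ virsh edit bootstrap-0
  $ virsh start bootstrap-0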

 [Content contributed by Alexander Klein]

Wednesday, February 24, 2021

Red Hat OpenShift Container Platform 4.7 Released

Red Hat OCP 4.7 is out!

Among others, it adds support for KVM on Z as provided by RHEL 8.3 as the hypervisor for user-provisioned infrastructure.

See here for the full list of IBM Z-specific changes and improvements.

Tuesday, January 5, 2021

QEMU v5.2 released [UPDATE Feb 24, 2021]

QEMU v5.2 is out. Highlights from a KVM on Z perspective:

  • PCI passthrough support now extends beyond RoCE Express cards to other PCI devices, e.g. NVMe devices. However, ISM devices as needed for SMC-D require extra support and cannot be used at this point. (A libvirt hostdev sketch follows after this list.)
  • virtiofs support via virtio-fs-ccw: a shared file system allowing KVM guests to access host directories.
    Use cases:
    • Container image access in lightweight VMs (e.g. in Kata Containers)
    • CI/CD and development enablement
    • Filesystem as a service, to easily switch backends
    To use, define the following in the guest's domain XML on the host:
      <domain>
        <memoryBacking>
          <access mode='shared'/>
        </memoryBacking>
        <devices>
          <filesystem type='mount'
                accessmode='passthrough'>
            <driver type='virtiofs'/>
            <source dir='/<hostpath>'/>
            <target dir='mount_tag'/>
          </filesystem>
          ...
        </devices>
        ...
      </domain>

    Then mount in guests as follows:
      # mount -t virtiofs mount_tag /mnt/<path>
    Requires Linux kernel 5.4 and libvirt v7.0.
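
As a PCI passthrough illustration, a guest definition references the host PCI function with a standard libvirt hostdev element; a minimal sketch, with a hypothetical host address (the real address can be taken from lspci on the host):

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
    </source>
  </hostdev>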

For further details, see the Release Notes.

UPDATE: A previous version had falsely listed ISM devices as supported.