CHARON-VAX for Linux DSSI cluster

Introduction

This section describes how to configure a DSSI cluster in CHARON-VAX for Linux.

General description

The DSSI storage subsystem for the CHARON VAX 4000 106, 108, 700 and 705 models is based on the emulation of "SHAC" host adapters. Routing of SCS cluster information among the emulated "SHAC" host adapters of multiple nodes is done via separate TCP/IP links.

The DSSI storage subsystem is functionally emulated and operates at a much higher throughput than the original hardware. Connection to physical DSSI hardware is neither possible nor planned for future releases.

The current version of DSSI emulation for CHARON-VAX supports up to 3 VAX nodes in a virtual DSSI cluster and handles a maximum cluster size of 8 nodes. A single virtual DSSI network supports up to 256 storage elements.

Configuration steps

To create a CHARON-VAX DSSI cluster, the following elements must be configured:

  1. "SHAC" host adapter

  2. "HSD50" storage controller

DSSI hardware topology is emulated by establishing TCP/IP channels between the emulated SHAC host adapters of each CHARON-VAX system. The emulated HSD50 storage controllers are then connected to every SHAC host adapter in the virtual DSSI network.

Cluster operation requires (virtual) disks that are simultaneously accessible by all CHARON-VAX nodes involved. This can be implemented, for instance, with a properly configured iSCSI initiator / target structure or a Fibre Channel storage back-end. Disks on a multiport SCSI switch are not acceptable, as a SCSI switch does not provide true simultaneous access to multiple nodes.

Steps to configure a DSSI cluster:

  1. Set a unique ID for the SHAC controller of each node

  2. Configure the preloaded SHAC adapter PAA

  3. Load the HSD50 storage controller

  4. Set the SCS system ID and allocation class

  5. Configure mapping to the system resources
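Taken together, these steps map onto the configuration statements used in the examples below. A minimal single-node sketch (the remote host name, port numbers, IDs and disk path are placeholders to be adapted per node):

...
# Steps 1-2: parameters of the preloaded PAA SHAC adapter; each node's SHAC
# must have a unique ID, and the TCP/IP links describe the other cluster members
set PAA port[2]=11012 host[2]="remote-host:11021"

# Steps 3-4: load the HSD50 storage controller and set its cluster identity
load HSD50 DISKS dssi_host=PAA dssi_node_id=3
set DISKS scs_system_id=3238746238 mscp_allocation_class=1

# Step 5: map the controller to disk images shared by all nodes
set DISKS container[0]="/mnt/share/shared-disk-0.vdisk"
...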

 

It is advisable to start any field test by implementing the cluster examples provided below.

Example 1: Dual node DSSI cluster with 4 shared disks

To set up two emulated VAX 4000 Model 108 nodes, we need two host machines, preferably running the same version of Linux.

Assume that these host systems have network host names CASTOR and POLLUX in the host TCP/IP network.

The following are CHARON-VAX configuration files for the emulated VAX 4000 Model 108 nodes running on CASTOR and POLLUX:

 

CASTOR node

...
set PAA port[2]=11012 host[2]="pollux:11021"

load HSD50 DISKS dssi_host=PAA dssi_node_id=3

set DISKS scs_system_id=3238746238 mscp_allocation_class=1

set DISKS container[0]="/mnt/share/dua0-rz24-vms-v6.2.vdisk"
set DISKS container[1]="/mnt/share/dua1-rz24-vms-v6.2.vdisk"
set DISKS container[2]="/mnt/share/dua2-rz24-vms-v6.2.vdisk"
set DISKS container[3]="/mnt/share/dua3-rz24-vms-v6.2.vdisk"
...

 

POLLUX node

...
set PAA port[1]=11021 host[1]="castor:11012"

load HSD50 DISKS dssi_host=PAA dssi_node_id=3

set DISKS scs_system_id=3238746238 mscp_allocation_class=1

set DISKS container[0]="/mnt/share/dua0-rz24-vms-v6.2.vdisk"
set DISKS container[1]="/mnt/share/dua1-rz24-vms-v6.2.vdisk"
set DISKS container[2]="/mnt/share/dua2-rz24-vms-v6.2.vdisk"
set DISKS container[3]="/mnt/share/dua3-rz24-vms-v6.2.vdisk"

...

 

Let's review both configurations step-by-step.

  1. The first line of both configuration files establishes parameters for the preloaded "PAA" SHAC host adapter. Only 2 parameters of SHAC are important for us in this situation: "port" (the TCP port on which this node listens for the incoming connection of a given remote node) and "host" (the remote node's host name and listening TCP port, in "host:port" form).

    Thus, CASTOR connects to POLLUX's port 11021 and listens for POLLUX's connection on port 11012, while POLLUX connects to CASTOR's port 11012 and listens for CASTOR's connection on port 11021.
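The same scheme extends to a third node, up to the supported maximum of three VAX nodes per virtual DSSI cluster. A sketch under the assumption that the index of "port"/"host" refers to the remote node, with a hypothetical third host named HELENA and arbitrarily chosen port numbers:

# On CASTOR:
set PAA port[2]=11012 host[2]="pollux:11021" port[3]=11013 host[3]="helena:11031"
# On POLLUX:
set PAA port[1]=11021 host[1]="castor:11012" port[3]=11023 host[3]="helena:11032"
# On HELENA:
set PAA port[1]=11031 host[1]="castor:11013" port[2]=11032 host[2]="pollux:11023"

In this sketch each node listens on one dedicated port per remote node and opens one outgoing connection per remote node, so every pair of nodes is linked by its own pair of TCP/IP channels.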
     

  2. The second and third lines of both configuration files load the "DISKS" HSD50 storage controller and set its parameters: "dssi_host" attaches the controller to the "PAA" SHAC adapter, "dssi_node_id" assigns its DSSI node number, "scs_system_id" sets its SCS system identification, and "mscp_allocation_class" sets its MSCP disk allocation class. Both nodes address the same virtual storage controller, which is why these values are identical in both configuration files.

  3. The next lines map the "DISKS" HSD50 storage controller to disk images shared between both hosts. The "container" parameter is used for this purpose. This example assumes that all disk images are accessible from both host machines via a network share (NFS, SAMBA) or some similar mechanism.
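For instance, the /mnt/share path used above could be an NFS export mounted on both CASTOR and POLLUX. A hypothetical /etc/fstab entry for both hosts (the server name "filer" and the export path are assumptions, not part of this example):

filer:/export/vdisks  /mnt/share  nfs  defaults  0  0

Any other mechanism works as well, as long as every node has simultaneous read/write access to the same disk images.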



© Stromasys, 1999-2025  - All the information is provided on the best effort basis, and might be changed anytime without notice. Information provided does not mean Stromasys commitment to any features described. 