CHARON-VAX for Linux CI cluster



Introduction

This section describes how to configure a CHARON-VAX for Linux CI cluster.

General description

The virtual CIXCD is the functional equivalent of a hardware CIXCD host adapter, with the exception that there is no physical layer to connect to a hardware CI infrastructure. Since the current host hardware is much faster than the physical CI implementation, such a connection - if it were possible - would greatly limit the virtual system throughput.

For data storage, the CIXCD connects to one or more virtual HSJ50 controllers that are loaded as separate components in the configuration file. To configure VAX CI clusters, the virtual CIXCDs of the multiple CHARON-VAX/66X0 instances are interconnected via TCP/IP links.
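The fragment below is only a sketch of that relationship; the device name "PAA", the node IDs and the disk image path are illustrative (complete, working configurations are given in Example 1 below):

# The CI host adapter and the storage controller are loaded as separate components;
# ci_host ties the HSJ50 to the CIXCD through which it is reachable.
load CIXCD PAA ci_node_id=1
load HSJ50 DISKS ci_host=PAA ci_node_id=101
set DISKS container[0]="/mnt/share/dua0.vdisk"   # one virtual disk served by the HSJ50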

Configuring (large) virtual VAX CI clusters requires many configuration parameters and a replicated, identical definition of the shared virtual HSJ50 storage controllers in each virtual VAX instance. It is therefore advisable to start any field test by implementing the cluster examples provided below.

The current CI implementation for CHARON-VAX/66x0 supports up to 8 VAX nodes in a virtual CI cluster and handles a maximum cluster size of 128 nodes. A single virtual CI network supports up to 256 storage elements.

For more details on CI configuration, follow this link.

Configuration steps

To create a CHARON-VAX CI cluster, both of the following elements must be configured:

  1. "CIXCD" host adapter

  2. "HSJ50" storage controller

CI hardware topology is emulated by establishing TCP/IP channels between the emulated CIXCD host adapters of each CHARON-VAX system. The emulated HSJ50 storage controllers are then connected to every CIXCD host adapter in the virtual CI network.
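As a sketch of one such channel (host names and port numbers are placeholders): for every remote CI node "n", a node defines port[n], the local TCP/IP port on which it listens, and host[n], the remote host and port to which it connects. The index appears to correspond to the CI node id of the remote system, as in Example 1 below.

# Node A (ci_node_id=1) listens on port 11012 for node 2 and connects to node 2's listener:
set PAA port[2]=11012 host[2]="hostb:11021"

# Node B (ci_node_id=2) is the mirror image of node A's definition:
set PAA port[1]=11021 host[1]="hosta:11012"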

Cluster operation requires (virtual) disks that are simultaneously accessible by all CHARON-VAX nodes involved. This can be implemented, for instance, by using a properly configured iSCSI initiator/target structure or a Fibre Channel storage back-end. Disks on a multiport SCSI switch are not acceptable, as a SCSI switch does not provide true simultaneous access from multiple nodes.
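For example, if the disk images reside on an NFS export mounted at the same path on every host, the container lines can be identical everywhere. The mount point below is a placeholder; this is only a sketch, and the full listings follow in Example 1:

# Assumed: /mnt/share is the same shared mount (NFS, iSCSI-backed file system, etc.)
# on every participating host, so this line is valid, unchanged, on each node.
set DISKS container[0]="/mnt/share/dua0-rz24-vms-v6.2.vdisk"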


Example 1: Dual node CI cluster with 4 shared disks 

To set up two emulated VAX 6610 nodes, we need two host machines, preferably running the same version of Linux.

Assume that these host systems have network host names CASTOR and POLLUX in the host TCP/IP network.

The following are CHARON-VAX configuration files for the emulated VAX 6610 nodes running on CASTOR and POLLUX:

 

CASTOR node

...
load CIXCD PAA ci_node_id=1

set PAA port[2]=11012 host[2]="pollux:11021"

load HSJ50 DISKS ci_host=PAA ci_node_id=101

set DISKS scs_system_id=3238746238 mscp_allocation_class=1

set DISKS container[0]="/mnt/share/dua0-rz24-vms-v6.2.vdisk"
set DISKS container[1]="/mnt/share/dua1-rz24-vms-v6.2.vdisk"
set DISKS container[2]="/mnt/share/dua2-rz24-vms-v6.2.vdisk"
set DISKS container[3]="/mnt/share/dua3-rz24-vms-v6.2.vdisk"
...

 

POLLUX node

...
load CIXCD PAA ci_node_id=2

set PAA port[1]=11021 host[1]="castor:11012"

load HSJ50 DISKS ci_host=PAA ci_node_id=101

set DISKS scs_system_id=3238746238 mscp_allocation_class=1

set DISKS container[0]="/mnt/share/dua0-rz24-vms-v6.2.vdisk"
set DISKS container[1]="/mnt/share/dua1-rz24-vms-v6.2.vdisk"
set DISKS container[2]="/mnt/share/dua2-rz24-vms-v6.2.vdisk"
set DISKS container[3]="/mnt/share/dua3-rz24-vms-v6.2.vdisk"
...

 

Let's review both configurations step by step.

  1. The first two lines of both configuration files load and establish parameters for the "PAA" CIXCD host adapter. Only 3 CIXCD parameters are important in this situation:

     - ci_node_id - the CI node number of the virtual VAX system on the virtual CI network (unique per node).

     - port[n] - the TCP/IP port on which this node listens for the connection from CI node "n".

     - host[n] - the host name and TCP/IP port to which this node connects in order to reach CI node "n".

     Thus, CASTOR connects to POLLUX's port 11021 and listens for POLLUX's connection on port 11012, while POLLUX connects to CASTOR's port 11012 and listens for CASTOR's connection on port 11021. (The sketch after this list shows how the same pattern extends to a hypothetical third node.)

  2. The third and fourth lines of both configuration files load the "DISKS" HSJ50 storage controller and set its parameters:

     - ci_host - the name of the emulated CIXCD host adapter ("PAA") through which the controller is reachable.

     - ci_node_id - the CI node number of the emulated HSJ50 controller itself.

     - scs_system_id and mscp_allocation_class - identification parameters of the storage controller.

     Since the "DISKS" controller is shared, its definition - including ci_node_id, scs_system_id and mscp_allocation_class - must be identical in the configuration files of all participating nodes, as it is in the CASTOR and POLLUX files above.

  3. The next lines map the "DISKS" HSJ50 storage controller to disk images shared between both hosts; the "container" parameter is used for this purpose. This example assumes that all disk images are accessible from both host machines via a network share (NFS, Samba) or some other mechanism providing simultaneous access.
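To illustrate how this scheme scales, the fragment below sketches the additional lines CASTOR's configuration file would need if a hypothetical third node (host name TRITON, ci_node_id=3) joined the cluster. The host name and port numbers are placeholders; TRITON itself would need matching port[1]/host[1] and port[2]/host[2] lines pointing back at CASTOR and POLLUX, POLLUX would gain an analogous port[3]/host[3] line, and every node would carry an unchanged copy of the "DISKS" definition:

# CASTOR's CIXCD definition gains one port/host pair per additional node:
load CIXCD PAA ci_node_id=1
set PAA port[2]=11012 host[2]="pollux:11021"    # existing link to POLLUX (node 2)
set PAA port[3]=11013 host[3]="triton:11031"    # hypothetical link to TRITON (node 3)

# The shared HSJ50 definition stays identical on CASTOR, POLLUX and TRITON:
load HSJ50 DISKS ci_host=PAA ci_node_id=101
set DISKS scs_system_id=3238746238 mscp_allocation_class=1
set DISKS container[0]="/mnt/share/dua0-rz24-vms-v6.2.vdisk"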


