CHARON-VAX for Windows CI cluster


Introduction

This section describes how to configure a CHARON-VAX CI cluster on Windows hosts.

General description

A virtual CIXCD is the functional equivalent of a hardware CIXCD host adapter, with the exception that there is no physical layer to connect to a hardware CI infrastructure. Modern host hardware is much faster than the physical CI implementation; even if such a connection were possible, it would limit the throughput of the virtual system.

For data storage, the CIXCD connects to one or more virtual HSJ50 controllers that are loaded as separate components in the configuration file. To configure VAX CI clusters, the virtual CIXCDs of the multiple CHARON virtual machines (VMs) are interconnected via TCP/IP links.

Configuring a virtual CI cluster involves many configurable parameters, and these parameters must be identical on all participating servers.

The current CI implementation for CHARON-VAX/66x0 supports up to 8 VAX VMs in a virtual CI cluster and handles a maximum cluster size of 128 nodes. A single virtual CI network supports up to 256 storage elements.

For more details, refer to the CI configuration documentation.

Configuration steps

To create a CHARON-VAX CI cluster, both of the following elements must be configured:

  1. "CIXCD" host adapter

  2. "HSJ50" storage controller

CI hardware topology is emulated by establishing TCP/IP channels between the emulated CIXCD host adapters of each CHARON VM. The emulated HSJ50 storage controllers are then connected to every CIXCD host adapter in the virtual CI network.

Cluster operation requires that the (virtual) disks are simultaneously accessible to all CHARON VMs involved. This can be implemented, for instance, with a properly configured iSCSI initiator/target structure or a Fibre Channel storage back-end. Disks on a multiport SCSI switch are not acceptable, because a SCSI switch does not provide true simultaneous access from multiple nodes.

It is advisable to start any field test by implementing the cluster examples provided below.

 

Example 1: Dual node CI cluster with 4 shared disks

This example requires two host machines, preferably running the same version of Windows, each running one CHARON VM.

Assume that these host systems have network host names CASTOR and POLLUX in the host TCP/IP network.

The following are CHARON VM configuration files for the emulated VAX 6610 nodes running on CASTOR and POLLUX:

CASTOR node

...

load CIXCD PAA ci_node_id=1

set PAA port[2]=11012 host[2]="pollux:11021"

load HSJ50 DISKS ci_host=PAA ci_node_id=101

set DISKS scs_system_id=3238746238 mscp_allocation_class=1

set DISKS container[0]="\\DiskServer\Share\dua0-rz24-vms-v6.2.vdisk"
set DISKS container[1]="\\DiskServer\Share\dua1-rz24-vms-v6.2.vdisk"
set DISKS container[2]="\\DiskServer\Share\dua2-rz24-vms-v6.2.vdisk"
set DISKS container[3]="\\DiskServer\Share\dua3-rz24-vms-v6.2.vdisk"
...

POLLUX node

...

load CIXCD PAA ci_node_id=2

set PAA port[1]=11021 host[1]="castor:11012"

load HSJ50 DISKS ci_host=PAA ci_node_id=101

set DISKS scs_system_id=3238746238 mscp_allocation_class=1

set DISKS container[0]="\\DiskServer\Share\dua0-rz24-vms-v6.2.vdisk"
set DISKS container[1]="\\DiskServer\Share\dua1-rz24-vms-v6.2.vdisk"
set DISKS container[2]="\\DiskServer\Share\dua2-rz24-vms-v6.2.vdisk"
set DISKS container[3]="\\DiskServer\Share\dua3-rz24-vms-v6.2.vdisk"
... 

Let's review both configurations step-by-step.

  1. The first two lines of both configuration files load the "PAA" CIXCD host adapter and set its parameters. Only 3 CIXCD parameters are important in this situation: "ci_node_id" (the CI node ID of this adapter), "port[N]" (the local TCP/IP port on which this node listens for the connection from CI node N), and "host[N]" (the host name and TCP/IP port this node connects to in order to reach CI node N).

    Thus, CASTOR connects to POLLUX's port 11021 and listens for POLLUX's connection on port 11012, while POLLUX connects to CASTOR's port 11012 and listens for CASTOR's connection on port 11021.

     

  2. The third and fourth lines of both configuration files load the "DISKS" HSJ50 storage controller and set its parameters. Since both VMs access the same shared storage controller, its parameters ("ci_node_id", "scs_system_id" and "mscp_allocation_class") are identical in both files.

  3. The next lines demonstrate the mapping of the "DISKS" HSJ50 storage controller to the disk images shared between both hosts. The "container" parameter is used for this purpose. This example assumes that all disk images are accessible from both host machines via a network share or some other mechanism.
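The same pairing rule extends to larger clusters: for every pair of nodes, each side defines a matching "port[N]"/"host[N]" entry naming the other. As a purely illustrative sketch, a hypothetical third node HELENA with ci_node_id=3 could be configured as follows (the host name HELENA and all port numbers below are assumptions, not part of the example above):

load CIXCD PAA ci_node_id=3

set PAA port[1]=11031 host[1]="castor:11013"
set PAA port[2]=11032 host[2]="pollux:11023"

Correspondingly, CASTOR would add "set PAA port[3]=11013 host[3]="helena:11031"" and POLLUX would add "set PAA port[3]=11023 host[3]="helena:11032"", so that each node's listening port matches the port named in the peer's "host[N]" entry.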

Example 2: Triple node CI cluster with multiple iSCSI disks

In this example we assume that all three host systems have an iSCSI initiator and are connected to a common iSCSI server. The iSCSI disk server provides 8 virtual disks with R/W access on all hosts. These disks are configured as "\\.\PhysicalDrive0" ... "\\.\PhysicalDrive7" on each of the host machines.

The storage configuration must be identical on all three nodes. It is therefore recommended to describe the storage structure in a separate configuration file, stored on a common network share ("\\DiskServer\Share"), and included in each CHARON VM configuration file with the "include" instruction (in this example the file is named "disksets.cfg"):

load HSJ50 DISKS1 ci_host=PAA ci_node_id=4

set DISKS1 scs_system_id=3238746238 mscp_allocation_class=1

set DISKS1 container[1]="\\.\PhysicalDrive0"
set DISKS1 container[2]="\\.\PhysicalDrive1"
set DISKS1 container[3]="\\.\PhysicalDrive2"
set DISKS1 container[4]="\\.\PhysicalDrive3"

load HSJ50 DISKS2 ci_host=PAA ci_node_id=5

set DISKS2 scs_system_id=1256412654 mscp_allocation_class=2

set DISKS2 container[5]="\\.\PhysicalDrive4"
set DISKS2 container[6]="\\.\PhysicalDrive5"
set DISKS2 container[7]="\\.\PhysicalDrive6"
set DISKS2 container[8]="\\.\PhysicalDrive7"
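Each node's own configuration file then needs only its CIXCD adapter definition plus the shared storage description. A minimal sketch for one of the three nodes follows; the adapter name PAA, the peer host names "node2"/"node3", and the port numbers are illustrative assumptions following the pairing scheme of Example 1, not values prescribed by this example:

...

load CIXCD PAA ci_node_id=1

set PAA port[2]=11012 host[2]="node2:11021"
set PAA port[3]=11013 host[3]="node3:11031"

include "\\DiskServer\Share\disksets.cfg"

...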



© Stromasys, 1999-2025  - All the information is provided on the best effort basis, and might be changed anytime without notice. Information provided does not mean Stromasys commitment to any features described. 