CHARON-VAX for Linux CI cluster
Introduction
This section describes how to configure a CHARON-VAX for Linux CI cluster.
General description
The virtual CIXCD is the functional equivalent of a hardware CIXCD host adapter, with the exception that there is no physical layer to connect to a hardware CI infrastructure. Since the current host hardware is much faster than the physical CI implementation, such a connection - if it were possible - would greatly limit the virtual system throughput.
For data storage, the CIXCD connects to one or more virtual HSJ50 controllers, which are loaded as separate components in the configuration file. To configure VAX CI clusters, the virtual CIXCDs of the multiple CHARON-VAX/66X0 instances are interconnected via TCP/IP links.
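As a minimal illustration, the two components might be declared in a configuration file as follows; the device names "PAA" and "DISKS" match the examples later in this document, but the exact syntax should be checked against the CHARON-VAX configuration reference:

```
load CIXCD PAA      # virtual CI host adapter
load HSJ50 DISKS    # virtual HSJ50 storage controller on the same virtual CI
```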
Configuring (large) virtual VAX CI clusters involves many configuration parameters and requires an identical, replicated definition of the shared virtual HSJ50 storage controllers in each virtual VAX instance.
The current CI implementation for CHARON-VAX/66x0 supports up to 8 VAX nodes in a virtual CI cluster and handles a maximum cluster size of 128 nodes. A single virtual CI network supports up to 256 storage elements.
More details on CI configuration are available in the CHARON-VAX documentation.
Configuration steps
To create a CHARON-VAX CI cluster, both of the following elements must be configured:
"CIXCD" host adapter
"HSJ50" storage controller
CI hardware topology is emulated by establishing TCP/IP channels between the emulated CIXCD host adapters of each CHARON-VAX system. The emulated HSJ50 storage controllers are then connected to every CIXCD host adapter in the virtual CI network.
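For instance, in a three-node cluster each node would declare one listen-port/remote-host pair per peer. A hypothetical fragment for one node follows; the index convention, host names, and port numbers are illustrative assumptions:

```
load CIXCD PAA
set PAA port[1]=11012 host[1]="nodeb:11021"   # TCP/IP link to node B
set PAA port[2]=11013 host[2]="nodec:11031"   # TCP/IP link to node C
```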
Cluster operation requires (virtual) disks that are simultaneously accessible by all CHARON-VAX nodes involved. This can be implemented, for instance, with a properly configured iSCSI initiator/target structure or a Fibre Channel storage back-end. Disks on a multiport SCSI switch are not acceptable, because a SCSI switch does not provide true simultaneous access for multiple nodes.
It is advisable to start any field test by implementing the cluster examples provided below.
Example 1: Dual node CI cluster with 4 shared disks
To set up two emulated VAX 6610 nodes, we need two host machines, preferably running the same version of Linux.
Assume that these host systems have network host names CASTOR and POLLUX in the host TCP/IP network.
The following are CHARON-VAX configuration files for the emulated VAX 6610 nodes running on CASTOR and POLLUX:
CASTOR node

POLLUX node
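The original configuration listings are not reproduced above. Purely as a hypothetical sketch built from the port numbers and disk layout discussed below, the two files could look roughly like this; the parameter names (`port[n]`, `host[n]`, `scs_system_id`, `container[n]`), the system ID value, and the disk image paths are illustrative assumptions, so the actual syntax must be taken from the CHARON-VAX configuration reference:

```
# CASTOR node (hypothetical sketch)
load CIXCD PAA
set PAA port[1]=11012 host[1]="pollux:11021"      # listen on 11012, connect to POLLUX:11021
load HSJ50 DISKS
set DISKS scs_system_id=3238746238                # must be identical on both nodes (assumed value)
set DISKS container[0]="/mnt/shared/dua0.vdisk"   # shared disk images on a network share
set DISKS container[1]="/mnt/shared/dua1.vdisk"
set DISKS container[2]="/mnt/shared/dua2.vdisk"
set DISKS container[3]="/mnt/shared/dua3.vdisk"
```

```
# POLLUX node (hypothetical sketch)
load CIXCD PAA
set PAA port[1]=11021 host[1]="castor:11012"      # listen on 11021, connect to CASTOR:11012
load HSJ50 DISKS
set DISKS scs_system_id=3238746238                # identical HSJ50 definition replicated here
set DISKS container[0]="/mnt/shared/dua0.vdisk"
set DISKS container[1]="/mnt/shared/dua1.vdisk"
set DISKS container[2]="/mnt/shared/dua2.vdisk"
set DISKS container[3]="/mnt/shared/dua3.vdisk"
```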
Let's review both configurations step-by-step.
The first two lines of both configuration files load the "PAA" CIXCD host adapter and set its parameters. Only a few CIXCD parameters are important in this situation: for each remote node, the TCP/IP port on which the local adapter listens and the host (and port) of the remote adapter to which it connects.
Thus, CASTOR connects to POLLUX's port 11021 and listens for POLLUX's connection on port 11012, while POLLUX connects to CASTOR's port 11012 and listens for CASTOR's connection on port 11021.
The third and fourth lines of both configuration files load the "DISKS" HSJ50 storage controller and set its parameters.
The next lines map the "DISKS" HSJ50 storage controller to disk images shared between both hosts, using the "container" parameter. This example assumes that all disk images are accessible from both host machines, for example via a network share (NFS, SAMBA) or a similar mechanism.
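One way to satisfy this, sketched here assuming an NFS server named `fileserver` exporting `/export/ci-disks` (both names hypothetical), is to mount the share at the same path on CASTOR and POLLUX, e.g. via `/etc/fstab`, and point every "container" parameter at that path:

```
# /etc/fstab entry, identical on CASTOR and POLLUX (hypothetical names and paths)
fileserver:/export/ci-disks  /mnt/shared  nfs  defaults  0  0
```

```
set DISKS container[0]="/mnt/shared/dua0.vdisk"   # same path resolves on both hosts
```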
© Stromasys, 1999-2025 - All the information is provided on the best effort basis, and might be changed anytime without notice. Information provided does not mean Stromasys commitment to any features described.