Contents & Introduction


This page started life as an HTML coding challenge. The HTML source code for this site is maintained on a VAX/VMS system with the aid of the EDT editor. A small FTP script copies the text to the ZONNET server. The result is checked with the Mozilla browser that runs on an Alpha/VMS system. Netscape version 3.03 for VAX/VMS is still available, but its functionality is so limited that it is impractical to use. The alternative browser for all VMS platforms is Mosaic V3.7. It builds smoothly and runs reasonably well, even on a VAXstation 4000-90A (admittedly with 128 MB memory).


The collection is part of HECnet, an international network set up with systems that run an operating system which supports the DECnet protocol. DECnet is carried over the Internet either encapsulated in IP tunnels or by means of a nifty program that emulates a LANbridge 100. The program is written and maintained by Johnny Billquist.





The openVMS Operating System

VMS, the prefix "open" is silent, is an operating system that runs on three hardware architectures. The operating system became commercially available in 1978 and was sold by Digital Equipment Corporation. VMS shows its heritage from other operating systems of that same company, most notably RSX-11 and TOPS-10. The first hardware it ran on was the VAX processor, a family of minicomputer systems produced between 1978 and 2002. The slowest systems, like the microVAX I and the VAX 11/730 uniprocessors, ran at 0.3 VUPS (VAX Units of Processor Speed); the fastest uniprocessor machines ran at 60 VUPS. In 1992 VMS was ported to the 64 bit AXP, or Alpha, architecture. Each new model in the Alpha family was the fastest processor available on this planet when it was launched. The technical development of Alpha was stopped in 2001. By then Digital no longer existed; it was bought by Compaq in 1998. The Alpha remained in production however, and was expected to stay that way until 2007, after which it would be superseded by Intel's 64 bit processor family. Hewlett Packard currently owns the |d|i|g|i|t|a|l| heritage.
I64/VMS was booted successfully for the first time on 31 January 2003. The Interex conference held in Amsterdam (May 17-21) had a live, mixed architecture Alpha and I64 VMScluster running, a clear indication that the VMS port is well on its way. According to the VMS roadmap, AXP/VMS and I64/VMS will have the same functionality as soon as VMS V8.2 is released, which is expected mid 2004. Incidentally, a mixed VAX, Alpha and I64 cluster is not a supported configuration. However, since mixed VAX and AXP clusters are supported, as are AXP and I64 clusters, it did not come as a big surprise that a tri-architecture cluster did work. It is just not supported, like VMS clusters that exceed 96 nodes.




AXP/VMS

Digital Server 5305

The system on the right hand side is a Digital Server 5305. The main processor is a 64-bit Alpha that runs at 532 MHz. Originally marketed to run only Windows NT, this system was modified somewhat and now happily runs VMS V7.3 and a couple of layered products covered by licenses under the VMS Hobbyist program. The 5305 is internally the same as an Alpha Server 1200. It was the first Alpha in my collection. Just below the CD-ROM drive it is labeled AlphaPowered; a nice touch.


Digital Server 3000

In August 2003 a second Alpha was added to the collection. Again a "white box" system as the picture on the left shows, more specifically a Digital Server 3000. It came with the EV5 400 MHz processor and 256 MB main memory. The 3000 is the equivalent of the AlphaServer 800. Just like the 5305 it may be modified to run both VMS and Tru64. That modification is reported to work as well on the high end Digital Server 7305 (a.k.a. the AlphaServer 4100).

Compaq Professional Workstation

The third Alpha is a Compaq XP1000 Professional Workstation, with an EV6 processor that runs at 500 MHz and 512 MB main memory. It has both SCSI and IDE disks. IDE is considerably slower than SCSI but offers cheap large storage space.

Compared to the VAX, let alone the PDP-11, the Alpha processor shone brightly but with a limited lifespan. The first VAX was sold in 1978 and the last models left the factories in 2002. Remember that Compaq shortened the VAX production period by two years, so the PDP-11 will actually outlive its 32 bit successor because the hardware and operating systems are still produced and maintained by Mentec. Alpha's twelve year stay in product brochures is thus relatively short. This process was probably accelerated by HP's acquisition of Compaq. HP had decided to end its own proprietary platform development a couple of years earlier and started a technological alliance with Intel, which led to the development of the IA64 architecture. While buying Compaq made sense for more than one reason, Alpha was one of the pieces that actually went against HP's technological strategy. The decision to drop Alpha makes sense that way. Note that the EV7 Alpha outperforms current Itanium2 platforms with ease, and it is reasonable to assume that HP was not entirely happy with that situation and wanted to remove this embarrassment as quickly as possible from its catalogs. And that was just the EV7 design; the performance data of the improved EV8 will never be available.

Itanium, flaws and all, imaginary or not, is a good thing for VMS. Itanium systems are designed to use standard, commonly available parts, which means that the hardware that serves VMS is competitively priced. Compare that with the VAX: it uses proprietary memory which is dearly expensive, even today. VAX hardware is very reliable and performs well, but owing to its proprietary design it comes at a price. Itanium is intended for large scale marketing and uses standard hardware interfaces. Only the last VAX systems had hardware support for widely used standard SCSI peripherals. DEC designed and built peripherals were reliable but hardly outperformed the competition. Alpha changed all that: it uses standard memory and peripherals, and its keyboard, mouse and monitor are standard PC issue. Build quality may have suffered but prices went down considerably.

Itanium takes that one step further. Each Alpha cpu sold had to pay back the huge investments made in the design, and the problem is that neither DEC nor Compaq were ever able to sell enough of them. The Itanium will sell into high volume markets and those will pay back the development costs. So VMS will ride on a much more cost effective platform. This is even reflected in design decisions made for the I64/VMS port. The VMS compilers produce object files that follow the ELF format that is also used by open source compilers. That offers interesting possibilities to port software between operating systems that support that format. In fact, this feature enabled openVMS engineering to write and debug VMS components on Linux well before VMS itself ran on Itanium.

Most of the text about Itanium was written in early 2004. Whether Itanium will be a success remains to be seen. HP has dropped plans for Itanium based workstations and sales of Itanium equipped servers are low. Alpha may have been ahead of its time; Itanium, so it seems, is heading on a collision course with an iceberg.
In April 2010 Microsoft announced that Windows Server 2008 R2 will be the last supported version for Itanium. Microsoft never had more than a 5% market share of the Itanium business, so it is obvious what has driven this decision. This means that HP alone carries the burden of keeping Itanium alive.




VAX/VMS

The VAX is a 32 bit architecture. The main reason for Digital to develop a 32 bit system was memory address space, not arithmetic precision. Digital already had a processor to handle computations, the 36 bit PDP-10 and DECSYSTEM-20 series. The VAX can do that too, it has instructions for 64 and 128 bit data structures, but it uses all 32 bits for addressing memory, so it can handle up to 4 Gigabytes of main memory. The PDP-10 did not even come close, though designs that would have come close were killed once the VAX proved to be a commercial success. Only the PDP-11 remained in the catalog together with the VAX. In fact the PDP-11 outlives the far more powerful PDP-10, VAX and AXP models: the processor design is now owned by Mentec and still under active development. Mentec also maintains the PDP-11 operating systems RT-11 and RSX-11, and owns the rights to others, like RSTS/E.



VAXstation 4000-90; JPG courtesy Bjorn Berg

The picture shows the business end of a VAXstation 4000-90A. This is the main box without a monitor, keyboard and mouse or other peripherals attached. It looks exactly the same as the model 90, but its master clock runs at 83 MHz instead of 71 MHz. It runs VAX/VMS V7.3, covered by the same Montagar license mechanism as the Alphas.



A shared SCSI bus VAXcluster

Building clusters is a particularly interesting topic, because it allows for a degree of availability that is unique to VMS. This anecdote is focused on the type of interconnect that was used in a VAXcluster. VAX systems generally use the Cluster Interconnect (CI) bus, a proprietary bus called DSSI, or networks like ethernet and FDDI. The Alpha systems added ATM, gigabit ethernet, the SCSI bus, fibre channel and WAN backbones.
Now the VAX also supports SCSI, albeit just SCSI-2. Since SCSI-2 has no support for tagged command queuing, the question was not so much whether it would work, but rather what would break. So how would this unlikely, unsupported setup behave?

A cluster was set up with a VAXstation 3100-M48 and a microVAX 3100-10e. The first point of interest is the SCSI IDs. These must be unique, and DEC assigns ID 6 to the controller of a VAX by default. The controller of the 10e was therefore assigned ID 7.
The second issue was that both systems had to use the same SCSI bus, in this case the second bus, because both the bus and its peripherals must have the same name on all cluster members. The two VAX systems were connected to an expansion cabinet with one 200 MB RZ24 disk. This disk had the device name DKB300: on both VAX systems; it was the intended system disk.

The 10e was booted first to install VMS V7.3 on the RZ24. The only layered product installed was DECnet phase IV. The CLUSTER_CONFIG utility was used to prepare the 10e as a cluster member and next to create a second system root on the same system disk. The 10e was used to configure both system roots and to install the required licenses. The SYSGEN parameters that must be different on both systems were SCSNODE and SCSSYSTEMID. SYSGEN parameters that were modified and had to be the same on both nodes were ALLOCLASS, INTERCONNECT and BOOTNODE; the latter was set to "N".
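
For reference, a sketch of what the relevant MODPARAMS.DAT entries might look like. The node name and system id are made up, and the INTERCONNECT value is an assumption; the rest follows the text above:

	! MODPARAMS.DAT sketch for one of the two nodes (values illustrative)
	SCSNODE = "VAX10E"       ! must differ between the nodes
	SCSSYSTEMID = 1042       ! must differ between the nodes
	ALLOCLASS = 1            ! must be identical on both nodes
	INTERCONNECT = "SCSI"    ! identical on both nodes (value assumed)
	BOOTNODE = "N"           ! identical on both nodes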

The 10e was shut down and the next step was to boot the M48 from its own proper root. Care had to be taken to specify the correct boot command to select the desired system root. The boot sequence was >>> B/10000000 dkb300 (which reminded me of CI clusters) and VMS complained about a missing CXX library and a missing swapfile; perhaps other files as well, but that was all I could see on the console. MODPARAMS was copied from [sys0.sysexe] to [sys1.sysexe] and modified for the 3100-M48. Only two parameters were different; of course they occurred twice in MODPARAMS. Next AUTOGEN was run with the parameters GETDATA, SHUTDOWN and NOFEEDBACK. The swapfile and pagefile were missing and were created with SYS$UPDATE:SWAPFILES, followed by a reboot. The errors were gone.
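
In DCL those last steps would look something like this, a sketch based on the parameters named above:

	$ @SYS$UPDATE:AUTOGEN GETDATA SHUTDOWN NOFEEDBACK
	$ ! after the reboot, create the missing page and swap files:
	$ @SYS$UPDATE:SWAPFILES
	$ ! and reboot once more to pick them up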

Now came the interesting part: what would happen when the 10e was booted as well? At this point the 3100-10e was booted into the cluster. The result was quite amazing, because the 3100-10e booted without a problem. Only when the 3100-10e console was touched did the problems start: the system disk went into a mount/dismount state as soon as both systems tried to access it at the same time. The same happened when a second disk was mounted as a data disk with the /cluster qualifier. Each system could use that disk without a problem as long as the other system kept quiet. As soon as both systems required access to the same disk simultaneously, the hardware error count went up. At three to four errors per disk IO the error count for DKB300 went over 160 in a period of 15 minutes.

Conclusions:
1. This is not a useful configuration, as predicted by Hoff Hoffman and others from DEC engineering.
2. There is no concurrent use of disks.
3. Even a single user can force the systems to drop the cluster channel and thus into a cluster state transition.
4. You'd need a quorum disk on the shared bus to survive the loss of one computer, but shared access won't work.
5. The SCSI bus reports a huge number of errors in just a short time, indicating that the device driver is not really happy with this configuration.

Other than the conclusion that it obviously did not work, there is IMHO an important point: no disks got corrupted. Neither the data disk nor even the system disk was damaged during the experiment, short and limited as it was. I more or less expected to do a restore on the data disk but that proved unnecessary. The cluster was dissolved, each system booted from its own proper system disk, and neither the shared data disk nor the RZ24 that was used as system disk had data integrity problems. An ANALYZE/ERROR_LOG for both disks produced no errors.
The important conclusion is that if you value your data then VMS is your friend.







My collection

There are 21 VAXes (or VAXen :), 16 AXP's (or Alpha's) and one Integrity (or IA64) in my collection. All these systems are in working condition. Two VAXstations have a broken graphics board, but that does not affect their operation. Most of them run VMS, though one VAX system can also boot Ultrix-32 V4.4 and two Alphas run Tru64.
I did try NetBSD on VAX and Alpha, as well as Windows NT 4.0 and Linux on Alpha. None of them were used much and the disk space was returned to Files-11.
The first computer system that entered our home was a VAXstation 2000. That was in 1988 and it ran VMS V4.7 combined with the VMS Workstation Software (VWS). The performance was reasonable, especially compared to the competition, which at that time was the 80286. In 1996 and 1997 the VAXstations 3100 arrived. The Model 48 especially was a huge improvement in performance compared to the 2000.

The VAXstation 4000 systems arrived between July 2001 and February 2002. There were three of them, one VAXstation 4000-60 and two model 90A's. The VAXstation 4000-90A has a basic clock frequency of 83 MHz. These systems are among the fastest VAXes ever built and remain useful machines, even today. Only the VAX 4000-705A, the 6600 and some 4100's are faster. In fact my daughters still play games on the VAXstations 4000-90A!
The VAXstations had been powered off since 1999, when they were given to me (thanks Marcel!). The model 60 and the second 90A were switched on and booted without a hitch. That is Digital build quality for you. The first 90A seemed to suffer from an identity crisis: it would boot, but the processor type was not recognized by VMS. In turn VMS could not find a proper entry for it in the license management table, so it had no way of determining the appropriate license points or license type. Hence no license for VMS or its layered products was loaded, which makes the system next to useless other than for running unlicensed software on the console, which is no fun at all.
Once the second 90A arrived, a few months later, I compared the hardware configurations and while doing so removed several components from the malfunctioning unit. At the same time the system was cleaned. Memory and disks are easily removed and were swapped to and from the second, functional 90A. There was no obvious indication of a faulty component: the problem stayed with the original machine. So I put it back together and without much enthusiasm fired it up. To my delight it booted without any error messages and has behaved flawlessly ever since.
The video controller was apparently improperly connected to the mainboard. That may have been caused by dust, or perhaps it had been handled a little too hard. If the video controller is removed from a VAXstation 4000-90A then VMS is no longer able to identify the remaining hardware. VMS will run, provided that the processor is still running, but is unable to figure out the hardware model. Consequently it cannot find a valid entry in the license management database to figure out the appropriate license charges. As a result no licenses are loaded, not even those with 0 points, and the system is effectively useless, though you can log on at the console port (OPA0:) of course. This means that if the video controller breaks down, as one of them did, and there's no replacement available, then the broken unit is better left in place.

The model 60 and the 90A are very similar machines on the outside. Same box, main boards of the same size, and the systems even share certain components such as memory boards and video controllers. Remove the graphics controller from a VAXstation 4000-60, push switch S3 (located on the front panel) in the up position, connect an MMJ console cable from the serial port to a console, and the 60 will work; VMS will recognize it correctly as a VAXstation 4000-60. A VAXstation 4000-90A without a graphics controller loses its identity, so to speak: VMS will boot but fail to recognize the hardware it runs on, so neither VMS nor the installed layered products will find an entry in the license table and will not load, not even those with 0 license units. One of my 90A's has a bad graphics controller. Luckily it does not hang the hardware, so the system will still boot and run VMS. The annoying thing is that it does need a serial console, because the hardware failure won't let it boot automatically; unless the console parameter FBOOT is set to 1. The documentation says that FBOOT speeds up the boot process. A side effect (undocumented AFAIK) is that hardware errors are ignored and the system will attempt to boot automatically, provided that the HALT parameter is set to the appropriate value.
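
For reference, the console commands involved would look something like this. SET FBOOT comes straight from the text above; the HALT value of 2 is an assumption and should be checked against SHOW HALT on the machine at hand:

	>>> SET FBOOT 1
	>>> SET HALT 2
	>>> SHOW FBOOT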

Somehow it was very easy to obtain a VAXstation 4000 model 60, there are six of them right now in the collection!

Subsequent systems that arrived were another VAXstation 4000-90A with very little memory and two VAX 4000 model 100A systems with 128 MB each. The VAXstation 4000-90A has just 32 MB of main memory and is used as a backup store for the data disks of the primary VAXstation 4000-90A, viz. the system that maintains the content of this webpage.
The VAX 4000-100A is interesting because it has a DSSI bus as well as a SCSI-2 bus. The DSSI bus uses connectors for disks and tape drives that are the same as the well known 50 pin, narrow SCSI-2 connectors, but all similarity ends there. The DSSI devices all have an on-board controller and the bus behaves like the CI bus, but at lower speeds. Besides peripherals, the DSSI bus also connects systems and carries the SCS protocol for VMS clusters, which is another difference with two node SCSI clusters, where SCS traffic runs over the network: ethernet, ATM or FDDI. DSSI adapters are supported on VAX and Alpha. Connecting disks to a common DSSI bus in a two node VMS cluster opens the possibility to define one disk as a quorum disk. If one system is shut down then the quorum disk enters its own votes in the quorum calculation. Note that if the quorum disk is fitted inside the system cabinet that has been shut down and will be powered off for maintenance, then those votes will be lost as well. A quorum disk is best placed in a separate disk cabinet with separate, independent mains power. A hobby cluster does not need that of course...
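
The quorum arithmetic that makes this work is simple enough. A sketch for a two node cluster where each node and the quorum disk contribute one vote each:

	EXPECTED_VOTES = 1 + 1 + 1 = 3
	quorum = (EXPECTED_VOTES + 2) / 2 = 2     (integer division)

	both nodes up:              2 votes >= 2, cluster runs
	one node down, disk up:     2 votes >= 2, cluster survives
	one node down, disk gone:   1 vote  <  2, cluster hangs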

In early fall 2005 I bought two systems: a VAXstation 4000 VLC, which is probably the smallest VAX available, and a microVAX 3100-30. The VLC (also known as the VAXstation 4000 model 30) is the entry model of the VAXstation 4000 series. It is not very fast and 24 MB is fairly limited for VMS V7.3 and Motif. The design of the box, however, makes it an interesting addition to my collection.

The microVAX 3100 model 30 has a fairly straightforward configuration. The serial number is reduced to just a vague imprint on the label, so the age of the system is difficult to establish as yet. What makes it unique (for me that is) is the RX23S SCSI floppy drive. No home should be without this fine example of standardisation ;-)

The latest additions to the collection are a VAXstation 4000 model 60 and an Alpha Server 1000A. The VAXstation is not particularly important by itself; there are enough VAXstation 4000 model 60's in the collection already. This one is special because I used it professionally several years back, and it surprised me that it survived until today. In fact some of the DCL routines that I wrote at the time were still running. The Alpha Server 1000A is an example of the early boxes, bigger than the DEC 3000 series and with a lot more options. A welcome addition to the collection. The circle closes with the third system in the set, a VAXstation 2000... It's a rare system for two reasons alone. One, the RD53 system disk is still running. Two, it has 14 MB of main memory, which is IIRC the maximum for a 2000 series.

The table lists all my AXP and VAX systems. You don't really want to know what Intel systems we run as well, right? All systems are fully functional, with the exception of the RD54 in the VAXstation 2000 and a couple of faulty graphics controllers. The network column lists the protocol stacks (other than LAT). TCPIP and UCX refer to the Digital/Compaq/HP IP stack, and DN 4 and 5 refer to DECnet phase 4 and phase 5. DECnet phase 5 is often described as a difficult product; alternatively, phase 4 is called the real DECnet. Admittedly, the phase 4 management interface, NCP, is easy to use and in most cases provides sufficient information. NCL is more complex but it does provide more information on network statistics. NCL is more verbose than NCP and not intuitive at all.

 #  system                    cpu rating   RAM      date built  Operating System         note                network            name
 1  VAXstation 2000           0.9 VUPS     6 MB     wk15 1988   VMS V5.5-1               cluster 1           CMU IP, DN 4       I
 2  VAXstation 2000           0.9 VUPS     14 MB    wk12 1989   VMS V5.5-1               cluster 1           CMU IP, DN 4       Cu
 3  microVAX 3100-10e         3.4 VUPS     20 MB    wk35 1991   VMS V6.1                 stand alone         UCX 4.3, DN 4      S
 4  microVAX 3100-20e         3.4 VUPS     16 MB    wk35 1991   VMS V6.1                 cluster 4           UCX 4.3, DN 4      Se
 5  microVAX 3100-20e         3.4 VUPS     16 MB    wk35 1991   VMS V6.1                 cluster 4           UCX 4.3, DN 4      As
 6  VAXstation 3100           2.8 VUPS     16 MB    wk51 1989   VMS V5.5-1               cluster 1           TCPIP 5.1, DN 4    Rn
 7  VAXstation 3100-M48       3.8 VUPS     24 MB    wk29 1991   VMS V7.2                 cluster 2           TCPIP 5.1, DN 5    He
 8  microVAX 3100-30          4 VUPS       24 MB    ?           VMS V7.3                 stand alone         TCPIP 5.1, DN 4    Pb
 9  VAXstation 4000 VLC       10 VUPS      24 MB    wk51 1992   VMS V7.3                 stand alone         TCPIP 5.1, DN 4    Zn
10  VAXstation 4000-60        14 VUPS      32 MB    wk27 1992   VMS V7.2                 cl. 2; dual head    TCPIP 5.1, DN 5    Xe
11  VAXstation 4000-60        14 VUPS      32 MB    wk24 1992   VMS V7.3                 cluster 3           TCPIP 5.1, DN 4    P
12  VAXstation 4000-60        14 VUPS      24 MB    wk49 1991   VMS V7.3                 cluster 3           TCPIP 5.1, DN 4    Co
13  VAXstation 4000-60        14 VUPS      56 MB    wk27 1992   VMS V7.3                 stand alone         TCPIP 5.1, DN 4    Hg
14  VAXstation 4000-60        14 VUPS      32 MB    wk25 1990   VMS V7.3                 stand alone         TCPIP 5.1, DN 4    Ag
15  VAXstation 4000-60        14 VUPS      32 MB    wk24 1991   VMS V7.1                 stand alone         TCPIP 5.1, DN 4    B
16  VAXstation 4000-90A       25 VUPS      64 MB    wk51 1994   VMS V7.2                 cluster 2           TCPIP 5.1, DN 5    Cl
17  VAXstation 4000-90A       25 VUPS      128 MB   wk51 1994   VMS V7.3                 slave DNS           TCPIP 5.1, DN 4    Ar
18  VAXstation 4000-90A       25 VUPS      80 MB    wk16 1994   VMS V7.3                 spare part          TCPIP 5.1, DN 4    Ni
19  VAX 4000 model 100A       14 VUPS      128 MB   wk50 1993   VMS V7.3                 DSSI cluster        TCPIP 5.1, DN 4    Ce
20  VAX 4000 model 100A       14 VUPS      128 MB   wk50 1993   VMS V7.3                 DSSI cluster        TCPIP 5.1, DN 4    Cr
21  VAXstation 4000-60        14 VUPS      32 MB    wk.. 199.   VMS V7.1                 stand alone         UCX 4.1, DN 4      B
22  Alpha Server 800          ~190 VUPS    1 GB     wk35 1998   Tru64 V5.0 & VMS V7.3-2  :-)                 TCPIP, DN 5        Er
23  Digital Server 5305       ~450 VUPS    4 GB     wk19 1999   VMS V7.3-2               master DNS          TCPIP 5.4-4, DN 4  Os
24  Compaq XP1000             ~260 VUPS    2 GB     wk18 1999   VMS V8.3 & Windows 2000                      TCPIP 5.4-15       Fe
25  Alpha Server 1000a 5/333  ~60 VUPS     704 MB   wk40 1997   VMS V7.3-2               DAT drive           DN 4               F
26  Alpha Server 1200         ~450 VUPS    2.0 GB   wk19 1999   VMS V8.3                 big cluster         DN 4               Sn
27  Alpha Server 1200         ~450 VUPS    1.75 GB  wk08 1999   VMS V8.3                 big cluster         DN 4               Ba
28  Multia VX42 4/233         21 VUPS      128 MB   wk36 1995   VMS V7.3 (!)             EV4/233             DN 4               Cm
29  Alpha Server 1200         340 VUPS     1.5 GB   wk03 1998   VMS V7.3-2               big cluster         DN 4               Br
30  DEC 3000 model 300X       24 VUPS      144 MB   wk40 1994   VMS V6.1                 6.1 cluster         DN 4               O
31  DEC 3000 model 300X       24 VUPS      32 MB    wk15 1994   VMS V1.5-1H1             V1.5!               DN 4               C
32  Alpha Server 800 5/500    ~190 VUPS    1.0 GB               Tru64 V5.1A                                  DN 5 endnode       Na
33  Alpha Server 300          49 VUPS      192 MB   wk29 1997   Tru64 V5.1A              stand alone         TCPIP, DN 5        K
34  Alpha Server 800          120 VUPS     768 MB   wk32 1998   Tru64 V5.1A & VMS        big cluster         TCPIP, DN 5        Bi
35  Alpha Server 1000a        ~60 VUPS     1024 MB  wk12 1997   VMS V8.3                 big cluster         TCPIP, DN 4        U
36  Alpha Server DS10         213 VUPS     768 MB   wk44 2000   VMS V8.4                 IDE disks           TCPIP, DN 4        O3
37  Alpha Server DS20E        ? VUPS       4.0 GB   wk04 2001   VMS V8.3                 KZPCC RAID          TCPIP, DN 4        Ti
38  Integrity rx2600          ~1800 VUPS   16.0 GB  wk04 2004   VMS V8.4                 two 1.3 GHz cpu's   TCPIP, DN 4        Au

As you've probably noticed, the systems are named after chemical elements. Names of elements that are six characters or less are reserved for systems that run DECnet phase IV. There are enough of them for my little collection. The other names are used by unix, linux and Windows systems. It is extremely unlikely that I'll ever own more than 100 systems, right :-) But it is getting harder to find names of chemical elements with six characters or less. Sometimes the name in English will work, e.g. Sodium, which fits, while Natrium won't do.
The Digital Server 3000 was upgraded on 24 December 2004. Memory increased by 128 MB, 50% up, and the processor was changed from a 5/400 to an Alpha Server 800 5/500 processor. The system used to run SETI work packages in nearly the same time as a uniprocessor Digital Server 5305. The 5/533 took a little more than 8 hours, the 5/500 8.5 hours. The 5/533 processor has 4 MB cache, while the 5/500 is limited to 2 MB, and that might explain the difference. Past tense, because SETI was replaced by BOINC in 2006 and that program hasn't been ported to VMS. When a courageous soul succeeds in doing so, let me know and the Alpha's will run again.
In December 2008 the Alpha Server 800 got a memory upgrade. All memory banks were filled with 128 MB boards, so the installed main memory size is now 1024 MB. The system supports 256 MB boards, but considering its current use 2 GB of main memory would be wasted.
The system runs Tru64 V5.0 and, like VMS, recognized the memory increase without the need for tailoring or changes in configuration files. Altogether a worthwhile investment.

The first Digital Server 5305 in the collection is a dual processor system. On 3 June of the same year (2003) I bought a second 5305, mainly for the processor and the additional memory. Its panels looked better too, but more importantly it means spare parts as well. Especially the power supplies are important, because they may be the first to break down and are probably rather difficult to come by. The second machine has a serial number that ends in 1879, while the first machine's ends in 1881. Quite a coincidence.
Memory and processors for these aging systems are not so expensive anymore. In the fall of 2006 I was able to buy two Alpha Server 1200 boards fitted with the 400 MHz version of the EV56 processor. Along with a little memory (128 MB) and controllers and disks that were just collecting dust as spare parts, the system now happily runs VMS V8.3.

In February 2007 that system got upgraded to a dual processor 5/533 system. No changes to VMS, not even an AUTOGEN, just swapping the hardware was sufficient to make it go.

Actually making the Mylex DAC960 board work was most of the problem.

The most remarkable issue of the upgrade was the memory increase. During January 2006 more 32 MB memory boards were added to the system, so that all memory banks were filled. That is contrary to the memory configuration rules as found in the Systems & Options Guides for the 1200 and 5305 models. Apparently the restrictions were lifted by an earlier VMS release. Anyway, 512 MB is just fine for this machine. The two 400 MHz processors have been replaced by one 533 MHz board.

In May 2009 two additional Digital Server 5305 systems arrived. They hadn't been used for a few years and obviously they were in the way. All disks had been removed; luckily they kept the canisters. One system had two cpu's and no memory, the other just one processor and eight 128 MB memory boards. The systems were fitted with the crippled, Windows NT-only KZPCM controllers and these were replaced. The graphics card (S3) works fine with VMS. Each system got a KZPSA adapter to build the SCSI cluster with. The next step was the upgrade of the SRM firmware and putting the incantations in nvram (see the white box section below). Both systems now run VMS V8.3.

The Multia was not really fit to run VMS, owing to its slow cpu performance and limited memory. The system was configured with 40 MB memory and that just won't work with VMS any more. After an hour of uptime the Multia suddenly crashed, even without any work being done on it. Definitely not a stable configuration then.
Shortly after that the Multia was upgraded to 256 MB and VMS ran for longer periods of time. The Multia is a hot box and the large memory boards don't make it easier: more chips, so more heat is emitted. Next, the 64 MB boards are high; they only just fit inside the box. These boards have since been replaced with 16 MB boards and the system runs well even in a hot room. The other reason it may be unstable is that it runs VMS V7.3 and not V7.2, for which the Multia patches were intended. To make this work in the first place, the SRM console must be changed and additional VMS software is needed. Upgrading SRM is simple enough; installing VMS requires patience and a BC09D-03 SCSI cable to connect to an old fashioned VAX style storage expansion cabinet.
As it happens, the hobbyist kits that add support for the Multia are for V7.1-2 and V7.2, and I have neither. All I had was a V7.3 CD so I used that instead. Sure enough, at the end the installation procedure asks you to add foreign support for the disk driver. That procedure fails because the VMS version does not match. The system was therefore rebooted as follows:

>>> b dka0,dva0 -fl 0,80000

That way VMS boots off the hard disk, so all you have to do is copy SYS$CPU_ROUTINES_0B04.EXE;1 from the floppy disk to SYS$LOADABLE_IMAGES. That worked, and the Multia would boot from hard disk without the floppy. Since the expansion cabinet held a SCSI disk it was possible to experiment a little more. An image backup was created and the other device drivers were copied from floppy to that same directory (the GY, EW and EWB drivers). After that DECwindows/Motif ran properly too. So VMS V7.3 will run on a Multia, albeit slowly and, as said, rather unstable. Putting in 64 MB memory may have cured that though.
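
The copy itself is plain DCL. A sketch, where the directory on the floppy is a guess and should be checked with a DIRECTORY command first:

	$ MOUNT/OVERRIDE=IDENTIFICATION DVA0:
	$ COPY DVA0:[SYS0.SYS$LDR]SYS$CPU_ROUTINES_0B04.EXE;1 SYS$LOADABLE_IMAGES:
	$ DISMOUNT DVA0:
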
The Multia runs DECwindows and XDM. The cpu is slow but not annoyingly so. Two things are somewhat problematic though. First, the built-in 2.5" disk is slow and small at a little over 500 MB. Second, the system runs hot. I'm used to the heat output of the 5305, but the Multia really runs hot.
The first addition to the collection this year, it is January 2008, is an Alpha Server 1200. An original one, in the powder blue cabinet that was fashionable for the Alpha Server models. The system came with one 5/400 cpu, 256 MB memory and Debian on its disks. Most notably, the V5.0 firmware was under the impression that it was an AlphaServer 4100, an entirely different beast. The firmware cd's are bootable, so the firmware was upgraded first: via V5.3 to V6.0. After the second step the system suddenly remembered its correct model again. A second CD-ROM drive was installed, along with all remaining 32 MB boards that came out of the 5305. Next, VMS V7.3-2 was installed to make sure everything was alright. There's no immediate purpose for this machine; not yet anyway.

October 2009 saw the arrival of an Alpha Server 300, a kind and much appreciated gift from Mr. Dietrich Fuessel. It is actually the first AS 300 that I've ever seen. It came with 192 MB memory and an 18 GB disk inside, so it contains sufficient storage. I tried to swap four of its 32 MB memory boards with the bigger 64 MB boards inside the Multia. But the Multia didn't like the 32 MB boards and the larger 64 MB boards wouldn't fit inside the Alpha Server 300; that is, not without modifying the internal SCSI cables. The system ran VMS V8.2 to get familiar with this (for me) new hardware model. The installation and behaviour of VMS is very familiar, so it is easy to spot interesting attributes. Performance was an immediate surprise: the system runs very well, given its age and a fairly slow 4/266 processor. The 18 GB Conner disk snarls loudly at system startup but otherwise performs very well indeed. Even though it has similar specifications as the Multia, it is about twice as fast. The Alpha Server 300 now runs Tru64 V5.1A.
A second Alpha Server 1200 arrived in March 2010, courtesy of Martin Hoogenboom. It came with 18 GB disks, two 5/533 processors and 512 MB memory. With disks that size a Mylex 960 controller is simply too slow: it takes a day to configure eight JBOD sets on a system fitted with just 2 GB and 4 GB disks. I can't even think how much time it would take to build RAID sets from 18 GB disks, provided the configuration rules allow useful combinations at all. A RAID-5 set composed of two 18 GB disks is hardly worthwhile. So the Mylex was replaced with the last KZPCM-DX spare. Eventually the system will be configured to run Tru64 V5.1A.
Two weeks later Eric Koenders gave me an AXP 3000 model 300X. It is interesting because it runs VMS with very little memory indeed. Possibly the only reason it works is that it runs a very early version of VMS: V1.5-1H1. Other than giving it another DECnet address and name, I'll leave the system as it is.


April 2011: two systems arrived. An AlphaServer 800 5/333, previously owned by Dietrich Fuessel, which is in excellent cosmetic condition too. Some systems show their age, but the AS800 looks as if it was delivered by Digital just last month. It currently has VMS installed, but eventually it will run Tru64 5.1.

The other system is an AlphaServer 1000a 5/333, courtesy of Victor Palmboom. This system is special because it is the only rack mount system that I own. There's no VGA adapter in the system though, which makes configuring the Mylex 960 a real problem. Its firmware required an update to run VMS V8.3. Another 256 MB of memory was added, so one memory bank remains unused.

August 2011. The collection keeps on growing and the second computer room slowly fills up with two new and fast systems: a DS10 and a DS20E. Especially the latter is an impressive system: two 666 MHz EV67 processors and 4 GB main memory, which makes it the fastest Alpha here.

So what do all these systems do? The Alpha's do serious work. I try to keep up my programming skills, low as they are. The XP1000 and the 5305's connect to the Internet (browsing, downloading VMS patches etc.). Occasionally one 1200 (ex 5305) is used to upload VMS software to hobbyists. One VAXstation 4000-90A (argon) is used to maintain these pages. Other than that, the VAX systems are hardly powered up these days. Until 1996 there were always three VAX systems running in a cluster. After that, Alpha's replaced the VAXes for daily use and the price of a kWh went up, to the extent that it was no longer affordable to keep computer systems up 24 hrs per day.

In March 2013 I purchased an Integrity rx2600, ten years after correctly predicting the day VMS would first boot successfully on the IA64 platform. And now at long last I own one! It's fast, its hardware is unfamiliar, and booting (with EFI) is still somewhat strange. I'll get used to it though. And thanks Ate and Marco!

One Alpha left the list: node SODIUM, a Digital Server 5305. The system wasn't used at all; it was regularly booted just to check that all was still well, and that was about all it did. Hopefully Maarten will get some work out of it.

The ambient temperature dropped in October 2013, sufficiently to power up the big (and power hungry) Alpha's in the attic. About two minutes after it was powered up, Osmium blew a power supply. Node Cesium was sacrificed to keep Osmium going; it proved easier to exchange the power supply than to swap all the peripherals and memory. Dietrich gave me another Alpha Server 800 (the new Natrium) and it was fitted with an additional RAID controller, the KZPCC, and one large 146 GB SCSI disk. This system will be used to store dd'ed images of my entire Digital CD collection. That will take some time though.

Getting these systems was fairly easy. There are quite a few retailers of Digital gear that simply keep the old equipment that was traded in; somehow they do not want to scrap those systems. The build quality of these systems is outstanding. It may be argued that Digital hardware was expensive and often not competitive in terms of cpu performance or IO bandwidth, but it was well engineered and manufactured. The VAXstation 2000 proves that statement. Its hard disk failed during the fall of 2002, but then again that is a Maxtor, not a Digital device.

The Maxtor RD54 was not a bad drive, compared to the competition. Sadly the brand deteriorated considerably: between March 2003 and August 2004 we had four Maxtor drives fail without any warning, two 30 GB disks and two 40 GB disks. All were Windows system disks, so it's a real pain to replace the hardware. Never a Maxtor again, not even in a Wintel box, even though the later models seem to have improved.
Given the problems we've had with Maxtor IDE drives the RD54 is a miracle of longevity.
For IDE drives my preferences are Western Digital and Fujitsu. Preferred brands of SCSI drives are Seagate and Quantum. And of course the drives built and/or sold under the Digital logo. The RZ24 and RZ25 disks are very small at 200 MB and 500 MB respectively, but they are still spinning. These drives were built in 1991. The ephemeral lifespan of a Maxtor drive is a pitiful performance compared to that.

Two of my AXP's came out of the stock of Wetec, an IT company located in Weert, the Netherlands, and I am very pleased with these systems and the support I got from them. Wetec no longer stocks used Digital gear. They are still involved with VMS though, and specialize in storage solutions that far exceed the needs of a hobbyist user.

The VUPS unit of processor speed is only useful when comparing VAX systems of course. The speed ratings for the 5305 and 3000 are based on two sets of measurements: the output of a DCL utility found on the Internet and a small Pascal program. The latter was found to be fairly accurate on various VAX systems.

The program tends to underestimate faster processors, so the AXP ratings are probably not very accurate. The clock rates for the three Alpha's are very much alike, which might seem to indicate similar performance. However, the processors were made in different generations (EV5, EV56 and EV6) and that difference in technology is reflected in the relative performance.




Computer Simulators

Collecting computer hardware is a nice hobby, but it has a few drawbacks. One, the equipment takes a lot of physical space. Two, these systems generate a lot of heat; in other words the electricity bill gets very expensive. Initially I kept my first VAX, a VAXstation 2000, powered on at all times. It is probably one of the reasons that the system is still alive today. A real PDP-11 or PDP-10 system (let alone a Burroughs/Unisys A series mainframe) is out of the question. The solution is to get yourself a simulator. There are quite a few simulators around and some of them are freeware. One that I found very interesting is Bob Supnik's simh software package. It contains simulators for a lot of different architectures and runs on a wide range of platforms, like VMS, unix and linux flavors as well as Windows operating systems. It is quite an experience to see an operating system running again after a long time.

Simh comes with documentation but is still not intuitive to set up. The simh site now has a FAQ that explains the basics. Building one of these simulators on VMS is possible, but the scripts supplied with the distribution kit probably assume more recent compilers than those running on my systems. Simh version 2.10 does not agree well with DEC C V5.6, at least not on my systems. So the simh kits for VMS provided below are based on the simh V2.9 distribution.

The following simulators are available for downloading from this site:

  1. simh 2.9-10 images, compiled with DEC C and linked under AXP/VMS V7.3
  2. simh 2.9-9 images, compiled with DEC C and linked under VAX/VMS V7.2
  3. simh 2.10-3 images, compiled with Visual C++ under Windows XP Home Edition
  4. simh 2.10-4 images, compiled with gcc under RedHat 9

The latest versions that run on Windows/Intel platforms support ethernet. The executables run under XP and Windows 2000. Note that you need additional software to make ethernet support work, even with a pre-compiled executable; the simh site provides the relevant pointers. The simulator has access to the LAN and can participate in a cluster. It is even possible to boot a real VAX from a simh cluster boot host.
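
As an illustration, a minimal simh VAX configuration file with ethernet enabled might look like the sketch below. The file names and MAC address are made up, and the host interface name (eth0 here) depends on the platform:

	; minimal simh VAX configuration sketch (names are examples)
	set cpu 64m                     ; 64 MB main memory
	attach rq0 vms073.dsk           ; container file holding the system disk
	set xq mac=08-00-2B-01-02-03    ; DEC-style MAC address, made up
	attach xq eth0                  ; host network interface, platform dependent
	boot cpu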


There is no Alpha support in simh version 3.8-1. There are other alternatives though to run a current version of VMS on an Alpha. Stromasys offers the Personal Alpha product and Migration Specialties offers a free beta version of FreeAXP. They simulate an AXP 3000-500 and an AlphaServer 400 respectively. Both products are easy to configure. FreeAXP has a built-in tool to create empty container files that emulate a disk. Both products offer direct access to the cd-rom drive of the host, which makes the installation of an operating system straightforward indeed.
There's no need to create an image file of the distribution media. Still, the unix dd command is very useful in this respect; it also works on VMS cd's. On a Tru64 V5.x system the command would be something like this:
dd if=/dev/disk/cdrom0c of=AXPVMSRL083.dd
Note that the image file produced this way can be mounted as a logical disk with the VMS LD command, as shown below. And Nero will burn the file to a cd, which makes it fairly easy to produce a copy when the original cdrom gets damaged.
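
A sketch of what that looks like on a VMS system with the LD utility; the logical disk unit LDA1: is an arbitrary choice:

	$ LD CONNECT AXPVMSRL083.DD LDA1:
	$ MOUNT/OVERRIDE=IDENTIFICATION LDA1:
	$ DIRECTORY LDA1:[000000]
	$ LD DISCONNECT LDA1: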




White Box Alpha's and various Operating Systems

The white box Alpha's were sold by Digital Equipment Corporation into the Windows NT market. The intention was to sell 64 bit processing power to Windows NT customers at lower hardware prices than those charged for VMS and Tru64 capable systems. It probably did not make much sense to design a new NT-only hardware platform; the solution was simpler: modify ("cripple") a regular VMS, Tru64 and NT capable system. Since NT uses an entirely different boot mode than VMS and Tru64, the solution was to either lock the platform into AlphaBIOS mode or to disable the SRM console sufficiently. The white box Alpha's under discussion here belong to the second category and this text will tell you how to, err, remove this restriction. The procedure was not invented by me; it is easily found in the appropriate usenet groups with Google or any other search engine. Mr. Ricardo Ramos suggested that the information ought to be more readily available, so here it is.

DISCLAIMER
Note carefully that the procedures outlined in the following paragraphs are not supported by the manufacturer. Neither VMS nor Tru64 were designed to run on unsupported hardware. That said, the information provided in these pages has worked well for my systems and for others.

If you got to this part then you ought to realize that you're on your own, and that if something unexpected happens there's no way you can get support from Digital, Compaq or Hewlett Packard, right?
Don't let the formal language scare you. You will not need to modify the hardware: there is no soldering involved. All changes are in non-volatile ram memory and reversible. Adding memory to your system, now that is a tricky business :-)
Be assured that I'm willing to answer your questions.
If you install VMS make sure you have a valid hobbyist license from Montagar. The licenses themselves are free; all you need is a serial number from your system. Simh users may enter a serial number they've made up themselves. Note that the serial number shows up in the licenses. Montagar also provides media kits (a.k.a. H-kits) at a *very* reasonable price, around US$ 30 for a cdrom.
There is no longer a Tru64 hobbyist programme available from HP. If you want to run a unix flavour, then NetBSD or one of the Linux distributions are the obvious choices.

The white box Alpha's under discussion here were sold under the name Digital Server and, as the "white box" nickname suggests, they are painted bright white. The systems labeled Alpha Server came in a fancy colour called "gunpowder blue". The following table lists the Digital Server model numbers and the corresponding Alpha Server model, along with the last firmware software version released by Digital Equipment.

Digital Server  Alpha Server  console software
3000            800           V5.8-16
5305            1200          V6.0-4
7305            4100          V6.0

The Digital Server 5000 series also included models with Pentium processors, e.g. the Digital Server 5200. At first glance these systems are remarkably similar to the 5300 and 5305, the models that have an Alpha processor. A 5300 has an Alpha 5/400 processor while the 5305 has a 5/533 processor. The 5300/5305 support up to two processors and up to 4 GB of main memory. The number of processors is not reflected in the name (as it was in the VAX 6xxx series). If you decide to buy a Digital Server 5000 model, make sure which model is actually being sold. Something similar also applies to the Digital Server 7000 series. Check the Hewlett Packard pages for more information on retired Alpha systems. Besides information, these pages also have firmware, microcode and PALcode on-line. The Digital Server 3000 and 5305 models are probably the best choice for a hobbyist system because they are not extremely heavy and have reasonably low power consumption and heat output; read: an affordable electricity bill.

The first step in making VMS run on these systems involves modifying a console file called nvram. The SRM console supports unix-like commands such as ls and cat. So check whether the file actually exists and whether it is empty:

  >>> ls -l nvram
  rwx- nvram            0/2048                  0    0   nvram
  >>> cat nvram
  >>>

This shows that the file is indeed empty. Use the editor to modify the file:

  >>> edit nvram
  10 set boot_reset on
  20 set srm_boot on
  ^Z
  >>> cat nvram
  set boot_reset on
  set srm_boot on
  >>>

Note that ^Z is shorthand for Control-Z. The editor is very simple; >>> help edit will tell you all there is to know. The line numbers do not show when the file contents are listed. Mr. Ramos suggested another way to modify the file. He used the command:

  >>> cat > nvram
  set boot_reset on
  set srm_boot on
  ^Z
This is obviously more compatible with a unix way of doing things, and for sure avoids the functionally challenged editor.


The new console variables are stored in nvram, which is read when the system is initialised. Which is why boot_reset must be turned on as well, I guess. The advantage of this method is that clearing the contents of nvram reverts the system to its original condition. The downside is that with a weak battery the nvram contents may get lost, and that may be annoying. There is another way to create console variables:

  >>> create -nv srm_boot on
This way the console variable is created permanently and its behaviour is similar to all the other factory built-in variables. It seems there is no srm command to remove console variables so typos may be fairly persistent...

The SRM console maintains the type of operating system and you should set that value:

  >>> set os_type openVMS

The openVMS value also suits Tru64 and the linux flavours just fine. Once set there's no need to modify this parameter.
Now switch off the computer and wait at least a minute before switching power on again.
Installing VMS is pretty much straightforward. The distribution is bootable:

  >>> b -fl 0,0 dka500

The installation procedure asks a few questions. It needs passwords for two privileged accounts. It also asks for a nodename and an identification. The nodename is limited to 6 alphanumeric characters. The identification translates to the DECnet address. If you don't need DECnet, use 1025.
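
The arithmetic behind that identification: a DECnet phase IV address consists of an area and a node number, and the identification is simply area times 1024 plus node. A quick sketch:

	identification = area * 1024 + node
	1025 = 1 * 1024 + 1     (node 1.1, the suggested default)
	1042 = 1 * 1024 + 18    (node 1.18, CERIUM in the DSSI section below)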

VMS is a great operating system. One of the reasons for its terrific reliability is the rigorous testing procedures each new release must pass before it is sent to customers. The downside is that these procedures are very expensive, and thus hardware support is somewhat more limited than with other operating systems. Check the openVMS FAQ for supported hardware, especially the chapter about graphics controllers.
The Digital Server 3000 comes with a built-in SCSI controller that is supported by VMS and Tru64. No problems there. Download the appropriate operating system installation manual from the HP site and you can start the installation. The Digital Server 5305 may be fitted with a 53C875 SCSI controller that has crippled firmware: VMS reports a CRC checksum error, fails to recognize the controller, and thus will not load device drivers for the SCSI disks connected to it. Either you buy a supported SCSI controller or you put a SCSI-2 disk next to the CD-ROM drive; there is a spare 50 pin SCSI-2 connector and a power connector available in the unused top bay.
This may also apply to the Digital Server 7305. I've never even seen a 7305, just the Alpha Server 4100, so I have no idea what kind of hardware is likely to be present in the 7305 series.
HP still has the Systems & Options Catalog pages for the Alpha Server 1200 on line.
Even though it is possible to run VMS on white box Alpha's, running Tru64 is less easy. Tru64 installs on a Digital Server 3000; versions V5.0A and V5.1B worked well.
However, installing Tru64 on a Digital Server 5305 just does not work. The process fails immediately after the kernel is loaded from the cdrom and panics when it tries to allocate swapspace. The same cd works fine on an Alpha Server 1200. So there still is a puzzle to solve.




The DSSI bus and DECnet

The DSSI bus uses the same class driver as the CI bus: PADRIVER. In the eighties CI clusters were quite popular, especially in companies where uptime was considered important. In those days systems failed more often than now; sites with more than 40 computers saw bi-weekly visits from their friendly field service engineer. And DEC gear was considered to be first class equipment in terms of engineering and build quality.
The CI bus was designed as a high speed (70 Mb/s) peripheral bus. It connected systems and peripheral controllers. The systems were high end VAX models, like the 11/780, 8650, 8550 and the 6000/7000 series. The CI adapters connected to the UNIBUS, BIbus or XMIbus, all high speed internal buses in their day. A third party vendor built a CI adapter for the Q-bus when the faster VAX 4000 systems were released (e.g. the VAX 4000 model 705A). The controllers were the HSC models for the RA and TA disk and tape devices; later the HSJ models connected SCSI peripherals.
But the CI bus could also be used as a physical layer for DECnet. Not that the CI bus was particularly suited for DECnet but a CI interface and an ethernet interface for the UNIBUS were very expensive devices. If a system did not really need an ethernet interface then the CI would do nicely. Since one system in a CI VAXcluster needed a DECnet routing license anyway, there was no extra cost for additional software. Really a neat solution at the time.
Please note that DECnet at that time was the glue between different systems and different networks of all kinds of vendors. There was DECnet support for ethernet, serial and parallel lines, X25 and all kinds of cabling systems. Besides VMS there was DECnet for RT-11, RSX-11, RSTS and ultrix (DEC's unix flavor), as well as for AIX, Motorola unix and Apollo hardware. Layer 4 gateways existed that translated DECnet over 10BASE2 ethernet to VTAM PU's and LU's with a channel attached controller or a 3174 control unit. Today IP is seen as the de facto open protocol; DECnet did just that two decades earlier, albeit on a smaller scale, though wide area DECnet networks certainly did exist.

DSSI is rather similar to CI, so the question was whether DECnet would run over DSSI as well. The practical use of the circuit is limited, since all VAX and Alpha systems that have DSSI are also equipped with on-board ethernet or factory fitted with an ethernet interface. So the experiment is rather academic. The text that follows was also posted in comp.os.vms (July 2005). The discussion shows that the human memory is remarkable but not absolute; that's why the results are also on this page.
DECnet over DSSI works just like DECnet over CI, as documented in the DECnet for OpenVMS Networking Manual, chapter 5. The text in paragraph 5.2.3.1, Running DECnet over the CI, is syntactically correct but in my opinion somewhat misleading. This DECnet manual is clearly intended as a tutorial; e.g. it explains DECnet over serial lines in detail, with many examples, and all the required commands are minutely explained. The first step is that the appropriate driver must be loaded. In a DSSI cluster that means that the two (or three) systems involved must execute a DCL statement in SYSTARTUP before the network is started:

	$ MC SYSGEN CONNECT CNA0/NOADAPTER
Next, you need to know that DSSI and CI address nodes on what is basically a multidrop bus, so the node address is essential information. DSSI follows SCSI in the sense that systems generally have high node numbers, 6 or 7. The VAX console offers the SHOW DEVICES command; for my own two systems the address list was as follows:

device name    node name  DSSI address  SCSI address  DECnet address
VAX 4000-105A  CERIUM     6             6             1.18
VAX 4000-105A  CHROOM     7             7             1.20
RF31           DISK0      0             n/a           n/a
RF31           DISK1      1             n/a           n/a

The column that matters is the one labeled DSSI address, because it contains the node numbers for the systems and the disks on the DSSI bus. The previous administrator must have changed the SCSI address to match the one for DSSI, even though a shared SCSI cluster is not a viable option for VAX systems.

The DECnet license is an issue here as well, because if the systems only have an end node license (called DVNETEND) then only one line is supported in the on state. The Montagar license kit contains a routing license (DVNETRTG) and that ought to be loaded. The DCL statement that runs the license tool to load DVNETRTG is included in the examples below just to make this point.

The correct procedure to run DECnet over DSSI is as follows for node CERIUM:

	$ mc sysgen connect cna0/noadapter
	$ license load dvnetrtg
	$ mc ncp set exec state off  
	$ mc ncp def line ci-0 state on
	$ mc ncp def circ ci-0.6 tributary 7 state on cost 2
	$ @startnet

The DSSI bus behaves as a multidrop bus, but DECnet circuits are point to point by design, irrespective of the physical layer. Each node must define its own address as well as the station address at the other end. The local address is found in the name of the circuit itself, CI-0.x for DSSI address x. The remote address is defined by the keyword tributary, whose value must match the remote system's DSSI address (in this case 7).
For those systems with two DSSI buses, like the VAX 4000-105A, a second line must be defined (CI-1) and the corresponding circuits are called CI-1.x. Possibly one ought to connect device CNB0 in SYSGEN as well to make this work.
If there is more than one path between two nodes, DECnet uses an attribute called cost to compute the best route. The default cost for a CI circuit is 10 and just 4 for an ethernet link. To make the CI circuit do any work at all, either reduce the cost of the CI circuit (as in the example above) or raise the cost of the ethernet circuit (ISA-0), as shown below.
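
The alternative, raising the ethernet cost instead, would be a one-liner. The value 15 is arbitrary; anything higher than the CI circuit's cost will do, and the change to the permanent database takes effect the next time the network is started:

	$ mc ncp def circ isa-0 cost 15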

At this point, when node CHROOM is not yet configured, the circuit CI-0.6 state is on and its substate is synchronizing. For the other node, CHROOM, the correct steps are:

	$ mc sysgen connect cna0/noadapter
	$ license load dvnetrtg
	$ mc ncp set exec state off
	$ mc ncp def line ci-0 state on
	$ mc ncp def circ ci-0.7 tributary 6 state on
	$ @startnet

Note that the tributary address defined on CHROOM must match the circuit name of CERIUM and vice versa. The ethernet circuit on node CHROOM was defined STATE OFF. The DCL command SHOW NET/OLD on node CERIUM resulted in:

	$ sh net/old
	OpenVMS Network status for local node  1.18 CERIUM on 15-JUL-2005 18:06:32.21


	              Node         Links   Cost   Hops   Next Hop to Node
	
	          1.9    OSMIUM       1      4      1    ISA-0   ->  1.9   OSMIUM
	          1.20   CHROOM       1     10      1    CI-0.6  ->  1.20  CHROOM

	                Total of 2 nodes.
	$

OSMIUM is a Digital Server 5305 that runs DECnet over ethernet. The ISA-0 ethernet circuit on node CERIUM was then switched off, and this way the CI-0.6 circuit became the active circuit.
The DECnet documentation is somewhat vague about the circuit definitions. The example defines (actually with NCP SET commands) two circuits on one node and offers no further explanation why certain values are used, let alone their purpose. Two circuits over the same line (CI-0) indicate a machine like a VAX 4500 or comparable system, which allows DSSI buses with three nodes attached. That is a different configuration from a VAX with two DSSI channels that both connect to another, single system.



up
next


Hardware Information

Identifying hardware is not a simple matter. An RA80 is simple: a 121 MB Winchester-type disk drive. The RZ23 was slightly more complicated; it came in two versions with different storage capacities. The RZ26 and RZ28 disks were even more difficult because two, say, RZ28 disks might have been made by different manufacturers. While the total block capacity matched the specification listed in the System & Options Guide, or possibly even exceeded it somewhat, it was nevertheless bad news for system managers who tried to create shadow sets (RAID-1).
Digital Server 5000 was a generic name for three different systems, powered either by an Intel CPU or by one or two Alphas. The detailed product name gave away what was actually inside: a 5200 had the Intel processor, a 5300 had a 400 MHz EV56 Alpha and the 5305 had the 533 MHz Alpha.

A close inspection of the label on the back of the box provided more information. One Digital Server has the following information on the label:

	MODEL   FR-K3F5W-AB
	SN	NI82808197
	PN	5305  6533A
With disks the information gets more complicated. On the bottom of a white SBB canister one may find the following data:
	MODEL	RZ1BB-VH
	SN	AY82811490
		FR-CDCBA-CA
		70-31488-43   A01
Note that the MODEL and PN fields have different meanings. And another identification is added: this one is possibly used for field maintenance, to identify the part number in the spare part stocks. The A01 is a version and patch number that identifies the firmware embedded in the device.

There is a lot more to tell about these codes and when I get the information it will be added...



Hardware serial numbers

A Digital serial number, the SN on the label that is glued to the back of the system, uses this simple format:
	FFYWWsssss
This somewhat obscure code may be translated as follows:
	FF    : the factory code; a two-letter abbreviation that identifies the plant
	Y     : the last digit of the production year; so 1 means either 1991 or 2001
	WW    : the week in that year; a two-digit value
	sssss : a five-digit sequence number; the first digit may be an A, possibly
	        indicating that more than 100000 units were actually made in that plant
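As an illustration, a minimal DCL sketch (the symbol names are mine) that splits a serial number into these fields with the F$EXTRACT lexical function:

	$ sn = "AY80300302"
	$ plant = f$extract(0,2,sn)
	$ year  = f$extract(2,1,sn)
	$ week  = f$extract(3,2,sn)
	$ seq   = f$extract(5,5,sn)
	$ write sys$output "plant=''plant'  year=''year'  week=''week'  seq=''seq'"

For the serial number above this prints plant=AY, year=8, week=03 and seq=00302; the interpretation of those values follows below.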
The single digit for the production year may be ambiguous. In practice most hardware was manufactured during a short production run of 2 to 5 years, so the decade can usually be inferred.
A serial number carries information that may be of interest, especially if you must pay hard cash for the system. Even then, the model will probably tell you about as much about its age as the serial number will.
An AlphaServer 1200 with the serial number AY80300302 tells you that the system was manufactured between 12 and 17 January 1998 (week 03 of 1998). Then again, an AlphaServer 1200 was produced between the end of 1997 and early 1999 (if that late).
The trailing 302 tells you that it was the 302nd 1200 built there. The AY explains where: in Ayr, Scotland (UK). A lot of Digital gear found in Europe carries an AYxxxsssss serial number. But there are other plants, and comp.os.vms has a list of abbreviations for them:
	AY = Ayr, Scotland
	GA = Galway, Ireland
	NI = Salem, New Hampshire, USA
	PC = Irvine, Scotland
	KA = Kanata, Ontario, Canada
	KB = Kaufbeuren, Germany (RA8x plant)
	CX = Colorado Springs, CO, USA
	WF = Westfield, MA, USA
	AB = Albuquerque, NM, USA
	IQ = ?
	HY = ?, Japan (where the LN03s were made)
	CA = Misato, Japan (where the LN09s were made)
	TY = ?, Japan (where the LA75s were made)
	TA = ?
	PY = ?
	KL = ?
The list is not entirely complete and there may have been more plants. DEC had a large refurbishing plant at Nijmegen in the Netherlands; I am not sure, but possibly they issued their own serial numbers.



Power Consumption

Computers run on mains power, so it is no surprise that running a few computers at the same time seriously influences your electricity bill. Running old gear is not a very expensive hobby, but the power bills hurt more every passing year. The main surprise is that VAX and especially Alpha systems keep drawing power even with the power switch in the off position. TV sets and PCs do this too, though Alphas tend to be somewhat power hungry about it.
The table below lists power measurements for a few systems. All measurements were done with a Brennenstuhl PM230. It measures power, actual voltage supplied on the socket, current drawn and the phase angle (cos(phi)). The systems were connected to 230 Volts standard European mains voltage.
  --------------- hardware ----------------------     ----- power -----
  platform                memory      peripherals      off         on
  Intel pentium IV pc     2048 MB     5 IDE disks      11 W       140 W
  VAXstation 4000-90A       32 MB     3 SCSI disks     15 W       121 W
  Alpha Server 800         512 MB     5 SCSI disks     10 W       150 W
  Alpha Server 1000A       768 MB     5 SCSI disks     36 W       210 W
  Alpha Server 1200       1.75 GB     8 SCSI disks     72 W       500 W
  Alpha Server DS10        768 MB     3 SCSI disks     17 W       179 W
  Alpha Server DS20E       4.0 GB     6 SCSI disks     46 W       420 W
Obviously the AlphaServer 1200 is the most expensive in terms of power requirements. Since I own six of these, their annual power consumption was estimated to be as much as 15% of our yearly power bill. As of now all VAX and Alpha systems are behind power strips that have their own on/off switch. I firmly believe in keeping equipment connected to mains power, preferably switched on, because this improves the life of the system; however, mains power gets increasingly expensive, and spending 1500 kWh on computers each year (around 400 euro) is simply too much.
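To put that 1500 kWh in perspective, a back-of-the-envelope sum (the eight hours per day is merely an assumed duty cycle):

	one AlphaServer 1200 at 500 W, eight hours a day:
	0.5 kW x 8 h x 365 days = 1460 kWh per year

So a single 1200 in regular use accounts for nearly that entire amount on its own.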

up
next


The AlphaServer 1200


Memory

The AlphaServer 1200 uses unregistered ECC synchronous DRAM according to the system specification. That is not enough information for a hobbyist to commence shopping on eBay. Fortunately for me several machines had Micron memory boards inside and Micron documents its products rather well. The documentation for MT18LSDT3272AG memory (actually a series of products) provides excellent information. The memory is 72 bits wide, PC100 and PC133 compliant, uses a single +3.3V power supply and supports CL 2 and CL 3 access times.
DEC offered three memory options for this machine.

	model       memory     maximum system
	number      capacity   memory size
	MS300-BA     64 MB        512 MB
	MS300-DA    256 MB          2 GB
	MS300-EA    512 MB          4 GB
Each memory option consists of a pair of boards: an MS300-BA kit holds two boards of 32 MB each, an MS300-DA kit two boards of 128 MB each. Note that there is no MS300-CA option in the list. A test with two 64 MB modules is planned!

The AS1200 has two memory bus slots on the motherboard. A memory riser card fits into each slot, and the riser card in turn has eight memory slots, identified as slot #0 .. slot #7. Slot #0 is nearest to the connector edge of the riser board. The configuration rules state that corresponding memory slots on the two riser boards must contain the same memory. Furthermore, the highest capacity boards go in the lowest slots, followed by smaller capacity boards. The final rule was that when only MS300-BA memory is used, only the first four memory slots may be used. This restriction was apparently lifted a year or so after the system had been released: with firmware version 6.0 installed there is no problem filling all slots with MS300-BA memory. An example population is sketched below.

The MS300-EA kit consists of two 256 MB boards and was not immediately available. Supporting documentation from DEC mentioned that the production of these high capacity memory chips was not yet reliable and hence production volumes were still insufficient (around 1997). Another issue was cost: ECC SDRAM boards were very expensive, so it was not unusual for newly sold systems to hold as little as 64 MB (in 1998 or 1999). A system with 4 GB memory would have been very expensive indeed. Interestingly, the upper memory limit of the AS1200 is still just within the theoretical memory address range of the VAX processor.
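As an illustration of these rules, a hypothetical population with one MS300-DA kit and one MS300-BA kit (320 MB in total):

	slot          riser board 0   riser board 1
	slot #0          128 MB          128 MB     (the MS300-DA pair)
	slot #1           32 MB           32 MB     (the MS300-BA pair)
	slot #2..#7       empty           empty

The larger boards occupy the lowest slot, and each slot number holds the same board on both risers.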
Today memory is a lot cheaper, and MS300-DA sets may be found on eBay priced between $100 and $300; it depends on who is selling to whom. There are still sites that run the AS1200 in production, and vendors addressing those customers sell at prices and conditions quite different from what a hobbyist owner is willing to pay, considering the way the system is used.
For most companies the system is getting too old for production purposes though, and consequently MS300-xx options are becoming more difficult to find. The alternative is to buy memory that also works in the AS1200 even though it does not bear the name Digital and an MS300 model number. The problem is figuring out what will work and what won't.
The specifications for MS300 class memory reported here have been determined by reverse engineering. One system contains Micron Technology memory boards, so we know these work in AlphaServer 1200 and Digital Server 5305 systems. Micron (www.micron.com) supplies data sheets for all their current products and their spec sheets are very informative.
ECC memory is different from ordinary PC memory: it has the ability to correct memory errors. Visually, ECC memory is recognizable too: each board has 9 memory chips per side, the nine 8-bit chips providing the 72-bit width (64 data bits plus 8 check bits). Note that MS300-BA boards are single sided; the other two models use double sided boards. Non-ECC memory boards have only 8 memory chips per side and simply won't work.
Like PC memory, the Alpha specifications use CL and bus speed ratings. A higher bus speed means faster transfers: PC100 memory is clocked at 100 MHz versus 66 MHz for PC66. For the CAS latency (CL) the opposite holds: CL 2 memory responds in fewer clock cycles than CL 2.5 or CL 3 memory. The AS1200 was designed for 66 MHz memory and accepts CL 2 as well as CL 3 boards.

Remember the rule that each corresponding pair of memory slots must contain the same memory boards?
Well, apparently the boards may differ in clock speed. I installed memory pairs consisting of one PC100 and one PC66 board, and the system did not report errors during the power-on memory test, nor did VMS complain. Not surprising, since the system was designed for PC66 memory.
Particular to the AS1200 and the AS4000/4100 is the concept of fixed starting addresses for each memory slot. The system was designed to accommodate 256 MB boards, so if a slot is occupied by a 32 MB or 128 MB memory board the memory space is not contiguous but contains holes. These memory holes are known to VMS and it knows how to deal with them.
Details may be found in the Digital Technical Journal, Volume 8, Number 4. The paper discusses the AlphaServer 4100, which is also a member of the "tincup" design series. I always thought that the 1200 was just "half a 4100" but that is quite incorrect; a 1200 is actually closer to an AS4000 in design. The main differences, besides the number of processors supported, are the processor speeds: the AS1200 supports the 21164A Alpha processor running at 400 MHz or 533 MHz, while the AS4x00 supports 466, 533 and 600 MHz. The 4100 addresses up to 8 GB memory, installed in four pairs of slots. The 1200 can't do that, as the output of the System Dump Analyzer shows:
	System Configuration:
	---------------------
	System Information:
	System Type   DIGITAL Server 5000 Model 5305 6533A 5/5  Primary CPU ID 0.
	Cycle Time    1.88 nsec (532 MHz)                       Pagesize       8192 Byte

	Memory Configuration:
	Cluster      PFN Start      PFN Count           Range (MByte)            Usage
	#000               0            256            0.0 MB -        2.0 MB    Console
	#001             256          32512            2.0 MB -      256.0 MB    System
	               32768          32768                                      NXM
	#002           65536          32768          256.0 MB -      512.0 MB    System
	               98304          32768                                      NXM
This system is fitted with fourteen 128 MB boards; the memory holes are tagged as NXM (nonexistent memory) in the output.
A pair of 512 MB boards however won't work in this system: it reports a hardware exception on which the processor loops without recovering. An AlphaServer 1200 fitted with 4 GB memory reports this for the CLUE CONF command:
	Memory Configuration:
	Cluster      PFN Start      PFN Count           Range (MByte)            Usage
	#000               0            256            0.0 MB -        2.0 MB    Console
	#001             256         524023            2.0 MB -     4095.9 MB    System
	#002          524279              9         4095.9 MB -     4096.0 MB    Console 
Note that the NXM entries are gone.
Examples of third party memory boards that are known to work in an AlphaServer 1200:
	Micron
		MT18LSDT1672AG-10EC7  128 MB  PC100 CL2
		MT18LSDT1672AG-66B7   128 MB  PC66
		MT18LSDT3272AG-10EB1  256 MB  PC100 CL2
		MT18LSDT3272AG-10EE1  (a more recent version)
	SEC (Samsung)
		KMM374S1623BTL-G0     128 MB  PC100 CL2
Memory boards of an unsupported size behave strangely: a pair of 64 MB boards is recognized by SRM as a pair of 16 MB boards. The memory is usable by VMS, all 32 MB of it.
One system, Osmium, is fitted with sixteen 256 MB boards. The memory size of that machine is 4 GB, the maximum it was designed for. The memory test (T24, when powered up) now takes 96 seconds.



Processors

Mixing processors that run at different speeds is not allowed according to the AlphaServer 1200 specification: it is not possible to put one 400 MHz and one 533 MHz processor board in the same chassis, because the two processors would access memory at different frequencies and that won't work well together.
The Digital Server 5305 and the AlphaServer 1200 share many parts and configuration rules apply equally for both models.
So what happens if you put both a B3007-CA (AS 1200) and a B3107-CA (DS 5305) together in one system?
The quick answer is that irrespective of the position of the two boards, the system remains a Digital Server 5305. The last line printed by SRM, immediately before it starts the operating system proper, shows the identification string for the processor, which is what VMS reports back too. Whether the B3007-CA sat in the top CPU slot or in the bottom slot, in both cases the system remained convinced it was a Digital Server.
The main reason for the experiment was that, had it succeeded in making a dual-CPU AlphaServer 1200, it would have been possible to install Tru64 on a 5305 fitted with an extra AS1200 CPU; the Tru64 installation process fails on a 5305. The perception was also that B3007-CA boards are more common but also more expensive than the B3107-CA. One simple search on eBay (mid-2009) proved me wrong: three of each were up for auction, and the prices ranged from $81 for a B3007-CA to $356.85 for a B3107-CA!



Firmware

January 2013 held a surprise: as of January 1st one of the Digital Server 5305 systems (Cesium) no longer booted automatically. VMS booted alright, but it waited for a correct date and time to be entered manually. Initially this was diagnosed as a flat battery on the motherboard, but in that case the system should have forgotten its boot settings as well. Another known cause of this behaviour is dual booting, i.e. systems where VMS is booted alternately with Tru64 or Linux. Cesium however is a VMS-only system, so the cause had to be elsewhere. Then I noted that all Digital Server 5305s did the same thing, along with the AlphaServer 1200 models. What was going on here?
A thread on comp.os.vms provided the answer.
The thread, called Changed Boot behavior on XP1000 since 2013, was started on 18 January 2013. The analysis showed that the time-of-year clock data is mangled by the firmware: from January 1, 2013 up to December 31, 2040 the date offered to VMS is way off and the operating system prompts the operator for a correct value.
Systems affected are:

      Alpha Server 1200 5/400
      Alpha Server 1200 5/533
      Digital Server 5305
      Compaq Professional Workstation XP1000
All models listed run the latest firmware released for that model. The solution for the problem is to add an entry in nvram:
      >>> edit nvram
      *10 deposit -b toy:9 0D
      ^Z
The value 0D in hexadecimal notation translates to 13 decimal; it overrides the value in byte 9 of the TOY (Time Of Year) clock. Each new year this value ought to be changed: 0E for 2014, 0F for 2015, 10 for 2016, and so on. Note that this fix will not work on XP1000 systems. HP however has produced a fix for those; it is available for systems covered by a maintenance contract. Hobbyists have no access to patches any more.
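The pattern is simply the last two digits of the year, written in hexadecimal. For those who prefer not to do hex arithmetic by hand, a one-line DCL sketch (the !XB directive of F$FAO prints a byte in hexadecimal):

	$ write sys$output f$fao("!XB", 2016 - 2000)
	10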
There are still XP1000 systems covered by a maintenance contract, so perhaps HP will fix the firmware and release a new version. But since HP is no longer an engineering company, there won't be an internal drive to fix an obvious error in an existing, aging product. Luckily we know how to handle nvram....

up
next


Notes on CD images


dd - iso - nrg

It is useful to make a copy of an original CD, if only to make sure that the original remains in good condition. CDs were marketed as nearly indestructible way back in 1983, which they were, compared to LPs. In the meantime we have learned to be a little more careful with the medium.
All procedures were performed with original CDs in my possession. They were obtained legally and the copies were made for my own use.
Having said that, it is my understanding that VMS distribution CDs may be copied and even given away to others. The right to run the software contained on the CDs is quite another matter: you need a license to run VMS, its system integrated products or any of the layered products. Having a Paper Authorization Key is not necessarily the same as having a valid license. The rules for transferring software licenses when a computer is sold are unclear to me; all I know is that the rules changed over the years and that transfer of ownership of software licenses is possible. Stay away from legal problems and do what all hobbyists do: get yer licenses from Montagar.
There are of course many ways to copy CDs, even ones written in a format as foreign as Files-11 ODS-2. So this method is just another way of doing things: possibly new for some, more likely weird for others and perhaps new trivia for all. In this chapter the term CD means the physical disk and the word image refers to the copied contents collected in one single file on the disk of some system. The internal organisation of the image depends on the way the image was created. When the term CDrom drive is used, feel free to think DVD drive (or even Blu-ray drive); provided you have the software to write CDs on those devices there is really no difference in the procedure.
Burning CDs and DVDs is done here with Nero 6 on a Windows XP / HP Intel platform. Making images of VMS distribution kits is easy: just put a VMS (or Tru64) disk in the CDrom drive, select the desired recorder (Nero calls this the image writer) and burn the copy. Nero 6 writes those images to files with the .nrg filetype. When you want to burn a CD, Nero accepts just two filetypes for input, .iso and .nrg, and since I have no intention of becoming a Nero expert I tend to use the .iso filetype.
So how do I obtain images with the .iso filetype? I figured out that Nero doesn't really care what is inside an .iso file, even though it is safe to assume that the .iso filetype is a standard (and very likely a standard of many rules and discussions!). On a unix platform, like one of the Tru64 V5.0 or Tru64 V5.1A systems in my collection, the dd command comes in handy:

      dd if=<input device> of=<output file>
  example:
      dd if=/dev/cdrom0c of=vaxvms073.dd
The filetype .dd is my own choice; it is a clear indication of how the file was created. These image files may be copied to any other platform by means of ftp in binary mode; VMS, Windows and Linux systems had no difficulties using the copied images.
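A quick sanity check after such a transfer: dd reports the number of 512-byte records it copied, and a VMS directory listing reports file sizes in 512-byte blocks, so the two numbers should match. On the VMS side:

      $ DIRECTORY/SIZE=ALL VAXVMS073.DD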
First of all, the .dd file may be copied to my Windows XP system, where it can be used unaltered for two purposes. A simulator like simh may boot from it and install VAX/VMS from it. And when the file is copied (or renamed, which is faster) to vaxvms073.iso, Nero happily burns it to a CD that is readable and bootable on a VAX. Note that not all VMS versions were originally distributed on CD; many came on various kinds of tapes and removable disk media, none of them bootable, so you needed another method to boot the hardware in order to install or upgrade the operating system.
Secondly, the .dd files may be used on a VMS system as a logical disk. The VMS LD utility implements logical disks (and tapes): the associated device name is LDAn: and with the LD command you manage the connection between a logical device and a container file.
      $ LD CONNECT VAXVMS073.DD
      %LD-I-UNIT, Allocated device is LDA1:
      $ MOU/OVER=ID LDA1:
  alternatively:
      $ MOU/FOREIGN LDA1:
  the volume label is displayed, so next:
      $ DISMOUNT LDA1:
      $ MOUNT/SYSTEM LDA1: VAXVMS073
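When the image is no longer needed the steps are reversed (a sketch; LD DISCONNECT releases the container file again):

      $ DISMOUNT LDA1:
      $ LD DISCONNECT LDA1: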

up