Card Columns 
 Card Columns

There was a VSAM KSDS business where the high-level index record was
replicated around a full track, using this "trick."  I wish I could
remember what they called it.

- Steve Myers


Quote:
>For example, around 1974, I discovered a trick for reading an
>entire trackful of fixed-length records with almost no rotational
>latency. This was done with a channel program consisting of a set
>of chained "ReadCKD" commands equal in number to the number of
>records/track, and *no* SearchID command and TIC at all. This
>would read the entire track, starting with the first block to
>pass under the head, regardless of which one it was. The program
>then sorted out which block was which by examining the record
>numbers in the input buffers. I could read a sequential file
>nearly twice as fast that way. This technique, combined with
>other tricks I won't go into here, greatly accelerated start-up
>of the RETAIN "database"(1) when the system had to be restarted.

>(1) ...in quotes because it wasn't a "real" DBMS, just another
>one of our "hand-crafted" RETAIN-specific technologies.
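The latency-avoiding read described above can be modeled with a toy sketch. This is plain Python simulating the idea, not actual channel-program code; the record layout and function names are my own invention:

```python
# Toy model of the "no SearchID" full-track read: chained ReadCKD
# commands capture every record starting at whatever block happens to
# pass under the head first; software then reorders the buffers by the
# record number (R) carried in each count field.

RECORDS_PER_TRACK = 8  # illustrative; the real value depended on block size

def read_full_track(track, start_index):
    """Simulate the chained ReadCKD channel program: return all records
    in rotational order, beginning at an arbitrary starting record."""
    n = len(track)
    return [track[(start_index + i) % n] for i in range(n)]

def sort_by_record_number(buffers):
    """Post-read fix-up: put the buffers in logical order using the
    record number, instead of paying rotational latency up front."""
    return sorted(buffers, key=lambda rec: rec["r"])

track = [{"r": r, "data": "block-%d" % r}
         for r in range(1, RECORDS_PER_TRACK + 1)]
raw = read_full_track(track, start_index=5)   # head happened to be at record 6
in_order = sort_by_record_number(raw)
assert [rec["r"] for rec in in_order] == list(range(1, RECORDS_PER_TRACK + 1))
```

Whatever the starting record, one revolution captures the whole track; the in-memory sort restores logical order essentially for free.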



Tue, 12 Jul 2005 10:12:03 GMT  
 Card Columns

Quote:

> In OS/360, the system API provided access to a table of these
> parameters for any given disk drive, called the Device
> Characteristics Table.

However, later devices required a more complex algorithm (data was
recorded in multi-byte chunks, so that numbers had to be rounded up to
the next chunk boundary), which was implemented as an API, so that it
wouldn't become obsolete again.
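The chunk-rounding the later devices required is just round-up-to-multiple arithmetic. A minimal sketch (the 32-byte chunk size here is an illustrative assumption, not any particular device's geometry):

```python
def round_up_to_chunk(nbytes, chunk):
    """Round a field length up to the next recording-chunk boundary,
    as later DASD capacity calculations required."""
    return ((nbytes + chunk - 1) // chunk) * chunk

# With a hypothetical 32-byte chunk, an 80-byte record occupies 96 bytes.
assert round_up_to_chunk(80, 32) == 96
assert round_up_to_chunk(64, 32) == 64  # already on a boundary
```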

--
John W. Kennedy
"The poor have sometimes objected to being governed badly;
the rich have always objected to being governed at all."
   -- G. K. Chesterton, "The Man Who Was Thursday"



Tue, 12 Jul 2005 12:34:41 GMT  
 Card Columns

Quote:

> So I don't know if they made PDSE's compatible with the old-style
> PDS, or if you could use a PDSE to hold any of the old SYS1...
> datasets. Know anything about that?

PDSE's were implemented in VSAM.  BPAM was given a thorough emulator
that worked for all applications coded with documented BPAM API's.  One
new function was added, an option for STOW that emptied a PDSE,
restoring it to its virgin state.  A good thing, because I had written
a program to do that for PDS's, and it wouldn't have worked on PDSE's --
but all I had to do was add a test to see if it was running on a PDSE,
and if it was, call the new STOW macro instead of my original kludge.
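The fallback logic described above amounts to a feature test and dispatch. A hedged sketch in Python rather than assembler, with every name here (`is_pdse`, `stow_empty`, `delete_all_members`) invented for illustration -- the real interface was the STOW macro and its new option:

```python
def reset_library(dataset, is_pdse, stow_empty, delete_all_members):
    """Empty a partitioned dataset: use the new STOW-style 'empty'
    operation on a PDSE, else fall back to the original per-member
    kludge that only works on a classic PDS. All callables are
    hypothetical stand-ins for the real system services."""
    if is_pdse(dataset):
        stow_empty(dataset)       # one call empties a PDSE
        return "stow"
    delete_all_members(dataset)   # original hand-rolled kludge for a PDS
    return "kludge"
```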

Load modules couldn't be put in PDSE's at first.  That waited a couple
of years until the new link-editor was created, which added all sorts
of new functions, like ESD's longer than 8 bytes, and optional case
sensitivity.

--
John W. Kennedy
"The poor have sometimes objected to being governed badly;
the rich have always objected to being governed at all."
   -- G. K. Chesterton, "The Man Who Was Thursday"



Tue, 12 Jul 2005 12:34:43 GMT  
 Card Columns

Quote:


>> So I don't know if they made PDSE's compatible with the old-style
>> PDS, or if you could use a PDSE to hold any of the old SYS1...
>> datasets. Know anything about that?

> PDSE's were implemented in VSAM.  BPAM was given a thorough emulator
> that worked for all applications coded with documented BPAM API's.  One
> new function was added, an option for STOW that emptied a PDSE,
> restoring it to its virgin state.  A good thing, because I had written
> a program to do that for PDS's, and it wouldn't have worked on PDSE's
> -- but all I had to do was add a test to see if it was running on a
> PDSE, and if it was, call the new STOW macro instead of my original
> kludge.

> Load modules couldn't be put in PDSE's at first.  That waited a couple
> of years until the new link-editor was created, which added all sorts
> of new functions, like ESD's longer than 8 bytes, and optional case
> sensitivity.

It seems to me that PDSE's have a performance problem, especially when
being accessed from different systems at the same time. This may only
be an issue for PDSE's that were frequently updated, the problem being
the necessary cross-system enqueues. Someone told me that it was also
a problem when VSAM control-interval splits were necessary.

My last shop discouraged usage of PDSE's in a number of situations. If
I remember correctly, they were great for control-card and read-only
table applications. We made it a particular point to suggest that they
not be used for LOAD LIBRARIES. I don't remember exactly why, though.
Applications that ignored our "expert's" opinion almost always ran
into severe performance problems under high-volume situations. It
appeared to me that the best thing about a PDSE was that it never had
to be compressed -- and as a result I ignored them completely.

/s/ Bill Turner, wb4alm



Tue, 12 Jul 2005 23:56:48 GMT  
 Card Columns

Quote:

> True. The design is representative of the tendency in those days
> toward *totally* unbuffered I/O.  The Search-ID CCW operation was
> probably the most extreme example of this imaginable.  You'd have
> thought that they could have added a dinky little 5-byte register to
> hold the ID for comparison, but instead required all that extra
> channel overhead to keep transmitting the same little string of
> bytes over the cable over and over again.  That one seemed silly to
> me right from the start.

i did the remote device adapter HYPERchannel support for STL ... as
part of remoting 300 people from bldg.90/stl to bldg.96. Basically,
the A510 emulated a mainframe channel and allowed attachment of
normal control units. you captured the CCW string, downloaded it to
the memory of the A510 and activated it running locally from the
memory of the A510 (however data arguments would be forwarded over the
HYPERchannel network back to an A220 and then to mainframe real
memory).  This worked for almost all controllers/devices except for
CKD because of the severe latency constraints associated with the
search argument. For STL, there were two "local" HYPERChannel networks
in bldg.90 and bldg.96 .... interconnected by a T1 (private microwave)
link. The single T1 link carried all the 3270 "local" channel traffic
plus misc. other local channel devices, with response for the 300
terminals indistinguishable from when they were really locally
attached in bldg.90 (and significantly better than any of the various
networked 3274 options).

It was actually slightly better, since overall system thruput for
dasd got better. The 3274 display controllers actually had fairly
high channel busy times compared to dasd controllers that might move
the same amount of data. Remoting the 3274s at the end of the
HYPERchannel network then caused all the 3274 data traffic to be
driven thru a locally attached HYPERChannel A220. The channel busy
characteristics of the A220 were much closer to a DASD controller
(per byte moved) than the 3274s.  This freed up additional channel
capacity for DASD activity .... and the overall system thruput
increased (by 10-15 percent by moving all the 3274s to a HYPERChannel
network), which translated into improved system response.

I then worked with one of the RETAIN people in Colorado on a similar
implementation when a large number of people were moved to a building
across the highway. In this case the T1 link was provided by a private
infrared modem mounted on the roofs of the two buildings. There was
some concern about transmission quality during rain, fog, or heavy
snow .... however the story was that the only time the link saw any
noticeable bit error rate was during a white-out blizzard when nobody
was able to get in to work. The other story was about keeping the
infrared modems aligned during the day when the heating of the sun
caused one side of the buildings to get taller and tilt the pole that
the modem was attached to.

A similar design was implemented by NCAR (up the road) ... which was
basically a software MSS implementation .... a 4341 provided control
of staging data between tape<->disk and they got an upgraded A515
remote device adapter which allowed for packing off both CCWs and
search arguments into the memory of the A515 (in order to support CKD
disks). They could use dasd controllers that had a "real" channel
interface to the 4341 and a separate channel interface to the
A515s. The 4341 acting sort of like a logical 3850 MSS controller
doing the staging to/from disk. Then lots of other processors (like
non-ibm, crays, etc) could directly access the (CKD) DASD. There was
some extra function put into the A515 so the 4341 could load DASD ccw
strings along with seek/search arguments and permissions. The drivers
on the processors that wanted to directly read/write the data just had
to refer to the appropriate CCW "package" in the memory of the A515.

Similar, but different implementations were going on at LANL, LLNL,
and nasa/ames ... all involving hyperchannel in one way or another.
When we were doing HA/CMP we were interacting with just about all of
the groups, the IEEE MSS standards activity ... and even provided some
of the commercializing efforts funding (and lots of visits) ... aka
LANL->Datatree, LLNL->Unitree, NCAR->Mesa Archival.

a little more thread drift:

The NCAR design also gave rise to the IEEE support for 3rd party
transfers in HiPPI and IPI disk support.

the guy in retain that i worked with in colorado transferred to
kingston in the mid-80s and worked in the high performance computing
lab ...  hooking up a bunch of FPS boxes with a 3090 ... random refs:
http://www.garlic.com/~lynn/2000c.html#5 TF-1
http://www.garlic.com/~lynn/2000c.html#61 TF-1
http://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?
http://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
http://www.garlic.com/~lynn/2002j.html#30 Weird

and our HSDT project paid for a T1 connection between that lab and the
west coast ... driven by HYPERChannel boxes and rfc1044 support that I
had written.

random hsdt stuff (including hyperchannel and rfc1044 support):
http://www.garlic.com/~lynn/subnetwork.html#hsdt

random ha/cmp stuff:
http://www.garlic.com/~lynn/subtopic.html#hacmp

misc. ncar, unitree, datatree, mesa mentions:
http://www.garlic.com/~lynn/99.html#146 Dispute about Internet's origins
http://www.garlic.com/~lynn/2000c.html#78 Free RT monitors/keyboards
http://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems.  Disk history...people forget
http://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems.  Disk history...people forget
http://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
http://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
http://www.garlic.com/~lynn/2001f.html#66 commodity storage servers
http://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001l.html#34 Processor Modes
http://www.garlic.com/~lynn/2002.html#10 index searching
http://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
http://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
http://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
http://www.garlic.com/~lynn/2002k.html#31 general networking is: DEC eNet: was Vnet : Unbelievable

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Wed, 13 Jul 2005 00:20:56 GMT  
 Card Columns



Quote:
>As originally shipped, you needed to boot a "Compatibility
>Initialization Deck", which was shipped as part of the hardware
>feature.

--
     Shmuel (Seymour J.) Metz, SysProg and JOAT

Any unsolicited commercial junk E-mail will be subject to legal
action.  I reserve the right to publicly post or ridicule any
abusive E-mail.

I mangled my E-mail address to foil automated spammers; reply to
domain Patriot dot net user shmuel+news to contact me.  Do not



Wed, 13 Jul 2005 02:09:57 GMT  
 Card Columns


Quote:
>I was no longer working in the field when the 3850 Mass Storage
>System, with its honeycomb array of phallic-shaped tape cartridges,
>appeared. There must have been some really choice names for *that*
>one among CE's.  Anybody heard about that?

I don't know what the IBM employees called it among themselves, but
they were not amused when I persistently referred to the MSS as "the
new data cell".

--
     Shmuel (Seymour J.) Metz, SysProg and JOAT




Wed, 13 Jul 2005 02:03:19 GMT  
 Card Columns


Quote:
>The full EBCDIC character set, containing all the characters used in
>the PL/I language, consisted of 60 glyphs.

That wasn't the full EBCDIC character set.

Quote:
>There was a certain pattern of characters that would cause all 132
>print hammers to fire during one full buffer-scan. If you hit the
>printer with this same pattern on about a half-dozen successive
>print lines (with a print-no-space command), it would severely
>stress the mechanism, and would probably break the chain on a chain
>printer.

Not on our machine it didn't, although I've heard of the chain
breaking at other sites. Fairly easy to construct using the
information in 1401 Data Flow. Printing that line caused a rather
interesting sound, and seemed to never actually print the same line
twice.

Quote:
>It was kept a secret
>from college students whenever possible.

How? The manual was general availability.

--
     Shmuel (Seymour J.) Metz, SysProg and JOAT




Wed, 13 Jul 2005 01:57:56 GMT  
 360/370 disk drives



Quote:
> I then worked with one of the RETAIN people in Colorado on a similar
> implementation when a large number of people were moved to a building
> across the highway. In this case the T1 link was provided by a private
> infrared modem mounted on the roofs of the two buildings. There was
> some concern about transmission quality during rain, fog, or heavy
> snow .... however the story was that the only time the link saw any
> noticeable bit error rate was during a white-out blizzard when nobody
> was able to get in to work. The other story was about keeping the
> infrared modems aligned during the day when the heating of the sun
> caused one side of the buildings to get taller and tilt the pole that
> the modem was attached to.

Yes, I was one of the people who had moved to the other building,
so my 3270 connection was via that HyperChannel hookup. The main
computer center was on 28th Street and our group of developers
was on 30th. The infrared link was the best way to get a
high-speed connection between the two buildings.  As you say, the
thing worked surprisingly well, even on rainy days, and in
moderately heavy snowstorms.

We moved out of both of those buildings sometime around '85 or
'86, and finally got into the plant site outside of Boulder. The
building that housed the computer center is now a strip mall, and
the other development lab became the Boulder School of Massage
Therapy. I believe the "tilting pole" to which the IR transceiver
was attached is still there. I'm currently working about a mile
from there.

I can't remember who all worked on getting us set up with the
HyperChannel thing. I was working on a RETAIN re-architecture
project at the time. Our terminal connection work was being done
by folks in the Operations function. I hadn't realized you'd been
involved.



Wed, 13 Jul 2005 03:10:50 GMT  
 360/370 disk drives

Quote:
> Similar, but different implementations were going on at LANL, LLNL,
> and nasa/ames ... all involving hyperchannel in one way or another.
> When we were doing HA/CMP were were interacting with just about all of
> the groups, the IEEE MSS standards activity ... and even provided some
> of the commercializing efforts funding (and lots of visits) ... aka
> LANL->Datatree, LLNL->Unitree, NCAR->Mesa Archival.

this was also related to the 3-tier architecture that we proposed ...
which caused a lot of heartburn among the saa crowd.
http://www.garlic.com/~lynn/20003.html#45 IBM's Workplace OS (Was: .. Pink)

also
http://www.garlic.com/~lynn/subtopic.html#3tier

some of the characteristics can be seen today in all the SAN stuff.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Wed, 13 Jul 2005 03:23:29 GMT  
 360/370 disk drives

Quote:

> I can't remember who all worked on getting us set up with the
> HyperChannel thing. I was working on a RETAIN re-architecture
> project at the time. Our terminal connection work was being done
> by folks in the Operations function. I hadn't realized you'd been
> involved.

I wrote all the software support and did a lot of debugging at the STL
location before it was cloned in boulder.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Wed, 13 Jul 2005 03:25:41 GMT  
 