360/370 disk drives 
 360/370 disk drives



(snip)

Quote:
>There was a certain pattern of characters that would cause all 132
>print hammers to fire during one full buffer-scan. If you hit the
>printer with this same pattern on about a half-dozen successive
>print lines (with a print-no-space command), it would severely
>stress the mechanism, and would probably break the chain on a
>chain printer.

As far as I know, the spacing between the hammers is different than
the spacing between characters on the chain/train so that no
pattern fires all at the same time.  Among other reasons, this
allows sharing of the logic to decide which hammer to fire.   There
would still be patterns that would fire many in close succession,
though.

-- glen



Wed, 13 Jul 2005 03:36:52 GMT  
 360/370 disk drives

Quote:


>>There was a certain pattern of characters that would cause all 132
>>print hammers to fire during one full buffer-scan. If you hit the
>>printer with this same pattern on about a half-dozen successive
>>print lines (with a print-no-space command), it would severely
>>stress the mechanism, and would probably break the chain on a
>>chain printer.
>As far as I know, the spacing between the hammers is different than
>the spacing between characters on the chain/train so that no
>pattern fires all at the same time.  Among other reasons, this
>allows sharing of the logic to decide which hammer to fire.   There
>would still be patterns that would fire many in close succession,
>though.

For the 1403, with 132 print positions, that would have to be the case.

Consider that a 1416 train (or whatever the machine type was for the
earlier print chain) has a total of 240 print positions.  That means
that at any instant only 120 characters are oriented towards the
front of the cartridge, and even if you ignore the fact that a couple
of slugs are still on the curved part of the path at each end, there
is no way for there to be a glyph opposite every print hammer at
the same time.

I don't know the exact glyph spacing (and never did) but I would
guess it to be about 1/8", vs. the 1/10" spacing of the hammers.
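
For what it's worth, here's the back-of-the-envelope arithmetic (using
only the figures above; the 1/8" glyph pitch is just the guess):

# Rough arithmetic for the claim above: 240 positions on the train, only
# ~120 of them facing the print line, 132 hammers on 0.1" centers, and a
# guessed 1/8" glyph pitch.

HAMMERS      = 132
HAMMER_PITCH = 0.100                    # inches between print positions
PRINT_LINE   = HAMMERS * HAMMER_PITCH   # 13.2 inches of print line
GLYPH_PITCH  = 0.125                    # the 1/8" guess above

glyphs_spanning_line = int(PRINT_LINE / GLYPH_PITCH)
glyphs_facing_front  = 240 // 2

print(f"print line width:            {PRINT_LINE:.1f} in.")
print(f"glyphs spanning that width:  {glyphs_spanning_line}")   # ~105 < 132 hammers
print(f"glyphs facing the front:     {glyphs_facing_front}")    # 120 < 132 hammers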

Joe Morris



Wed, 13 Jul 2005 05:15:19 GMT  
 360/370 disk drives

Quote:

> By the way, just before I left IBM, I understand that there was a
> new type of dataset in MVS called a PDSE (like PDS "Enhanced" or
> some such). I never got a chance to play with them because the
> project I was working on got cancelled, and the system programmer
> hadn't installed the necessary software to make PDSE's work.

> So I don't know if they made PDSE's compatible with the old-style
> PDS, or if you could use a PDSE to hold any of the old SYS1...
> datasets. Know anything about that?

We don't use 'em, but from what I know they're API compatible with PDS's
(i.e. BLDL and STOW still work the same).  I believe they're implemented
as some sort of VSAM thingy.  They have a lot of advantages over PDS's:
I believe no compression is necessary, there are member-level ENQ's
(there's a PDSE address space that handles all accesses), and they
support the long alias names needed for C functions.  The system still
comes on PDS's.


Wed, 13 Jul 2005 06:01:09 GMT  
 360/370 disk drives

Quote:

> There was a VSAM KSDS business with the high level index that
> replicated the index record around a full track that used this
> "trick."  I wish I could remember what they called it.

Duh, "REPLICATE".


Wed, 13 Jul 2005 06:03:46 GMT  
 360/370 disk drives


Quote:
> It could make a big difference for sequential reads.  Sometime around
> 1985 or 1986 I used this little trick to measure the time from
> interrupt to redrive on MVS/370 and 3330s.  I found it usually took about
> 25% of a rotation!  So, instead of taking 2 revolutions to sequentially
> read a full track, because you always lost a full rotation (the 25%
> redrive delay plus the remainder of the rotation waiting for R1 to
> come around again), it took about 1 1/4 rotations.
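
a quick back-of-the-envelope check of those figures ... assuming a
3,600 rpm 3330 (about 16.7 ms per revolution) and reading "the trick"
as letting the transfer start without waiting for R1:

# back-of-the-envelope only; the 3,600 rpm / ~16.7 ms per revolution
# rotation rate for the 3330 is my assumption here.

REV_MS  = 60_000 / 3_600        # one revolution in milliseconds (~16.7 ms)
REDRIVE = 0.25                  # measured interrupt-to-redrive, in revolutions

# naive sequential full-track read: miss R1 by the redrive delay, wait out
# the rest of the revolution for R1, then spend one revolution transferring
naive_revs   = REDRIVE + (1 - REDRIVE) + 1.0    # = 2.00 revolutions

# with a start-anywhere trick, only the redrive delay itself is lost
tricked_revs = REDRIVE + 1.0                    # = 1.25 revolutions

for label, revs in (("naive", naive_revs), ("with trick", tricked_revs)):
    print(f"{label:10s}: {revs:.2f} revs  ~{revs * REV_MS:5.1f} ms per track")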

... start digression ...

so one of the things that i had done in the early '70s on cp/67 was a
number of memory mapped functions. this (with the help of two BU
work/study students) I ported to vm/370 and extended.

this was sort of a follow-on to the CMS changes I had done as an
undergraduate to optimize the cp kernel pathlength supporting CMS dasd
operations. this was translated into "diagnose i/o" (in large part at
the insistence of bob adair):
http://www.garlic.com/~lynn/2002c.html#44 cp/67 (cross-post warning)
http://www.garlic.com/~lynn/2003.html#60 MIDAS

low-level cms filesystem functions were translated to the memory
mapped api. this allowed cms to continue to use the buffer semantics
paradigm (translated to page-mapped buffers) or do full
laid-out memory mapping (say like in program loading)
http://www.garlic.com/~lynn/subtopic.html#mmap

even with applications that continued to use buffer paradigm
semantics, there was still some thruput improvement because the
"diagnose i/o" paradigm still had the flavor of real i/o operations
.... requiring the virtual->real ccw translation (although
significantly lower pathlength than the generalized SIO ccw
translation) as well as virtual page pre-fixing overhead prior to
scheduling the real i/o. because the filesystem was dealing directly
with a page mapped api (even when using buffered i/o semantics), a lot
of the page pre-fixing/unfixing, virtual->real, and other overhead was
eliminated.
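
a rough modern analogy of the two paradigms (python's mmap here,
nothing to do with the actual cms/cp code): explicit buffer reads
versus mapping the file and letting the paging machinery bring the
data in:

# modern analogy only ... python's mmap standing in for the page-mapped
# api; none of this is the cms/cp-67 implementation.
import mmap

def buffered_read(path, bufsize=4096):
    """buffer semantics: every chunk is an explicit read into a buffer."""
    data = bytearray()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            data += chunk
    return bytes(data)

def mapped_read(path):
    """memory-mapped semantics: no per-chunk i/o calls from the application;
    pages are brought in as the mapping is touched."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            return m[:]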

furthermore given that it was a page mapped semantics ... things like
program loading could include options as to whether the mapping
allowed sharing of segments across multiple different virtual address
spaces.

for vm/370 release 3 ... a subset of the cp kernel changes were
incorporated as "discontiguous shared segments" ... and the
non-filesystem CMS changes were pretty much all incorporated (rewrite
of various applications to reside in r/o shared storage). random
stuff:
http://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2002o.html#25 Early computer games

the full memory map support included the capability that anybody could
create and generate executable code and/or data that was definable as
shareable ...  both from dasd resident standpoint as well as memory
resident standpoint .... w/o requiring any system privileges.

hone was one of the internal sites that made extensive use of the
support ... in part because they were being really stressed to provide
lots of advanced feature support to all the branch & field people in
the world:
http://www.garlic.com/~lynn/subtopic.html#hone

having created the memory-mapped api abstraction ... then underneath
the api, the kernel could do all sorts of local & dynamic adaptive
implementation tricks .... if things were heavily real storage
constrained ...  the mapping didn't have to perform any operation
other than updating the virtual memory tables (whereas the real i/o
simulation code had to prefetch & lock affected virtual pages,
translate virtual->real i/o, schedule and perform real i/o, unfix and
possibly remove from storage the virtual pages involved). under zero
real storage constraints the implementation could switch to start
prefetch of all indicated page records and immediately return to the
process starting execution. the asynchronicity semantics would be
handled (transparently) by the virtual memory infrastructure as to
whether a page was available or not at the moment it was needed.
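
a schematic sketch of that adaptive choice (hypothetical names, not
the cp kernel code):

# schematic only ... hypothetical names, not the cp kernel implementation.

def map_file(vm_tables, extents, storage_constrained, start_prefetch):
    """make a file's disk records addressable as virtual pages, then decide
    how eagerly to bring them in."""
    # always: just update the virtual memory tables ... each page is marked
    # as "resident on dasd"; nothing is fixed, translated, or scheduled yet
    for vpage, disk_slot in extents:
        vm_tables[vpage] = disk_slot

    if not storage_constrained:
        # plenty of real storage: start asynchronous prefetch of every page
        # and return immediately; a reference to a page still in flight is
        # handled transparently by the ordinary page-fault path
        for vpage, disk_slot in extents:
            start_prefetch(vpage, disk_slot)
    # constrained case: do nothing more; pages fault in one by one on demand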

.... suspend digression ...

so one demonstration was to have an application executable file out on
3330 that occupied a full 3330 cylinder ... and be able (in
no-contention scenarios) to fetch the complete file in 19 revolutions
(this also required a new low level cms filesystem function that would
attempt to perform contiguous allocation ... and modifying the program
executable file generation code to specify that contiguous allocation
was desired). later was also able to demonstrate a similar capability
on a 3380 cylinder (complete cylinder of data transfer w/o loss of
revolution). the implementation could dynamically adapt to sub
full-track transfers, a sequence of single full-track transfers, or
multiple full-track transfers (up to a full cylinder).
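
the arithmetic behind the 19-revolution demo (19 tracks per 3330
cylinder from the demo itself; the 3,600 rpm rotation rate is my
assumption):

# 19 tracks per 3330 cylinder is from the demo above; 3,600 rpm (~16.7 ms
# per revolution) is my assumption.

TRACKS_PER_CYL = 19
REV_MS = 60_000 / 3_600

contiguous = TRACKS_PER_CYL * 1.0     # one revolution per track, nothing lost
naive      = TRACKS_PER_CYL * 2.0     # lose roughly a revolution per track

print(f"contiguous, no lost revolutions: {contiguous:.0f} revs ~ {contiguous * REV_MS:.0f} ms")
print(f"track-at-a-time with lost revs:  {naive:.0f} revs ~ {naive * REV_MS:.0f} ms")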

.... resume digression ...

so another application of this was in the cp kernel spool file system
support. this was motivated by a number of things ... but one issue
was in hsdt
http://www.garlic.com/~lynn/subtopic.html#hsdt

the use of the spool file system by the networking support. a basic
problem was that the networking address space was serialized at every
4k bytes of data transferred (read or written).

so one idea was to eliminate this serialization ... providing the
effect of both read-ahead and write-behind. also for data resident in
the spool file system that was being pushed out a network connection
...  doing it in units of 4k bytes ... where the networking support
code could use an asynchronous interface to provide the mapping of
virtual storage to disk locations ... and then let the virtual->real
i/o translation handle the serialization without serializing the
network application code. The trick was to have the networking code do
a memory-mapped API operation with non-serialization and then use a
SIOF (start i/o fast) instruction to initiate the network i/o operation
(aka the address space would get immediate return while allowing the
virtual->real translation to proceed asynchronously).
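
a schematic modern analogy of the write-behind side of that (a thread
and a queue here, obviously not the vm/370 spool/SIOF machinery):

# modern analogy only ... a thread and a queue standing in for write-behind;
# not the vm/370 spool/siof implementation.
import queue
import threading

class WriteBehind:
    """hand 4k blocks to a background writer instead of waiting on each one."""

    def __init__(self, write_block):
        self._q = queue.Queue()
        self._write = write_block          # callable that writes one block
        threading.Thread(target=self._drain, daemon=True).start()

    def put(self, block):
        self._q.put(block)                 # the producing task returns at once

    def _drain(self):
        while True:
            block = self._q.get()
            self._write(block)             # the per-block serialization lives
            self._q.task_done()            # here, not in the producing task

    def flush(self):
        self._q.join()                     # wait only when it actually matters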

that was just one of the identified shortcomings .... if a major
effort to clean up the spool file system was going to be undertaken,
might as well look at other issues:

1) assembler code in the kernel. redo it in Pascal/vs resident in a
virtual address space.

2) linear lists of spool files. for large installations this was a cpu
killer. in the pascal/vs implementation replace the linear lists with a
red/black tree implementation ... this change more than compensated
for the overhead of moving the code from assembler in the kernel to
pascal/vs in a virtual address space (see the sketch after this list)

3) leverage the memory-mapped api for the dasd interface

4) the general enabling of system functions didn't happen at boot/ipl
until after the spool file system was initialized. moving the function
to a virtual address space required decoupling the tightly coupled
system startup from spool file system initialization

5) if the system was shut down or crashed cleanly (aka a system crash
would attempt to perform at least a warm-start save of the spool file
information), the spool file system could come up quickly with the
warm-start saved data (which also meant that the system came back up
quickly). if the system failed in a non-clean mode (like power loss),
a checkpoint start needed to be performed ... which could easily take
30-60 minutes on a large configuration. decoupling (#4) removed spool
file initialization from the critical path of general system
availability restart.
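
a sketch of the point in item #2 (binary search over a sorted list
standing in for the red/black tree, and python standing in for
pascal/vs; the lookup cost is the point):

# stand-in sketch for item #2 ... a sorted list plus binary search rather
# than an actual red/black tree, and python rather than pascal/vs.
import bisect

class SpoolIndex:
    def __init__(self):
        self._keys = []                    # kept sorted
        self._files = []

    def add(self, key, spoolfile):
        i = bisect.bisect_left(self._keys, key)
        self._keys.insert(i, key)
        self._files.insert(i, spoolfile)

    def find(self, key):
        # O(log n) lookup ... versus walking a linear list of every spool
        # file in the installation, which was the cpu killer described above
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._files[i]
        return None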

.... suspend digression ...

one demonstration was to show a redesign of the checkpoint restart
implementation using the extended memory-mapped API ... reading a 3380
cylinder of data with no lost dasd revolutions. this resulted in a
typical worst-case checkpoint recovery of a couple of minutes (down
from sixty), which was also decoupled from general availability of the
system.

the other demonstration was to support contiguous allocation for new
(large) spoolfiles (with multi-block write behind) ... greatly
improving creation elapsed time as well as subsequent retrieval (with
multi-block read)

... resume digression ...

as a teaser ... i implemented a spoolfile<->tape application (in
pascal/vs) that could be run on any unmodified vm/370 system. This
used lots of the spool file pascal/vs library code that i had written
for the full project. basically, given r/o access to the full-pack dasd
areas containing the (kernel) spoolfile data .... it would perform a
spoolfile->tape backup (which resulted in the same format tape as the
kernel spoolfile->tape command would generate). It also supported
tape->spoolfile, effectively using the (standard system) spoolfile
diagnose interface.

past sfs posts:
http://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan

... end post ...

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Wed, 13 Jul 2005 06:28:55 GMT  
 360/370 disk drives

Quote:

> I can't remember who all worked on getting us set up with the
> HyperChannel thing. I was working on a RETAIN re-architecture
> project at the time. Our terminal connection work was being done
> by folks in the Operations function. I hadn't realized you'd been
> involved.

do you recognize:
bldfe2(b580556), tie 646-2373/646-2361
later
kgnvm5(vmqr), tie 373-1894

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm



Wed, 13 Jul 2005 06:54:44 GMT  
 360/370 disk drives

Quote:
> >As far as I know, the spacing between the hammers is different than
> >the spacing between characters on the chain/train so that no
> >pattern fires all at the same time.  Among other reasons, this
> >allows sharing of the logic to decide which hammer to fire.   There
> >would still be patterns that would fire many in close succession,
> >though.

I guess you guys didn't read my post very carefully. I said that
all the hammers would fire during the "same print scan", which
means one full cycle of comparing all the print positions to the
next-available type slug for each (a different slug for each, in
most cases.) Some of the slugs actually did get hit by two
adjacent hammers.  The hammers didn't fire simultaneously, but
very nearly so, only 5 microseconds between successive firings.
For all practical purposes, that was close enough to slam the chain or
train pretty hard, and would slow it down.

Quote:
> For the 1403, with 132 print positions, that would have to be the case.

> Consider that a 1416 train (or whatever the machine type was for the
> earlier print chain) has a total of 240 print positions.  That means
> that at any instant only 120 characters are oriented towards the
> front of the cartridge, and even if you ignore the fact that a couple
> of slugs are still on the curved part of the path at each end, there
> is no way for there to be a glyph opposite every print hammer at
> the same time.

> I don't know the exact glyph spacing (and never did) but I would
> guess it to be about 1/8", vs. the 1/10" spacing of the hammers.

Exactly 0.1505 of an inch.

During a print scan, the printer was checking every THIRD print
position with every SECOND glyph on the train. Hammers three
positions apart were spaced at .300". Glyphs spaced two apart
were at intervals of .301". In five microseconds, the train moved
.001 inch, which is why the spacing was .1505" instead of .1500".
It was to make up for the continuous movement of the train.
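
Checking those numbers (the 200 in./sec train speed is mentioned
further down in this post):

# Checking the spacing arithmetic; the 200 in./sec train speed is quoted
# later in this post.

GLYPH_PITCH  = 0.1505      # inches between glyphs on the train
HAMMER_PITCH = 0.100       # inches between print positions
TRAIN_SPEED  = 200.0       # inches per second
FIRE_GAP_S   = 5e-6        # seconds between successive hammer firings

print(f"glyphs two apart:     {2 * GLYPH_PITCH:.4f} in.")            # 0.3010
print(f"hammers three apart:  {3 * HAMMER_PITCH:.4f} in.")           # 0.3000
print(f"train travel in 5us:  {TRAIN_SPEED * FIRE_GAP_S:.4f} in.")   # 0.0010
# The .001" of train travel in 5 microseconds is exactly the .301 - .300
# difference, which is why the glyph pitch is .1505" rather than .1500".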

A full print-scan consisted of three "subscans". The first was for
positions 1,4,7,10,...,127,130, the next for 2,5,8,11,...,128,131,
and the third for 3,6,9,12,...,129,132. Each subscan took 220
microseconds, during which time the train moved .044". When the train
had moved another .0065" the next subscan would begin. After three
subscans, each print position had been checked against one of the
character codes and "optioned" to print (if the codes matched). Then
the cycle would repeat, because a new glyph was coming into position
for each of the hammers. (Not directly in front, of course; the
hammer firing was timed to put it on a "collision course" with the
glyph.) If the spacing between the hammer and the magnet armature was
wrong, the timing would be off, and the character would be displaced
to the right (hit too early) or to the left (hit too late). That's
why they were adjustable.  The hammers, by the way, are behind the
paper, and hit the page from behind. The slugs don't move toward the
paper, in case you didn't know that.
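
Here's the subscan schedule and timing worked out from those figures:

# Subscan schedule and timing check, using the figures in this post
# (220 microseconds per subscan, 200 in./sec train speed).

SUBSCAN_US  = 220.0
TRAIN_SPEED = 200.0        # inches per second

subscans = [list(range(start, 133, 3)) for start in (1, 2, 3)]
for n, positions in enumerate(subscans, 1):
    print(f"subscan {n}: {positions[0]},{positions[1]},{positions[2]},"
          f"...,{positions[-1]}  ({len(positions)} positions, "
          f"{SUBSCAN_US / len(positions):.0f} us per position)")

print(f"train travel per subscan: {TRAIN_SPEED * SUBSCAN_US * 1e-6:.3f} in.")
# 44 positions at 5 us each accounts for the 220 us subscan; the train
# moves .044" during each one.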

The spacing of the glyphs relative to the hammers provided just
enough time to check them all as the train whizzed by at 200
in./sec. The first time the right glyph moved in front of each
hammer, it was tested and fired at just the right instant.

The brilliant choreography of the whole printing cycle always
impressed me.

The timing of each subscan, by the way, was determined by a
magnetic emitter attached to the drive shaft that drove the
train. This emitter was timed to synchronize with the train.



Wed, 13 Jul 2005 08:52:53 GMT  
 360/370 disk drives



Quote:
> do you recognize:
> bldfe2(b580556), tie 646-2373/646-2361
> later
> kgnvm5(vmqr), tie 373-1894

That was certainly in the format of userid's at our site.
I was bldfe2(b344736). Don't recall my phone number... I had
so many in those days. Did you know Jeff Hill?


Wed, 13 Jul 2005 08:58:48 GMT  
 