ASM370 and COBOL folks - where they are... 
 ASM370 and COBOL folks - where they are...

I agree with you Thane, Steve has been an asset to the Y2K group.

Eduardo Garcia
Austin, Texas

Quote:

>WOAH there Richard.  Steve the Recruiter cannot be described as a SPAMMER.
>He has contributed in great measure to this news group in DISCUSSION.  He
>actually READS the news group and makes good intelligent comments.  





>> > : > Steve the Recruiter interrupts...
>> > : > Help me understand something. I advertise (some would say spam) for
>> > : > Cobol programmers on all the employment newsgroups. In the last 3 months
>> > : > I've only gotten ONE person who was interested in programming Cobol and

>> > Simple supply and demand.
>> > Your bid price is not greater than their interest price.
>> > Try bidding a number or a higher number.

>> Did it ever occur to you that no one wants to work for a spammer?
>> --
>>                ^^^^^
>>               ( o o )        
>> =========o000===(_)===000o=========
>> http://www.*-*-*.com/ ~davicomp



Wed, 06 Oct 1999 03:00:00 GMT  
 ASM370 and COBOL folks - where they are...

Quote:



> > > American Stores has been running ads for everything from systems
> > > programmers to programmer/analyst.  They're trying to swipe anyone they
> > > can.  No salaries listed.  They have lured people from state government

> > Aha!  This may explain the recent increase in the *.jobs and *.ads
> > groups from Utah.  I was wondering what companies up there are in need
> > of mainframe skills...

> > Regards,
> > Doug McKibbin

> Unfortunately, most of the ads in the local newspaper are for
> non-mainframe/Cobol positions.  For Utah's size, there has always been a
> fairly large mainframe-based usage here.  There must be a lot of
> placement through private agencies or work through contractors these
> days.  I have heard of individuals being gobbled up by banks.

> Mike Dodas

Could you tell me which local papers are listing the non-mainframe/Cobol
positions?  I am very interested in this area.

Charles W.



Wed, 06 Oct 1999 03:00:00 GMT  
 ASM370 and COBOL folks - where they are...


 >There are limits on how far the read-ahead will go, so sometimes the
 >pool is not filled completely. The most immediate problems in this area
 >are CI and CA splits. The read-ahead will not traverse a split CI, thus
 >requiring another physical I/O to complete the reading of the CA. Also,
 >the read-ahead will not cross a CA boundary, so physical reads that
 >start part way through a CA will only read to the CA boundary.

Is this to say that read-ahead will not function when CIs are physically
"out of order"? And are the reads done by assembling a large channel
program, or are they simply scheduled asynchronously and independently?

 >By default, every PUT to a LSR pool produces a physical I/O, even if
 >the CI is to be updated again. There is an option to defer physical
 >writes until the buffer is to be stolen or the ACB closed. This can
 >reduce the number of EXCP's required by random update programs.

Will RTM ask VSAM to flush the buffers should a program exception occur?

 >For predominantly sequential processing NSR buffering is preferred.
 >For predominantly direct processing LSR buffering is preferred.

Does VSAM select a buffering technique based on the access types (as I
recall) specified in the ACB?



Wed, 06 Oct 1999 03:00:00 GMT  
 ASM370 and COBOL folks - where they are...

Quote:

>positioning. The NSR buffer management also performs no lookasides on the
>index buffers, which causes index reads even when the index CI's might
>already be in the resource pool.

NSR buffer pool management (MVS; I don't know about VSE) reserves one
buffer for the high level index CI when extra index buffers are available.
NSR also does lookasides for "mid level" index CIs (again, when extra index
buffers are allocated), but never for the sequence set CI.
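
For anyone following along, a rough sketch of how those extra index buffers
get requested on the ACB.  The DD name and buffer counts below are invented,
so treat it as illustration only:

* Sketch only: MASTKSDS and the buffer counts are invented values.
* BUFNI=1 would cover just the sequence set; the extra index buffers
* are what give NSR room to keep the high-level and mid-level index
* CIs around for lookaside.
MASTACB  ACB   AM=VSAM,DDNAME=MASTKSDS,                                X
               MACRF=(KEY,SEQ,IN,NSR),                                 X
               BUFND=5,BUFNI=3
         OPEN  (MASTACB)

The same counts can also be supplied at run time through the AMP parameter
on the DD statement, if I remember the JCL correctly, without touching the
program.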

--
Michael Quinlan

http://www.primenet.com/~mikeq



Wed, 06 Oct 1999 03:00:00 GMT  
 ASM370 and COBOL folks - where they are...

On Sunday, 97/04/20, Verne Arase wrote to All about "Vsam I/O
Performance" as follows:

VA>  >There are limits on how far the read-ahead will go, so sometimes the
VA>  >pool is not filled completely. The most immediate problems in this area
VA>  >are CI and CA splits. The read-ahead will not traverse a split CI, thus
VA>  >requiring another physical I/O to complete the reading of the CA. Also,
VA>  >the read-ahead will not cross a CA boundary, so physical reads that
VA>  >start part way through a CA will only read to the CA boundary.
VA>
VA> Is this to say that read-ahead will not function when CIs are
VA> physically "out of order"?

Basically, yes (or perhaps no). The CI's will appear in physical
sequence matching key sequence, within a CA, when no splits have
occurred. I'm not sure why IBM chose to return to the sequence set
whenever a split CI is encountered, but that's what happens. I don't
think it is a function of physical placement on DASD, but the
possibility of a CA split also having occurred might require a change
of sequence set CI.

VA> And are the reads done by assembling a
VA> large channel program, or are they simply scheduled asynchronously
VA> and independently?

For a single RPL it is one long channel program.

VA>  >By default, every PUT to a LSR pool produces a physical I/O, even if
VA>  >the CI is to be updated again. There is an option to defer physical
VA>  >writes until the buffer is to be stolen or the ACB closed. This can
VA>  >reduce the number of EXCP's required by random update programs.
VA>
VA> Will RTM ask VSAM to flush the buffers should a program exception
VA> occur?

Yes, if the ACB is still around. It does an SVC 20 (CLOSE) on it.

VA>  >For predominantly sequential processing NSR buffering is preferred.
VA>  >For predominantly direct processing LSR buffering is preferred.
VA>
VA> Does VSAM select a buffering technique based on the access types
VA> (as I recall) specified in the ACB?

No. You have to code it in the ACB yourself. [Or use BLSR, or some
other buffering tool.] The default is NSR.
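
Coding it yourself comes down to building the pool and then pointing the
ACB at it.  The sketch below is only a rough outline - the DD name, pool
geometry, and key length are invented, and all the error checking is left
out:

* Rough sketch only: the DD name, pool sizes, and key length are
* invented, and error checking is omitted.
*
* Build a local shared resource pool before OPEN:
         BLDVRP BUFFERS=(4096(20),2048(10)),KEYLEN=16,STRNO=4,TYPE=LSR
*
* Connect the ACB to the pool; DFR asks for deferred writes:
UPDACB   ACB   AM=VSAM,DDNAME=MASTKSDS,                                X
               MACRF=(KEY,DIR,OUT,LSR,DFR)
         OPEN  (UPDACB)
*        ... GETs and PUTs for update against an RPL go here ...
         CLOSE (UPDACB)            deferred buffers are written here
         DLVRP TYPE=LSR            then the pool can be deleted

BLSR gets you to much the same place without changing the program, by
overriding the buffering from the JCL.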

Regards

Dave
<Team PL/I>

 * KWQ/2 1.2i * Chuck & Di - What you do after a bad sausage.
--
Please remove the '$' in the from line before replying via email.
Anti-UCE filter in operation.



Thu, 07 Oct 1999 03:00:00 GMT  
 ASM370 and COBOL folks - where they are...

With respect to the discussion of using VSAM and getting good
performance:

I would like to ask a general question, since some parts of our
application inventory use VSAM, and we may have to touch them for Y2K
repair purposes, or even take over their long term maintenance.

I have little experience with VSAM, except that I tried it out from
COBOL a few years ago.  We were trying to figure out whether to use an
IMS D/B, DB2, VSAM, or Sequential Data Sets for storing a LARGE amount
of financial data.  We eventually settled on QSAM Variable length
records, and time appears to have shown this was the correct (low-cost)
choice.

My "study" of VSAM involved observing EXCP counts (which was how we
billed) when processing file actions, especially updates and insertions,
since the application would have:
1. inception to date records which would get updated monthly,
2. month to date records which would get created at the start of the
month, then updated weekly (last month's records would be dropped at the
start of a new month), and
3. weekly records, which would be inserted each week, while the previous
week's records were deleted.

This study did not take into account how the data might be used outside
of the "master file update" process.  However, over time, it appears
that
1. the full set of data is used for reporting purposes, and
2. selected subsets of the data are used for ad-hoc purposes.

Using COBOL I/O statements resulted in huge EXCP counts, indicating that
blocks (CI's?) were being written and reread for every insert.  My
reading told me I could write in Assembler and avoid such behavior, and
reduce I/O (EXCP counts) by as much as 90%+.  (I did not code any
routines to prove this, however.)

Finally, the question:
Do most shops which use VSAM heavily write Assembler VSAM I/O modules?
Should they?  Can this sort of routine be generalized, or must it be
specific?  Do you know of any sources for more detailed information on
this subject?

TIA,
Colin Campbell



Fri, 08 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

Quote:




> >> > American Stores has been running ads for everything from systems
> >> > programmers to programmer/analyst.  They're trying to swipe anyone they
> >> > can.  No salaries listed.  They have lured people from state government

> >> Aha!  This may explain the recent increase in the *.jobs and *.ads
> >> groups from Utah.  I was wondering what companies up there are in need
> >> of mainframe skills...

> >> Regards,
> >> Doug McKibbin

> >Unfortunately, most of the ads in the local newspaper are for
> >non-mainframe/Cobol positions.  For Utah's size, there has always been a
> >fairly large mainframe-based usage here.  There must be a lot of
> >placement through private agencies or work through contractors these
> >days.  I have heard of individuals being gobbled up by banks.

> >Mike Dodas

> 'non-mainframe/Cobol positions': there is a reason for that.

> If companies fix their legacy products they will get nothing in return
> and all the expenses will be lost. Many IT shops have been sold on the
> idea of moving to high technology in order to get something back, so
> they are implementing client/server to get rid of the mainframe. It's
> not a bad idea, but is this the right time?

> You can see many ads requesting Sybase, Powerbuilder, Visual Basic,
> Oracle, Access, etc. to fill these positions.

> Going back to moving to c/s. It is too late to think about doing the
> move. If users are not in the middle of the effort, chances are they
> are not going to make it. It is a risk they are taking.

> They are changing applications that have been customized to the
> company needs through the years for packages that will not do all
> their functions. They are going from one change control to several
> isolated units including individual hard disks. Instead of testing
> date related applications, they have to test the entire system. The
> list can go on, but this is the decision taken by those senior executives
> who denied the year 2000 problem for a long time.

> This move will take longer than they think and some will end up with
> a system that doesn't work and a mainframe that is useless. Isn't the
> $4 billion IRS failure a lesson?

> Eduardo Garcia
> Austin, Texas

I agree with your observations for the most part.  Now is not
the time to rewrite applications, particularly for medium to large
organizations.  Depending on the organization, small ones could pose a
problem.  As for client/server, enough time and activity has occurred in
this area to show that the success rate is generally low and the costs
are high.  Again, particularly in larger organizations.  From what I
have observed, it is more complicated to get the same results, less
scalable, less reliable, less secure and has not lived up to the hype.

Ever asked someone what client/server is?  I love the dumb looks when I
ask for a definition.  The look on their face is generally, "gees, it
just is because the magazine says so".  Sad thing is, even with Y2K
racing wildly towards us, many organizations are proceeding down this
path, nevertheless.  I'm witnessing it in my own organization.  There's no
balance anymore--fitting the right tools and approach to the task.

In Utah, the current elected, political administration thinks the sun
rises on this technology and the Internet.  They believe it is the
answer to most of our educational, technical and business needs.  I've
almost come to the conclusion that a few of our elected officials are
either techno-wanna-be's or closet politicians.  At any rate, it could
distract or blur the Y2K efforts.  As in any other state government,
Utah's primary functions and services are mainframe-based.
Unfortunately, the staff that maintains those systems doesn't exist in the
eyes of certain people in charge.  Client/Server and the Internet do,
however.  They have no qualms about hiring people with zero experience in
the PC arena at the same rates of pay as they do for mainframe staff.  I
fully expect many of these people to be lured away to the
private sector in the near future, particularly the younger people with
no vested interests.

The $4 billion IRS fiasco is probably one of the best examples of why
client/server won't/can't work with enormous systems.  Sad thing with
IRS is that if they had spent the time to fix their systems instead of
wasting time on a project that was bound to fail, they might have fixed
their Y2K problem--and for a lot less than $4 billion.  I think an
important lesson learned here is that you can't undo/redo large systems
that took years to develop and stabilize.  This is obviously why
rewriting at this stage of the game is very risky.

Until companies rid themselves of the people in charge who promote
technology based on a whim with no professionally-paid experience to
back up their decisions, nothing will change.  In this case, companies
deserve what they get from this.  All I know is that the information
systems are valuable and they are a lot of damn hard work to produce,
maintain and protect.  If this isn't enough, then I don't know what else
to say.  I believe a lot of Y2K is a result of this.  And now, there
aren't enough of us available to fix it.

Oops, it's time for a cup of coffee and a melatonin tablet.

Mike Dodas



Fri, 08 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.


Quote:
>With respect to the discussion of using VSAM and getting good
>performance:
>Do most shops which use VSAM heavily write Assembler VSAM I/O modules?
>Should they?  Can this sort of routine be generalized, or must it be
>specific?  Do you know of any sources for more detailed information on
>this subject?

Since my experience is at the assembler level, I'll pass on the
question of what most shops do and whether it is prudent.  What I
would suggest is that you take a look at IBM's Batch LSR product, one
that allows many HLL applications to address the performance concern
that you stated.  Batch LSR allows you to substitute the use of LSR
pools and the deferred-write processing that they enable in many
applications whose frequency of direct updates makes the VSAM defaults
too expensive.  By default VSAM, as you observed, immediately follows
through on direct update requests by writing all updated control
intervals to DASD.  With deferred-write processing in effect, the
updated control intervals are buffered and may be updated numerous
times in virtual storage before they are actually written.  This may
make a dramatic difference in performance without an excessive penalty
in reliability if your system is fairly stable and the length of time
between writing and closing the VSAM data sets is limited.

My instinct would be to use Batch LSR rather than writing my own code
to exploit VSAM deferred-write processing unless it was clear that
update jobs needed to run a long enough time to place critical data
bases in jeopardy.  In that case, look at a set of simple VSAM I/O
routines that would periodically use the WRTBFR macro to force updated
buffers to be written.
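
A rough sketch of such a routine follows.  The names, the counter, and the
threshold of 100 updates are invented, register equates and the usual
save-area and error handling are omitted, and the TYPE= operand should be
checked against the macro reference (TYPE=DS is meant to write this data
set's modified buffers):

* Rough sketch only: names and the flush threshold are invented,
* register equates and error handling are omitted.  Verify TYPE=DS
* against the WRTBFR description before relying on it.
         PUT   RPL=UPDRPL           update under deferred write
         L     R3,UPDCOUNT          updates since the last flush
         LA    R3,1(,R3)
         ST    R3,UPDCOUNT
         C     R3,FLUSHLIM          reached the checkpoint yet?
         BL    NOFLUSH              no, keep deferring
         WRTBFR RPL=UPDRPL,TYPE=DS  yes, harden the deferred buffers
         XC    UPDCOUNT,UPDCOUNT    and start counting again
NOFLUSH  DS    0H
*        (rest of the update loop)
UPDCOUNT DC    F'0'                 updates since last WRTBFR
FLUSHLIM DC    F'100'               flush every 100 updates

The threshold is the knob: the larger it is, the fewer EXCPs, but the more
updates you would have to recover if the job failed before the next flush.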

Bob Wright - OS/390 Service Aids



Sat, 09 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

Quote:

>No, the real issue is not COBOL but the places that COBOL is used. IE,
>large mainframe shops. These places are designed to be as frustrating as
>possible to anybody that wants to get the work done.

Uh Terry, where did you get your information?  Rumor?

Yes I've been in some frustrating shops, both mainframe and c/s.  But
I've also been in some very well run shops, both mainframe and c/s.
It is not COBOL and mainframe that add up to frustration, it's bad
management.  Always has been, always will be.  Just wait until C++ and
c/s age another 10 years.

Quote:
>Now I write Windows software, mainly in C++. I work mostly from home and
>try to keep  my customers at least 1000 miles away which cuts down but
>doesn't completely eliminate the 4-hour paper-clip control meetings.
>Right now I'm wearing my bath robe. I only wear my suit when somebody
>dies. I'm *much* happier.

Uh Terry, I write mostly mainframe software, mainly in COBOL.  I work
mostly from home and work on computers in three different states.  I
don't wear bath robes, but I have been known to forget to shave and
put shoes on.  And I'm very happy with what I'm doing.

Terry, it's not the language or the platform that sets the working
environment.

Tim Oxler

TEO Computer Technologies Inc.

http://www.i1.net/~troxler
http://users.aol.com/TEOcorp



Sat, 09 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

 > >No, the real issue is not COBOL but the places that COBOL is used. IE,
 > >large mainframe shops. These places are designed to be as frustrating as
 > >possible to anybody that wants to get the work done.
 >
 > Uh Terry, where did you get your information?  Rumor?
 >

No. I got it from 15 years of doing it. My first mainframe job was in 1977,
my last was in 1992. I bet I could still work a card punch if there are
any left <G>.

 > Yes I've been in some frustrating shops, both mainframe and c/s.  But
 > I've also been in some very well run shops, both mainframe and c/s.
 > It is not COBOL and mainframe that add up to frustration, it's bad
 > management.  Always has been, always will be.  Just wait until C++ and
 > c/s age another 10 years.
 >

I suspect that you are right - I can see it happening already. However,
in another 10 years I'll almost certainly be doing something different.

 > >Now I write Windows software, mainly in C++. I work mostly from home and
 > >try to keep my customers at least 1000 miles away which cuts down but
 > >doesn't completely eliminate the 4-hour paper-clip control meetings.
 > >Right now I'm wearing my bath robe. I only wear my suit when somebody
 > >dies. I'm *much* happier.
 >
 > Uh Terry, I write mostly mainframe software, mainly in COBOL.  I work
 > mostly from home and work on computers in three different states.  I
 > don't wear bath robes, but I have been known to forget to shave and
 > put shoes on.  And I'm very happy with what I'm doing.
 >  

I haven't owned a razor since I threw mine in the garbage can on Dec
22nd 1972. The reason I know the exact date is because that was my 17th
birthday and I cut myself shaving!

 > Terry, it's not the language or the platform that sets the working
 > environment.
 >  

Seriously though, you are right. I have worked in some good mainframe
shops and some lousy Windows shops but the overwhelming majority have
been the other way around. It would be a dull world if we all liked the
same things but I can only speak from my own experience and perspective.
That experience tells me that I would have to be in serious difficulties
before I ever took another mainframe job. Also, in context, we were
talking about Y2K here and any site that has left it this late is not
likely to be the sort of forward-thinking environment that I prefer to
work in.

 > Tim Oxler
 >
 > TEO Computer Technologies Inc.

 > http://www.i1.net/~troxler
 > http://users.aol.com/TEOcorp

Tim,

I hear what you are saying but the original question was something like
"Why won't ex-mainframers come back at *any* price?" I am an
ex-mainframer and was explaining why *I* won't come back at any price.
Others may have different reasons. At the end of the day we each have to
call it the way we see it.

Terry Richards
Terry Richards Software.



Sat, 09 Oct 1999 03:00:00 GMT  
 