Y2K Weather Report # 8 - A Y2K spring. 
 Y2K Weather Report # 8 - A Y2K spring.

Quote:


> >2) I would work for a spammer, given the right conditions.  What is your
> >correlation between spammers and lack of good opportunities for us?  I
> >can see none.  Regardless of their posting habits, if they had a
> >position that fits your needs better/equal to any other, why would you
> >not take it?  Seems rather unwise to take your stance/view on this to
> >me.

> How about a very slight change:

> "I would work for a con man, given the right conditions.  What is your
> correlation between con man and lack of good opportunities for us?  I
> can see none.  Regardless of their predatory habits, if they had a
> position that fits your needs better/equal to any other, why would you
> not take it?  Seems rather unwise to take your stance/view on this to
> me."

> Moral, spelled out for those who might be analogy-impaired:
>   Some of us do have a sufficient sense of morality and ethics such
> that there are some low-lifes we wouldn't work for regardless of
> the possible rewards.

> Ron (user RT can be emailed at microfocus.com)

How does a spam posting possibly extend to mean that a person is
without a sense of morality or ethics? Excuse me for saying so, but that is a
REALLY REALLY big s-t-r-e-t-c-h. My 2 cents....




Sat, 09 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

Quote:

> 'non-mainframe/Cobol positions' there is a reason for that.

> If companies fix their legacy products they will get nothing in return
> and all the expenses will be lost. Many IT shops have been sold on the
> idea of moving to high technology in order to get something back, so
> they are implementing client/server to get rid of the mainframe. It's
> not a bad idea, but is this the right time?

> This move will take longer than they think and some will end up with
> a system that doesn't work and a mainframe that is useless. Isn't the
> $4 billion IRS failure a lesson?

It should be.  It should be.

---
Frank Ney  WV/EMT-B VA/EMT-A  N4ZHG  LPWV  NRA(L) GOA CCRKBA JPFO
Sponsor, BATF Abuse page   http://www.*-*-*.com/ ~croaker/batfabus.html
West Virginia Coordinator, Libertarian Second Amendment Caucus
NOTICE: Flaming email received will be posted to the appropriate newsgroups
- --
"...I am opposed to all attempts to license or restrict the arming of
individuals...I consider such laws a violation of civil liberty,
subversive of democratic political institutions, and self-defeating
in their purpose."
        - Robert Heinlein, in a 1949 letter concerning "Red Planet"



Sat, 09 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

Quote:

> Finally, the question:
> Do most shops which use VSAM heavily write Assembler VSAM I/O modules?
> Should they?  Can this sort of routine be generalized, or must it be
> specific?  Do you know of any sources for more detailed information on
> this subject?

> TIA,
> Colin Campbell


In my shop we have a general-purpose assembler subprogram for accessing VSAM
files (KSDS, ESDS, or RRDS).  It has an internal table to manage
control blocks for something like 24 different VSAM files concurrently.

As far as I know, it was developed a long time ago for a DOS/VSE COBOL
compiler which only knew ISAM.  No one wants to do the work to convert it to
31-bit addressability.  Basically, you need to pass it a control record with
all the values needed to populate the appropriate MVS VSAM macros, and an
address of a record buffer.  I still use it when I need to POINT into the
middle of an ESDS file using a saved RBA, otherwise I prefer to use native
COBOL.

If your primary concern on VSAM performance is batch wall-clock run time, I
can tell you that we normally unload our VSAM files to QSAM for major
maintenance and then reload them for CICS.  Even for detail or ad hoc
reporting, we find it to be much quicker to process the data as a QSAM file,
provided your application is adaptable to sequential processing.  
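The unload/maintain/reload cycle described above is usually done with IDCAMS REPRO.  A minimal sketch, assuming made-up dataset names and attributes (your LRECL, space, and naming conventions will differ):

```jcl
//* Hypothetical sketch: unload the KSDS to a QSAM flat file, run the
//* batch maintenance against the flat file, then reload it for CICS.
//UNLOAD   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//VSAMIN   DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR
//FLATOUT  DD DSN=WORK.CUSTOMER.FLAT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,10)),
//            DCB=(RECFM=VB,LRECL=404,BLKSIZE=27998)
//SYSIN    DD *
  REPRO INFILE(VSAMIN) OUTFILE(FLATOUT)
/*
//* ... batch maintenance steps run against WORK.CUSTOMER.FLAT ...
//RELOAD   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//FLATIN   DD DSN=WORK.CUSTOMER.FLAT,DISP=SHR
//VSAMOUT  DD DSN=PROD.CUSTOMER.KSDS,DISP=OLD
//SYSIN    DD *
  REPRO INFILE(FLATIN) OUTFILE(VSAMOUT) REPLACE
/*
```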

If you want to write something like what I described, it's a fairly large
assembler program, at least 2000 lines.  You would need a good reference
manual on VSAM macros.  

I don't know if that helps, but good luck!

Arnold Trembley
Software Engineer I (just a job title, still a programmer)
MasterCard International
St. Louis, Missouri



Sun, 10 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.


Quote:


> > >No, the real issue is not COBOL but the places that COBOL is used.
> > >IE,
> > >large mainframe shops. These places are designed to be as
> > >frustrating as
> > >possible to anybody that wants to get the work done.

> > Uh Terry, where did you get your information?  Rumor?

>No. I got from 15 years of doing it. My first mainframe job was in 1977,
>my last was in 1992.
> > It is not COBOL and mainframe that add up to frustration, it's bad
> > management.  Always has been, always will be.  Just wait until C++
> > and c/s ages another 10 years.
> > >try to keep  my customers at least 1000 miles away which cuts down
> > >but doesn't completely eliminate the 4-hour paper-clip
> > >control meetings.

>I hear what you are saying but the original question was something like
>"Why won't ex-mainframers come back at *any* price?" I am an
>ex-mainframer and was explaining why *I* won't come back at any price.

There it is, solid reasons why some big-iron crankers are not
available to work on Y2K COBOL at any price.  My impression from
talking to my pals and the local Wizards around Washington DC is
that only 20% of the real pros will work on Y2K.  A lot of
newbies, kids, and baby-coders are willing, but for every one or two
of them, I'd want a battle-hardened vet to ride herd and jerk their
leash.

Reaching into a complex, mission-critical production system is not
like giving a bunch of recruits an M-16A1 and sending them into
combat led by one sergeant with a loud voice.

It's like putting them at the controls of an F-15 Eagle, alarms
going off, a maze of switches, indicators, displays, all flashing
and beeping at once.  Except there are more ways to screw up a
CICS transaction than an F-15 in aerial combat.  There's more power
at your fingertips with a 10-engine 9021.

Odds are that a newbie at the controls of an F-15 will fall or be
knocked out of the sky.  At the controls of a mission-critical
application, the corporation goes out of business and thousands
lose.

Cory Hamasaki       http://www.*-*-*.com/
HHResearch Co.     OS/2 Webstore & Newsletter
REDWOOD        



Sun, 10 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

Quote:

>With respect to the discussion of using VSAM and getting good
>performance:

>I would like to ask a general question, since some parts of our
>application inventory use VSAM, and we may have to touch them for Y2K
>repair purposes, or even take over their long term maintenance.

>I have little experience with VSAM, except that I tried it out from
>COBOL a few years ago.  We were trying to figure out whether to use an
>IMS D/B, DB2, VSAM, or Sequential Data Sets for storing a LARGE amount
>of financial data.  We eventually settled on QSAM Variable length
>records, and time has appeared to show this was the correct (low-cost)
>choice.

>My "study" of VSAM involved observing EXCP counts (which was how we
>billed) when processing file actions, especially updates and insertions,
>since the application would have :
>1. inception to date records which would get updated monthly,
>2. month to date records which would get created at the start of the
>month, then updated weekly (last month's records would be dropped at the
>start of a new month), and
>3. weekly records, which would be inserted each week, while the previous
>week's records were deleted.

>This study did not take into account how the data might be used outside
>of the "master file update" process.  However, over time, it appears
>that
>1. the full set of data is used for reporting purposes, and
>2. selected subsets of the data are used for ad-hoc purposes.

>Using COBOL I/O statements resulted in huge EXCP counts, indicating that
>blocks (CI's?) were being written and reread for every insert.  My
>reading told me I could write in Assembler and avoid such behavior, and
>reduce I/O (EXCP counts) by as much as 90%+.  (I did not code any
>routines to prove this, however.)

>Finally, the question:
>Do most shops which use VSAM heavily write Assembler VSAM I/O modules?

In the shops I've been in, not most.

Quote:
>Should they?

That may depend on such things as:
1. Size and speed of the computer.
2. Online, or just batch?
3. If online, whether a high number of users access the same modules
at the same time.

I have run into Assembler programs written for strategic locations in
CICS to speed up execution.

Quote:
> Can this sort of routine be generalized, or must it be
>specific?  Do you know of any sources for more detailed information on
>this subject?

Colin, what you might need is just to tune your VSAM files.  Tuning
VSAM is like tuning an old sports car.  The first things that must be
known are:

1. Record length
2. Is the file KSDS, ESDS, RRDS
3. Is this for online, batch or both
4. type of disk pack

Based on these items, there is a formula to find the optimum CI
size.  I have the formula, but I'm currently unable to find it.
However, most calculations result in an optimum value of 4096, and
many times 8192 will tie 4096 as the optimum value.  When this occurs,
and the file is only for batch, use a CI size of 8192; otherwise use
4096.

One shop I've worked at ran the online at 4096, and reorg'd the file
to 8192 before batch processing.

For online, use the IMBED option.

If disk space is not a problem, use the REPLICATE option.

Next is free space.  Does your file have ample free space?  This is
reserved space to accommodate record insertions.  Without it, inserts
will cause CI/CA splits, which will cause performance to suffer.

Run a LISTCAT after execution.  A rapid increase in the number of CI
splits indicates that the amount of free space is insufficient.
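The LISTCAT check can be run as a one-step IDCAMS job; the dataset name below is hypothetical.  The split counts appear as CISPLITS and CASPLITS in the STATISTICS section of the output:

```jcl
//* Sketch: list full catalog info (including CI/CA split statistics)
//* for one VSAM cluster after the batch run.
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(PROD.CUSTOMER.KSDS) ALL
/*
```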

Also, redefine/reorg your files on some kind of regular basis to keep
CI/CA splits low.

Next is your JCL.  All of the DDs for your VSAM files should have
AMP= parameters specifying buffer space for data and index.
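A minimal sketch of such an AMP= specification, with a hypothetical dataset name and buffer counts (BUFND buffers data CIs, which mainly helps sequential access; BUFNI buffers index CIs, which mainly helps keyed access):

```jcl
//* Sketch: extra data and index buffers for a VSAM file via AMP=.
//* Tune the counts to the job's access pattern and available storage.
//CUSTMAST DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR,
//            AMP=('BUFND=20','BUFNI=5')
```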

Some of this info might be dated.  I used to be pretty good at this 5-6
years ago, but haven't done it recently; MVS has gotten better in the
meantime.

I used to rely on the IBM Washington Systems Center Technical
Bulletins.  I'm not sure if they are still in circulation.

A fairly decent book is "Practical VSAM for Today's Programmers" by
Janossy and Guzik.  That book shouldn't be difficult to find.

I hope that helps!

Tim Oxler
TEO Computer Technologies Inc.

http://www.i1.net/~troxler
http://users.aol.com/TEOcorp



Sun, 10 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

        If your system is MVS/ESA or higher, you might take a look at Batch LSR.
 It can dramatically decrease wall-clock time for certain kinds of processing
with just a JCL change.  It requires a subsystem entry, so you'll have to get
tech support involved.  The IBM manual is (at MVS 4.3) MVS/ESA Batch Local Shared
Resources Subsystem - GC28-1672-01
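The JCL-only change usually looks something like the sketch below, assuming the installation has defined a Batch LSR subsystem named BLSR (the subsystem name and buffer counts vary by site).  The original DD is renamed, and a SUBSYS DD under the DDNAME the program OPENs points back at it:

```jcl
//* Sketch: route a VSAM file through Batch LSR without changing the
//* program. CUSTMAST is the DDNAME the program opens; CUSTOLD holds
//* the real dataset allocation.
//CUSTOLD  DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR
//CUSTMAST DD SUBSYS=(BLSR,'DDNAME=CUSTOLD','BUFND=50','BUFNI=10')
```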


Quote:

> [snip - Colin Campbell's VSAM question, quoted in full earlier in the thread]




Sun, 10 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

In the testing that I was talking about, the only variable was the type of
access.  I also was involved in tuning exercises with batch jobs using
programs from this vendor that did all VSAM I-O in called COBOL
sub-routines with ACCESS DYNAMIC.  I had cases of dramatic improvements by
using BLSR (a reduction from 1 I-O for every 3 reads, where there were over
700,000 reads, to 12 I-O's in one extreme instance) and in some cases by
removing BLSR and doing appropriate buffering.  Obviously, working on
application logic in some of these cases could get even more savings, but the
additional time saved wasn't worth the cost and risk.

Also, I just had a case where I was doing a quick and dirty fix-up of a test
file in COBOL.  Since I was reading every record, I had ACCESS SEQUENTIAL.
The fix-up consisted of correcting record lengths on variable-length records
(record length didn't agree with transaction code) and deleting records
without headers.  Since the record-length check was done before the
missing-header check, a REWRITE could be followed by a DELETE.  This bombs
with ACCESS SEQUENTIAL but works fine with ACCESS DYNAMIC.  Since this was a
quick-and-done type of run, I changed to ACCESS DYNAMIC.  If this were to be
an ongoing production run, I probably would have taken the time to do it so
that a record was either rewritten or deleted, but not both.

Quote:

> In a message dated 04-14-97, Clark Morris said to All about Re: Brief Y2k  
> Weather Re  

> CM>ACCESS DYNAMIC is used in a lot of I-O sub-programs that  
> CM>are vendor related.  

> I've seen such vendor-supplied code. One of my former employers made a lot  
> of its money from making such products run 90% faster.  

> CM>This way one routine can be called to handle all of the functions.  
> CM>Current   IBM MVS implementation of ACCESS DYNAMIC gives tolerably good  
> CM>performance   based on various tests that I ran.  Neither the all random  
> CM>nor the all   sequential access was that degraded over the use of the  
> CM>specific access   although a 10-20 percent CPU savings was obtainable in  
> CM>(only) some   situations.  

> 10%-20% savings compared to what? Did you use BLSR or some other buffering  
> tool?  

> > Thus, we have: ACCESS is always DYNAMIC;  
> >                I/O always uses sequential statements;  
> >                I/O is always functionally random.  

> At the risk of being on-topic, a now unknown phenomenon in this newsgroup,  
> we should consider what the above does at the assembler level.  

> The ACCESS MODE DYNAMIC tells the VSAM open routine (inside SVC 19) to load  
> support modules for both direct and sequential RPL (Request Parameter List)  
> strings.  

> The START statement performs a _sequential_ POINT macro, which causes a
> full-string read. The READ NEXT then does a sequential GET macro to pull the
> desired record from the string's buffer pool. Thus, a full buffer pool of
> data Control Intervals (and possibly several index CI's) was read by the
> POINT macro. [Remember that VSAM allocates buffers at the RPL level, not the
> ACB level.]

> If the programmer had coded a READ KEY statement instead, a _direct_ GET
> macro would have read a single data CI (possibly several index CI's, though)
> and pulled the record from there, all in a single access method operation.
> This will usually be much faster than reading a full string of data CI
> buffers.

> The use of ACCESS MODE RANDOM would prevent the redundant loading of the  
> sequential support modules. [But then, RAM is cheaper than thought these  
> days.]  

> Bear in mind, that I am writing about the all too common occurrence of this  
> being the only I/O performed on the file.  

> We could also address the issue of NSR and LSR buffer management, especially
> since this is an assembler newsgroup. Is anybody interested?

> Regards  

> Dave  
> <Team PL/I>  
> ___  
>  * MR/2 2.25 #353 * Don't let school interfere with your education.  
> --  
> Please remove the '$' in the from line before reply via email.  
> Anti-UCE filter in operation.  

Clark F. Morris, Jr.  
CFM Technical Programming Services  
Bridgetown, Nova Scotia, Canada  




Sun, 10 Oct 1999 03:00:00 GMT  
 Y2K Weather Report # 8 - A Y2K spring.

Quote:
> There it is, solid reasons why some big-iron crankers are not
> available to work on Y2K COBOL at any price.  My impression from
> talking to my pals and the local Wizards around Washington DC, is
> that only 20% of the real pro's will work on Y2K.  A lot of
> nubies, kids, and baby-coders are willing but for every one or two
> of them, I'd want a battle hardened vet to ride herd, jerk their
> leash.

Cory,

Because my current company is doing nothing about Y2K, I am checking out
the job market.  One of my criteria for considering a position is that it
is NOT Y2K work.  I want to work for a company that already has this fixed.
 So far - based on this single criterion - I have found NO company that
qualifies.  It's BAD BAD BAD out there.  I am afraid that when these companies
can't find enough programmers to work on Y2K, they will start offering
"perm" positions, and then rightsize after the fix is complete.  That is
what I want to avoid in any new employment.



Sun, 10 Oct 1999 03:00:00 GMT  
 