Data corruption from failed rlock() 
 Data corruption from failed rlock()

A recent poster passed a comment (almost as an aside) to the effect that a
failed record lock could (or would) cause data corruption even though the
record lock was retried until neterror() returned .F.
Before moving on to update the next record, a COMMIT or a SKIP 0 command
must be issued to prevent this.
Would someone like to confirm this or at least open up a discussion?
TIA
Denis


Sun, 19 May 2002 03:00:00 GMT  
 Data corruption from failed rlock()
Hello.
Quote:

> A recent poster passed a comment (almost as an aside) to the effect that a
> failed record lock could (or would) cause data corruptions even though the
> record lock was retried until neterror() returned .F.
> Before moving on to update the next record, a commit or a skip 0 command
> must be issued to prevent this.
> Would someone like to confirm this or at least open up a discussion?

On the Btrieve RDD it is true. If RLock() fails, a few bytes at the very
beginning of the record become garbage: NOT in the file, but in the Clipper
workarea buffer.
SKIP 0 cures this in 100% of cases.
--



Sun, 19 May 2002 03:00:00 GMT  
 Data corruption from failed rlock()

Quote:

>Hello.

>> A recent poster passed a comment (almost as an aside) to the effect that a
>> failed record lock could (or would) cause data corruptions even though the
>> record lock was retried until neterror() returned .F.
>> Before moving on to update the next record, a commit or a skip 0 command
>> must be issued to prevent this.
>> Would someone like to confirm this or at least open up a discussion?
>On Btrieve RDD it is true. If RLock() failed, a few bytes at the very
>beginning of the record become trashy - NOT in file, but in Clipper
>workarea buffer.
>Skip 0 cures this in 100% of cases.
>--

Vladimir is right (I posted the original comment).

It's fairly simple:

When Clipper accesses ("reads") a record, the contents are placed in a
buffer in memory. This gives rise to the following scenario:

1. User A reads a record and locks it.

2. User B reads the same record.

3. User B attempts to lock the record but fails.

4. User A updates the data and writes the (changed) record.

5. User B attempts to lock the record and succeeds, but does not issue a
SKIP 0.

     The data User B is working on is the data existing before User A
updated it.

6. User B updates the data and writes the record, effectively overwriting
changes made by User A.

Hence the need to issue SKIP 0 to refresh the buffer after a lock failure.
Nantucket told me this many moons ago (just how many will be clear from that
statement) but of course it's not in the manual.

In a complex networked system involving multiple tables, the use of COMMIT
statements after a series of REPLACE statements is also advised, as this
forces a disk write, thus ensuring other users process the latest data.
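
To make that pattern concrete, here is a minimal sketch in Clipper. The
function name, the field name BALANCE and the retry limit are illustrative
only, not from any of the posts above:

```clipper
// Sketch of the lock / refresh / write pattern described above.
// UpdateRec(), the BALANCE field and the retry limit are illustrative.
FUNCTION UpdateRec( nDelta )
   LOCAL nTries := 0

   DO WHILE .NOT. RLock()          // retry until the lock is obtained
      nTries++
      IF nTries > 100
         RETURN .F.                // give up rather than loop forever
      ENDIF
   ENDDO

   SKIP 0                          // refresh the buffer: the record may
                                   // have been rewritten by another user
                                   // since we read it
   REPLACE balance WITH balance + nDelta
   COMMIT                          // force the write to disk
   UNLOCK
RETURN .T.
```

The SKIP 0 after the lock succeeds is the crucial step: without it the
REPLACE works on the stale buffered copy and silently undoes the other
user's update.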

The above is my understanding; I stand to be corrected if it is not 100%
correct. However, it has worked successfully for me over a period of 12
years. I have mostly used S'87, but we have one system in 5.2 and I believe
that exactly the same applies.

Colin



Mon, 20 May 2002 03:00:00 GMT  
 Data corruption from failed rlock()
Surely Colin's concept is flawed:
The correct sequence would be to lock the record BEFORE reading if the
intention is to update.
This would ensure that only the latest data is subject to a
read/modify/write sequence.
If this rule is followed, does the potential corruption still exist without
a skip 0?
Denis

Quote:

> [Colin's explanation quoted in full; snipped]



Wed, 22 May 2002 03:00:00 GMT  
 Data corruption from failed rlock()

Quote:

>Surely Collin's concept is flawed:-
>The correct sequence would be to lock the record BEFORE reading if the
>intention is to update.
>This would ensure that only the latest data is subject to a
>read/modify/write sequence.
>If this rule is followed, does the potential corruption still exist without
>a skip 0?
>Denis


>>1. User A reads a record and locks it.

>>2. User B reads the same record.

Perhaps I expressed this badly.

"Reading" a record means moving the record pointer to it by means of SKIP,
SEEK, etc.

Thus the contents of the record are in the buffer before you have a chance
to lock it. Hence the contents of the record on disk may change as
described.

I should have added: it is easy to write a small program to test this. You
could use Clipper's NETWORK.EXE to help test this. When/if I have time I
will do this later this week.
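
In the meantime, a test along those lines might look something like this. It
is a hypothetical sketch: it assumes a shared TEST.DBF with a character
field NAME, with two copies of the program run from different workstations
(one holding the lock and REPLACEing NAME while the other waits):

```clipper
// Hypothetical test: run two copies against a shared TEST.DBF.
// Copy 1 locks the record and REPLACEs NAME while holding the lock;
// copy 2 then shows stale buffered data until SKIP 0 is issued.
USE test SHARED
GO TOP
? "Buffered value:", name          // the read has buffered the record

DO WHILE .NOT. RLock()             // fails while the other copy
ENDDO                              // holds the lock

? "Locked, before SKIP 0:", name   // may still show the stale value
SKIP 0
? "Locked, after SKIP 0:", name    // refreshed from disk
UNLOCK
USE
```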

Colin



Fri, 24 May 2002 03:00:00 GMT  
 Data corruption from failed rlock()
Just two questions:
1) Normally, a dbSkip() unlocks the previous record. Does a dbSkip(0)
unlock the record?
2) Which would be the correct sequence? Repeat the RLock() until getting it
and then dbSkip(0), or dbSkip(0) every time you fail at RLock()? Could you
put an example of your standard procedure to lock a record?
Thanks.



Quote:

> [previous posts quoted in full; snipped]



Fri, 24 May 2002 03:00:00 GMT  
 Data corruption from failed rlock()

Quote:

>Just two questions:
>1) Normally, a dbSkip() unlocks the previous record. Does a dbSkip(0)
>unlock the record?

dbSkip() does not release record locks.
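
That is easy to check. A sketch, with the caveat that dbRLockList(), which
returns an array of the locked record numbers, exists in Clipper 5.3 and
Harbour; earlier versions would need another way to observe the lock:

```clipper
// Sketch: dbSkip() moves the record pointer but leaves the record
// lock in place. dbRLockList() is a 5.3/Harbour function.
USE test SHARED
GO 1
IF RLock()
   dbSkip( 1 )                     // move off the locked record
   ? dbRLockList()[ 1 ]            // still reports record 1 as locked
   dbUnlock()
ENDIF
USE
```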

--
Bambang P
http://members.xoom.com/bpranoto
http://bpranoto.tripod.com
* Hot Clipper utilities:
   MAKFORCE (152K Zip):
    No more C3048 error with this Make Engine
   CLEAROBJ (18K Zip):
    Hunts and kills Clipper OBJ compiled with /B
* Post to newsgroup via e-mail ? Read Newsgroup tips and
  tricks



Sat, 25 May 2002 03:00:00 GMT  
 Data corruption from failed rlock()
I still stick to my guns on this, i.e. you must issue a SKIP 0 after a
failed RLock().

Denis Murdoch has asked me to produce a test program to demonstrate this. I
will do this later in the week.

Colin



Sat, 25 May 2002 03:00:00 GMT  
 