The horrible truth about the Verilog standard 
 The horrible truth about the Verilog standard

Hi - I'd like to share my latest horror story with you.

Back in 1990-1991, I used Verilog at Alcatel. There was only
one Verilog simulator (Verilog XL) with no competitors in
sight. Nonblocking assignments were not part of the
language. We therefore designed all RTL logic (combinatorial
and sequential) using blocking assignments - not because we
liked them so much, but because there was no choice.

Between RTL modules running on the same clock, there was a
sound, deterministic concurrency model. This was supported
perfectly by Synopsys Verilog/Design Compiler - making the
magic of RTL synthesis work.

Since 1992, I have been using VHDL for RTL design, but now
in 2000 I'm briefly back to Verilog. I have learned to be
conservative, and so I decided to use the tried and tested
methodology I knew from 10 years ago. I knew that Verilog's
zero-delay mechanisms were quite weak compared to VHDL and
didn't want to take any risk. In that way, I could be sure
nothing bad was going to happen.

Or so I thought.

Through discussions in another thread (see blocking /
nonblocking), a horrible truth gradually became apparent
to me: Verilog's zero-delay handling had actually been
*relaxed* instead of strengthened by the Verilog
standardization process. The old model of things (using
blocking assignments for all RTL logic) is not guaranteed to
work! At least that's what many people who seem to know the
standard well are asserting - I'm still finding it hard to
believe.

By doing this, the Verilog standard has violated a basic
principle in language design: backwards compatibility. Good
heaven, what can be more crucial in an HDL than its
concurrency model?

I am astonished that the Verilog standard has been able to
get away with this. My guess is that many people are not
aware of it - especially those that still use "good"
simulators (see further) and don't read standards all the
time.

What can be done? My position is that bad standards should be
ignored until they are fixed. I believe that Verilog designers
should switch back to the "gold" standard, that is, Verilog XL
compatibility.

I have been assured that NC Verilog is compatible with
Verilog XL. It uses a mixed language simulation kernel,
so scheduling semantics match VHDL semantics. We are
using Modelsim Verilog, and it seems to be fine also.
(I'd be grateful if someone could confirm this.)

It would be really useful to me and I hope to others to find
out which simulators are compatible with the "gold" standard,
and which are not.

Regards, Jan

--
Jan Decaluwe           Easics              
Design Manager         System-on-Chip design services  
+32-16-395 600         Interleuvenlaan 86, B-3001 Leuven, Belgium



Fri, 28 Feb 2003 19:49:37 GMT  
 The horrible truth about the Verilog standard

Quote:

> Back in 1990-1991, I used Verilog at Alcatel. [...]
> Since 1992, I have been using VHDL for RTL design, but now
> in 2000 I'm briefly back to Verilog.

Geee! Jan, you were almost 10 years out of Verilog business
and are complaining that things have changed since then?
C'mon, this is EDA!

Quote:
> By doing this, the Verilog standard has violated a basic
> principle in language design: backwards compatibility.

No, the standard has defined an overall accepted semantics,
and removed the proprietary quasi-standard semantics of
a simulator monopoly at the start of Verilog (XL from Cadence).

Quote:
> What can be done? My position is that bad standards should be
> ignored until they are fixed. I believe that Verilog designers
> should switch back to the "gold" standard, that is, Verilog XL
> compatibility.

Serious? "Naa, we don't want a public standard, we want a
single company to control the language." You're kidding, right?

Is it really that difficult to drop old habits and adopt the
new style? I don't think so, since most have ;-)

Lars
--
Address:  University of Mannheim; B6, 26; 68159 Mannheim, Germany
Tel:      +(49) 621 181-2716, Fax: -2713

Homepage: http://mufasa.informatik.uni-mannheim.de/lsra/persons/lars/



Mon, 03 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard

Quote:
> Through discussions in another thread (see blocking /
> nonblocking), a horrible truth gradually became apparent
> to me: Verilog's zero-delay handling had actually been
> *relaxed* instead of strengthened by the Verilog
> standardization process. The old model of things (using
> blocking assignments for all RTL logic) is not guaranteed to
> work!

Perhaps you can say specifically what you think changed? A non-blocking
assignment within a thread is pretty clear-cut and I would be curious to
know how they confound old designs.
--
Steve Williams                "The woods are lovely, dark and deep.


http://www.picturel.com       And lines to code before I sleep."


Thu, 06 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard

Quote:


> > Back in 1990-1991, I used Verilog at Alcatel. [...]
> > Since 1992, I have been using VHDL for RTL design, but now
> > in 2000 I'm briefly back to Verilog.

> Geee! Jan, you were almost 10 years out of Verilog business
> and are complaining that things have changed since then?
> C'mon, this is EDA!

I believe I'm sufficiently long in this business to tell the
difference between changes (very common) and progress (rare).
In this case, I'm seeing decline. Of course I complain.

Quote:
> > By doing this, the Verilog standard has violated a basic
> > principle in language design: backwards compatibility.

> No, the standard has defined an overall accepted semantics,
> and removed the proprietary quasi-standard semantics of
> a simulator monopoly at the start of Verilog (XL from Cadence).

> > What can be done? My position is that bad standards should be
> > ignored until they are fixed. I believe that Verilog designers
> > should switch back to the "gold" standard, that is, Verilog XL
> > compatibility.

> Serious? "Naa, we don't want a public standard, we want a
> single company to control the language." You're kidding, right?

A flawed standard is worse than anything else. Backwards
compatibility is key - otherwise no progress is possible.

Quote:
> Is it really that difficult to drop old habits

Is it difficult to learn to represent years by 4 digits instead
of 2? Of course not. That's not the issue.

The issue is legacy code. Companies with major investments
in Verilog code have reasons to be very worried.
Their existing designs might stop working (in simulation)
if they switch to a new simulator that takes the standard
literally.

Quote:
> and adopt the
> new style? I don't think so, since most have ;-)

What is the new style? If you mean Cliff Cummings' guidelines,
you can't use blocking assignments for sequential logic
anymore. That's not progress.

Regards, Jan

--
Jan Decaluwe           Easics              
Design Manager         System-on-Chip design services  
+32-16-395 600         Interleuvenlaan 86, B-3001 Leuven, Belgium



Fri, 07 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard

Quote:

> > Through discussions in another thread (see blocking /
> > nonblocking), a horrible truth gradually became apparent
> > to me: Verilog's zero-delay handling had actually been
> > *relaxed* instead of strengthened by the Verilog
> > standardization process. The old model of things (using
> > blocking assignments for all RTL logic) is not guaranteed to
> > work!

> Perhaps you can say specifically what you think changed? A non-blocking
> assignment within a thread is pretty clear-cut and I would be curious to
> know how they confound old designs.

The problem is not with non-blocking assignment, but with blocking
assignment - that used to be the only kind available.

Verilog has always had areas of nondeterministic behavior. However,
communication between modules was deterministic. In particular,
when modules running on the same clock were communicating through
ports driven by blocking assignments (without delay specification),
behavior was deterministic and race-free. In other words,
there was no difference between blocking and nonblocking assignments
as far as inter-module communication is concerned.

The standard, however, doesn't require that blocking assignments
still work like that. A design (using blocking assignments) that
used to work fine can exhibit races and nondeterminism on a different
simulator that would still be compliant with the standard.

In other words, legacy code is not guaranteed to work as
before by the standard.
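
To make this concrete, here is a minimal sketch of the kind of race
I mean (my own illustration - the module and signal names are
invented, not from any real design):

```verilog
// Two clocked processes exchanging data with blocking assignments,
// collapsed into one module. The standard does not define the order
// in which the two always blocks execute within a time step, so 'b'
// may capture either the old or the new value of 'a' - a race.
module race_example (input clk, input d, output reg b);
    reg a;
    always @(posedge clk) a = d;   // blocking: 'a' updates immediately
    always @(posedge clk) b = a;   // may read 'a' before or after the update
endmodule
```

My claim is that when these two blocks lived in separate modules,
connected through ports, Verilog-XL evaluated them deterministically -
which is why this style appeared perfectly safe for ten years.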

Regards, Jan

--
Jan Decaluwe           Easics              
Design Manager         System-on-Chip design services  
+32-16-395 600         Interleuvenlaan 86, B-3001 Leuven, Belgium



Fri, 07 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard

Quote:


> > > Through discussions in another thread (see blocking /
> > > nonblocking), a horrible truth gradually became apparent
> > > to me: Verilog's zero-delay handling had actually been
> > > *relaxed* instead of strengthened by the Verilog
> > > standardization process. The old model of things (using
> > > blocking assignments for all RTL logic) is not guaranteed to
> > > work!

> > Perhaps you can say specifically what you think changed? ...

> The problem is not with non-blocking assignment, but with blocking
> assignment - that used to be the only kind available.

> Verilog has always had areas of nondeterministic behavior....

> In other words, legacy code is not guaranteed to work as
> before by the standard.

The problem is really that zero-delay things don't really exist
in the world of hardware, and that Verilog was originally designed
to verify hardware.

Simulators will always generate non-deterministic (simulator to
simulator) output for zero-delays as the LRM does not sufficiently
define scheduling algorithms. Multi-thread parallel processing
simulator kernels produce different results run to run with zero
delay events (and non zero delay events in the same slot) - one
reason why there aren't many such beasts.

The only way to fix the problem is to sacrifice performance in
scheduling and use a stricter algorithm.

[There are worse problems with the Verilog standard that I'd fix
first :-) ]

Kev.

--

http://www-galaxy.nsc.com/~dkc/



Fri, 07 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard

Quote:

> The problem is really that zero-delay things don't really exist
> in the world of hardware, and that Verilog was originally designed
> to verify hardware.

> Simulators will always generate non-deterministic (simulator to
> simulator) output for zero-delays as the LRM does not sufficiently
> define scheduling algorithms.  

actually I'd beg to disagree - 0-delay-like non-deterministic
things happen in the real world all the time - they're just not
very nice to deal with so we avoid them like the plague.

Of course what I'm talking about is what happens when we miss
setup/hold times in our (common) synchronous design methodologies
(the brave people doing async self-timed stuff of course
relish this :-).

I think there's a direct parallel between the verilog 0-time
race issues we've been discussing the past week or so and
the normal issues of synchronous design.

The thing that makes all this hard to understand without a
detailed knowledge of the underlying simulator implementation
is that in Verilog we're talking about setup and hold windows
that are 0 time units wide and coupled with clk->Q times of
0 units - you can see why we're having problems.

Now we use non-blocking transactions to create a
still-0-but-slightly-larger clk->Q time in order to meet
the hold times in our simulated designs. Of course that's
even more confusing to a beginner.
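
As a generic sketch of what I mean (my own example, not code from
this thread): nonblocking assignments postpone the register updates
to the end of the time step, so the second stage always samples the
old value - just as a real flip-flop with a nonzero clk->Q would.

```verilog
// With nonblocking assignments, all right-hand sides are sampled
// first and the registers are updated afterwards, at the end of
// the time step. This behaves as a proper two-stage shift register
// regardless of the order in which the always blocks execute.
module shift2 (input clk, input d, output reg q);
    reg stage1;
    always @(posedge clk) stage1 <= d;  // samples old 'd'
    always @(posedge clk) q <= stage1;  // samples old 'stage1' - no race
endmodule
```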

I'm convinced the thing that makes it most confusing is that
starting out it's relatively easy to get something that will
work most of the time without trying hard, or really understanding
what's going on underneath ..... then you get sloppy
about which assignment you use where and you get bitten.

Look at a simulation on a waves display and you can't tell
whether something gets sampled or not - on a real-world 'scope
you can tell if you're meeting setup.

I'm actually personally in favor of using unit-delays so I can
look at the waves and actually see what was sampled when.

However as I've said before there are a number of verilog
timing methodologies that work - choose one that works for you,
and get to understand its strengths and weaknesses.

        Paul Campbell



Fri, 07 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard

Quote:

> I'm sorry, I'm slow and I need help understanding this problem. I think
> you are saying that yes, this is and always was a race:

>     module foo;
>         reg a, b;


>     endmodule

> but if the two always blocks are placed in different modules it was not
> a race, because the assignment acted sorta like what we now call non-
> blocking assignments?

Yes, that's what I am saying.

--
Jan Decaluwe           Easics              
Design Manager         System-on-Chip design services  
+32-16-395 600         Interleuvenlaan 86, B-3001 Leuven, Belgium



Sat, 08 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard

Quote:

> Actually, the Verilog-XL Reference Manual never guaranteed that such a model will work.

...

Quote:
> It may have worked for Jan, but the Reference Manual never guaranteed it.

pulls out the black'n'blue manual "Gateway Verilog Version 1.2 March 1987" .... whoooph
blow all the dust off it .... cough cough .... an interesting manual ... almost
all of the pages are actually marked "1.1a" except for a few marked "1.0a".
It's printed in a tacky fixed-size font that makes it look like an old
type-written manuscript - by modern standards it's really hard to read
(my guess is it was NROFF'd :-)

This was the manual I learned Verilog from - and my only documentation
until the IEEE standard was released .... I used to know this book like
the back of my hand :-)

Looking through it there is no mention of behavioural event ordering - it doesn't
define a particular one or caution about coding practices for managing them
(at least to my cursory skim ... there's nothing about the event queues
descending into modules etc etc although that may well have been what they did)

There is however the interesting exception of chapter 18 which
describes the wonders of "accelerated events" - what the 'X' in XL is for ....
here it cautions:

        "the accelerated algorithms can process events in a different order from
        the normal algorithm ..."

        "because the order of simultaneous events can be processed differently
        it is possible for zero delay oscillations to occur ..."

these are of course ways of saying "we monkey with the event ordering for
performance reasons so sometimes things will work differently from
what you expect".

Anyway I'd take this as evidence that arbitrary event ordering (at least
in some circumstances) has been a part of the Verilog landscape from
almost the beginning.

The real problem, of course, is that this sort of thing is hard to explain.
I think it's easy to keep it out of a manual, or even to not realize
how important it is until people have used something like Verilog for a
long period of time.

Even then different people come away with very different views
of the world - I know from day one I've coded with no assumptions
of event ordering (but then I never went to a Gateway/Cadence
training course - just read the above manual and wrote code) - Jan on the
other hand picked up a different world view ... it doesn't necessarily make
either right or wrong at the time .... however I suspect that 'reality' has
shifted over time - simulations that depended on those undocumented
event ordering assumptions (I'm assuming here that they didn't
appear in an intervening document between my gateway manual and the LRM)
started to break as the assumptions broke down (and it
does rather explain why I didn't suffer any problems
switching to VCS while other people did .... maybe it's
because I didn't go to any of those training classes :-)

        Paul



Sat, 08 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard
Hasn't your doctor ever given you the advice "if it hurts when you do
that, don't do that"?
Using blocking assignments for anything except pure combinatorial logic
is bad practice and should be avoided. You should never rely on a side
effect that isn't in a standard; side effects change from release to
release of the same tool, and certainly can't be counted on to be
consistent from vendor to vendor. If the original Verilog-XL lacked
non-blocking assignments, that was a bug which has been corrected now.
Quote:


> > I'm sorry, I'm slow and I need help understanding this problem. I think
> > you are saying that yes, this is and always was a race:

> >     module foo;
> >         reg a, b;


> >     endmodule

> > but if the two always blocks are placed in different modules it was not
> > a race, because the assignment acted sorta like what we now call non-
> > blocking assignments?

> Yes, that's what I am saying.

> --
> Jan Decaluwe           Easics
> Design Manager         System-on-Chip design services
> +32-16-395 600         Interleuvenlaan 86, B-3001 Leuven, Belgium




Sat, 08 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard

Quote:

> The real problem of course is that this sort of thing is hard to explain,
> I think it's easy to keep it out of a manual, or even to not realize
> how important it is untill people have used something like Verilog for a
> long period of time.

> Even then different people come away with very different views
> of the world - I know from day one I've coded with no assumptions
> of event ordering (but then I never went to a Gateway/Cadence
> training course - just read the above manual and wrote code) - Jan on the
> other hand picked up a different world view ... it doesn't necesarily make
> either right or wrong at the time .... however I suspect that 'reality' has
> shifted over time - simulations that depended on those undocumented
> event ordering assumptions (I'm assuming here that they didn't
> appear in an intervening document between my gateway manual and the LRM)
> started to break as the assumptions broke down (and is
> does rather explain why I didn't suffer any problems
> switching to VCS while other people did .... maybe it's
> because I didn't go to any of those training classes :-)

As we are reconstructing mental processes here, I'd like to offer you
mine to explain where my model comes from.

I learned my first Verilog from Synopsys' Verilog (or HDL) Compiler
manual - all very pragmatic and with a strong emphasis on synthesis.
I believe that register inference was introduced somewhere
early 1990 (version 1.3a?) and it was explained using blocking
assignments. I don't know when nonblocking assignments started to be
supported by Synopsys synthesis but it must have been much later.

I quickly noticed that you could have race conditions in simulation
between always blocks running on the same clock in the same module.
This of course caused a little uneasiness - how could the examples
in the manuals then work? But I realized that regs were really like
shared variables and therefore not really suited for concurrent,
deterministic communication. Probably I would need to use a
"hardware" concept such as a port for this purpose.

And indeed, by encapsulating each clocked always block in a module,
the races were gone as I had expected. I have never seen it
otherwise. My mental model definitely is that an output port
inherently has the "right" hardware semantics - whether it is
driven from a reg with a blocking assignment or not.
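
The encapsulated style might look like this (a reconstruction of the
idea, with invented names - not code from an actual design):

```verilog
// One clocked always block per module; blocking assignments inside,
// communication only through ports. Under the Verilog-XL behavior I
// am relying on, the port connection acted deterministically, like a
// nonblocking hand-off between the two registers.
module stage (input clk, input d, output reg q);
    always @(posedge clk) q = d;   // blocking, but isolated in its module
endmodule

module pipeline (input clk, input d, output q);
    wire q1;
    stage s1 (.clk(clk), .d(d),  .q(q1));
    stage s2 (.clk(clk), .d(q1), .q(q));
endmodule
```

Note that per the IEEE standard the relative ordering here is still
unspecified - the determinism was a property of the XL scheduler,
which is exactly my point.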

Steven Sharp from Cadence has confirmed to me that Verilog-XL was
indeed consistent with this model, and also that NC-Verilog is.

I checked recent versions of the synthesis manuals from Synopsys
(1999.05) and Exemplar (1999.01). I invite everyone to have a
look. You will see that *still today*, register inference and
state machine descriptions are explained with blocking
assignments only.

Well, all of this tells me that I might not be the only one
with this model. A few other people may be in for a big surprise
sooner or later.

Regards, Jan

--
Jan Decaluwe           Easics              
Design Manager         System-on-Chip design services  
+32-16-395 600         Interleuvenlaan 86, B-3001 Leuven, Belgium



Sun, 09 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard
I think Jan's been getting rather a bad press out of this. As to how
Verilog-XL actually did, or didn't work, we've had a few opinions.
Cliff asked Cadence, and they more-or-less said that module
encapsulation worked, but wasn't guaranteed. Shalom and Paul have
found the original documentation, and the small print says that there
might be a problem with ordering in some circumstances. I can't
believe that everybody, or even most people, who wrote code between
'87 and '92 (or whenever) put in delays on assignments for synchronous
elements. In short, the correct coding style was massively unclear,
and there's lots of legacy code which may not now work, but did
previously work. The IEEE standard did nothing for this code base, and
introduced another mechanism to sort out races. It's a fact that the
standard is not backwards-compatible with what Verilog-XL actually did
at the time, whether or not Gateway actually wrote any guarantees into
the manual. End of story.

Evan



Sun, 09 Mar 2003 03:00:00 GMT  
 The horrible truth about the Verilog standard
Yes, I noticed this a long time ago, and intended to write Synopsys about it.
Probably I never did, due to the large number of tasks I am involved in.

Anyway, the Synopsys HDL Compiler for Verilog manual you refer to is
certainly not a justification because:

(1) The original manual was written before there were non-blocking assignments.
They simply never updated the manual.

(2) The manual there describes a single flip-flop, not a flip-flop chain.
More specifically, the manual tells you that if you write in a certain way, it will infer a flip-flop.

(3) The same manual, on pages 5-11 to 5-13 (at least in 1999.10 version), describes the difference
between blocking assignments and non-blocking assignments, and shows how, in certain cases at least,
blocking assignments may result in a non-serial register implementation.

(4) I have a Synopsys document which states that all storage elements, both flip-flops and latches,
should be written with non-blocking assignments.

Shalom

Quote:

> I checked recent version of the synthesis manuals from Synopsys
> (1999.05) and Exemplar (1999.01). I invite everyone to have a
> look. You will see that *still today*, register inference and
> state machine descriptions are explained with blocking
> assignments only.

--

************************************************************************

Motorola Semiconductor Israel, Ltd.     Tel #: +972 9 9522268
P.O.B. 2208, Herzlia 46120, ISRAEL      Fax #: +972 9 9522890
http://www.motorola-semi.co.il/
************************************************************************



Sun, 09 Mar 2003 03:00:00 GMT  
 