Recent article on design verification 
 Recent article on design verification

Tonight I was thumbing through an article on design verification
in "Integrated System Design" by Saunders and Trivedi and have
a few comments.

1) First of all, this is the first time I've heard a DV environment
referred to as a "testbench". I've been doing DV for 15 years,
focussing exclusively on DV for the past 10. I usually call it
a design verification environment or a test suite for short.

2) They "recommend highly" that the "testbench" be written in the
same language as the "DUT". Whoa. Has anyone ever tried to write
an expert system in verilog? Depending on the circuit complexity,
that's exactly what you'll need if you have a fair number of
stimulus constraints. Verilog is not a high level language. It's
perhaps a 2.5 generation language, somewhere between assembly and
C.

The front end of the environment requires stimulus generation, which
can be very tricky if you have resource constraints in your design
( and what *real* design doesn't ). This can be as simple as a queue,
or as complex as an expert system. The penalty for coding this in
HDL for complex patterns is twofold: you are forced to code in a crude
language, introducing numerous bugs in the process; and you are forced
to debug your stimulus generator while running a live simulator ( by
"live", let's say one that eats a licence ).

The back end of the environment requires results generation and matching.
This too can be tricky if your design is pretty complex. The reference
model should also be as abstract as possible. I would advocate here
separating the functional aspects of the reference model from the performance
aspects. A good functional reference model can be used during stimulus generation
to reduce non-determinism, a source of added complexity and bugs.

The reference model, almost by definition, must be less complex than the
actual design; therefore, some complexity is shifted to the matcher,
which must decide what constitutes a good fit between reference and actual
results.

I would highly recommend using the highest level language available. Personally,
I use a great deal of perl5, and use verilog only for primitive transactors
and monitors. I can debug the environment quickly, because I don't need to
replay the simulation.

3) They brushed over the need to perform random tests. I advocate highly the
use of random tests for two reasons: Statistically, the probability is
magnitudes greater that you will forget to test something than it is to
miss something through random testing. The use of weighting is absolutely
essential to test efficiency. A good weighting scheme, such as a markov
chain, is like having a compression algorithm on your test generator.
The target for uniform distribution is the design, not the stimulus, and
it requires a thorough knowledge of the implementation. Okay, it's an
introductory treatise, but I thought it worth mentioning.
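
By a weighting scheme I mean something like this first-order markov chain,
sketched in perl5 ( the ops and weights here are invented; real ones come from
knowledge of the implementation ):

    my %next = (
        idle  => { idle => 2, read => 6, write => 2 },
        read  => { idle => 1, read => 7, write => 2 },   # bias toward read bursts
        write => { idle => 2, read => 3, write => 5 },
    );

    sub pick_weighted {
        my ($weights) = @_;
        my $total = 0;
        $total += $_ for values %$weights;
        my $r = rand($total);
        for my $op ( keys %$weights ) {
            return $op if ( $r -= $weights->{$op} ) < 0;
        }
    }

    my $op = 'idle';
    for ( 1 .. 30 ) {
        $op = pick_weighted( $next{$op} );   # next op depends on the previous op
        print "$op\n";
    }

The transition weights are where your knowledge of the implementation goes.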

4) I hope they mention next issue about the need to ascend through scale
during verification. Verify the little things before verifying big things.
A bottom-up approach is absolutely essential in DV. The system level is
an extremely bad place to be finding bugs. The turnaround time can be
days or even weeks as opposed to minutes.

5) I also hope they mention that DV is a big chunk of the time to market
for any project, and that creative ways of performing concurrent DV
( such as ascension through scale ) can save lots of time. The cost
tradeoff is nearly zero - either build good tools at the small scale,
or spend lots of time doing system simulation and finding bugs.

Let me conclude by saying this: I thank the authors for writing this
article on a badly neglected subject. Not too long ago Scientific
American had an article on the "software reliability crisis", which
I think applies equally to complex silicon ( the P5 FDIV bug springs
immediately to mind ). I hope it will be a continuing series. My
intent with this post is to bring out more of the complexities and
subtle considerations.

                                        John Williams



Tue, 23 Sep 1997 03:00:00 GMT  
 Recent article on design verification
: Tonight I was thumbing through an article on design verification
: in "Integrated System Design" by Saunders and Trivedi and have
: a few comments.

: 1) First of all, this is the first time I've heard a DV environment
: referred to as a "testbench". I've been doing DV for 15 years,
: focussing exclusively on DV for the past 10. I usually call it
: a design verification environment or a test suite for short.

I have called it a "testbench" for some time.  I think the VHDL
community has adopted this term too.

: 2) They "recommend highly" that the "testbench" be written in the
: same language as the "DUT". Whoa. Has anyone ever tried to write
: an expert system in verilog? Depending on the circuit complexity,
: that's exactly what you'll need if you have a fair number of
: stimulus constraints. Verilog is not a high level language. It's
: perhaps a 2.5 generation language, somewhere between assembly and
: C.

Certainly there are things that are hard to write in Verilog.  But
for those cases where the language is usable, it really helps
productivity to use one language for both the testbench and the DUT.
Otherwise the designer has to switch back and forth.

: 4) I hope they mention next issue about the need to ascend through scale
: during verification. Verify the little things before verifying big things.
: A bottom-up approach is absolutely essential in DV. The system level is
: an extremely bad place to be finding bugs. The turnaround time can be
: days or even weeks as opposed to minutes.

Yes.

: 5) I also hope they mention that DV is a big chunk of the time to market
: for any project, and that creative ways of performing concurrent DV
: ( such as ascension through scale ) can save lots of time. The cost
: tradeoff is nearly zero - either build good tools at the small scale,
: or spend lots of time doing system simulation and finding bugs.

"

        Don Reid                HP - ICBD - Product Design - 2UHH32
        715-2726 (telnet)       1050 NE Circle Blvd.



Sat, 27 Sep 1997 03:00:00 GMT  
 Recent article on design verification

Quote:

> Let me conclude by saying this: I thank the authors for writing this
> article on a badly neglected subject. Not too long ago Scientific
> American had an article on the "software reliability crisis", which
> I think applies equally to complex silicon ( the P5 FDIV bug springs
> immediately to mind ). I hope it will be a continuing series. My
> intent with this post is to bring out more of the complexities and
> subtle considerations.

>                                         John Williams

You know how it is being a DV type. Think of a design team as a football team,
think of the designers as the quarterbacks: yeah, they do some neat stuff, but
they do need the rest of the squad.

As for your comments on environments etc., I agree wholeheartedly. What the
hell is a "testbench"? I first started hearing that term from LMC when
describing their source-mode PCI model environment.

The methodology you describe is precisely what we have implemented for our
environment:

1) Directed testing at the subsystem (chip level)

2) Random testing at the same level

3) Random testing at the system level

4) Gate level testing (not nearly the same number of cycles) in our
   random environment for the system, mostly to make sure there are no
   interconnect funnies within the chips

5) "Tape Out" metrics are a combination of gut-feel, bug-rate and of course
   management pressure to tape out sooner than you know you should



Mon, 29 Sep 1997 03:00:00 GMT  
 Recent article on design verification

Quote:

>Tonight I was thumbing through an article on design verification
>in "Integrated System Design" by Saunders and Trivedi and have
>a few comments.

>1) First of all, this is the first time I've heard a DV environment
>referred to as a "testbench". I've been doing DV for 15 years,
>focussing exclusively on DV for the past 10. I usually call it
>a design verification environment or a test suite for short.

I believe "testbench" is a VHDL term.

Quote:
>2) They "recommend highly" that the "testbench" be written in the
>same language as the "DUT". Whoa. Has anyone ever tried to write
>an expert system in verilog? [...] Verilog is not a high level language. It's
>perhaps a 2.5 generation language, somewhere between assembly and
>C.

I agree, but would have left out the last sentence.  It is not on the same
"plane" as asm<->C.  It is not a complete language.

[...]

Quote:
>I would highly recommend using the highest level language available. Personally,
>I use a great deal of perl5, and use verilog only for primitive transactors
>and monitors. I can debug the environment quickly, because I don't need to
>replay the simulation.

I thought I was the only one!  I too confess to using perl5 for high-level test
drivers/generators.  I have socketed perl5 to Verilog and it works great.
If only perl5 was concurrent (though I have emulated that).

I would have liked to be able to use Verilog for the full test environment,
but it lacks high-level constructs that I once, naively, thought unnecessary in
an HDL.  Perl5 brings classes, OO-ness, and crazy super-high-level constructs
that are impossible to read, but easy to write and are pretty fast (as fast
as Verilog-XL).  And, it is trivial to model large memory and caches in perl5.
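
For instance, a sparse memory model is a couple of lines per method ( the word
width and the zero default here are assumptions, but this is the whole trick ):

    package SparseMem;                    # a perl5 "class" for a sparse memory
    sub new  { my ($class) = @_; bless { mem => {} }, $class }
    sub poke { my ($self, $addr, $data) = @_; $self->{mem}{$addr} = $data }
    sub peek { my ($self, $addr) = @_;
               exists $self->{mem}{$addr} ? $self->{mem}{$addr} : 0 }
    package main;

    my $m = SparseMem->new;
    $m->poke( 0xfffffff0, 0xdeadbeef );
    printf "%08x\n", $m->peek(0xfffffff0);   # deadbeef
    printf "%08x\n", $m->peek(0x00001000);   # 00000000, never written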

Quote:
>3) They brushed over the need to perform random tests. I advocate highly the
>use of random tests for two reasons: Statistically, the probability is
>magnitudes greater that you will forget to test something than it is to
>miss something through random testing. The use of weighting is absolutely
>essential to test efficiency. A good weighting scheme, such as a markov
>chain, is like having a compression algorithm on your test generator.
>The target for uniform distribution is the design, not the stimulus, and
>it requires a thorough knowledge of the implementation. Okay, it's an
>introductory treatise, but I thought it worth mentioning.

I would take this a step further and say that "noise" generators are a crucial
part of any test suite.  Often, modelers design to spec but forget to test
what *isn't* spec'ed.  Error conditions then break the hardware.  Random
noise flushes a lot of this out, as well as the parts of the spec that the
modeler overlooked or forgot.  I would run tests even without the markov chain
optimization to flush out the first set of forgotten reactions.

[Other points are also well-taken and well-said.]

--
Elliot Mednick                                      P.O. Box 150
Wellspring Solutions, Inc.                          Sutton, MA.  01590
                                                    (508) 865-7271



Tue, 30 Sep 1997 03:00:00 GMT  
 Recent article on design verification
Quote:

>Tonight I was thumbing through an article on design verification
>in "Integrated System Design" by Saunders and Trivedi and have
>a few comments.

>1) First of all, this is the first time I've heard a DV environment
>referred to as a "testbench". I've been doing DV for 15 years,
>focussing exclusively on DV for the past 10. I usually call it
>a design verification environment or a test suite for short.

Me too, although it has been called "a simulation environment" also.
A rose by any other name .... ;-)
Quote:

>2) They "recommend highly" that the "testbench" be written in the
>same language as the "DUT". Whoa. Has anyone ever tried to write
>an expert system in verilog? Depending on the circuit complexity,
>that's exactly what you'll need if you have a fair number of
>stimulus constraints. Verilog is not a high level language. It's
>perhaps a 2.5 generation language, somewhere between assembly and
>C.

I just finished verifying a 6.5 million device IC called the PPC604+ and
was previously verifying the PPC604. We weren't enlightened
enough to use Verilog for our design environment (that's a different
story); we used a combination of C and the HDL for our testbench. The
HDL portions were used to model a system environment, the other portions
of the environment (reference model, random test generation, test
loading, test checking, model checking) were done in C or C++. IMHO,
even putting the system in the HDL was a mistake. It was difficult to
maintain and more difficult to understand. The capabilities were limited
by the HDL and it was almost impossible to enhance from project to
project. If I ever have it to do over, the system environment will
just be an HDL shell and everything else will be done in C/C++. Everyone
should learn these languages anyway.

Quote:
>3) They brushed over the need to perform random tests. I advocate highly the
>use of random tests for two reasons: Statistically, the probability is
>magnitudes greater that you will forget to test something than it is to
>miss something through random testing. The use of weighting is absolutely
>essential to test efficiency. A good weighting scheme, such as a markov
>chain, is like having a compression algorithm on your test generator.
>The target for uniform distribution is the design, not the stimulus, and
>it requires a thorough knowledge of the implementation. Okay, it's an
>introductory treatise, but I thought it worth mentioning.

We rely highly on random testing. It won't produce first pass functional
silicon by itself for complex projects. I ran 6 billion random cycles
on my last model and I am sure I will have bugs when silicon comes back.
Yes I said 6 billion! ;-)
Quote:
>4) I hope they mention next issue about the need to ascend through scale
>during verification. Verify the little things before verifying big things.
>A bottom-up approach is absolutely essential in DV. The system level is
>an extremely bad place to be finding bugs. The turnaround time can be
>days or even weeks as opposed to minutes.

I agree but disagree. IMHO the libraries of devices must be known to be
correct. This is a good job for formal verification (I just love using
buzz words). We do a much better turnaround time than this, which has a lot
to do with using a different methodology than event-driven simulation. We
simulate at about 30 to 60 clocks per second on an RTL model using normal
RS6000 CPUs. This lets us simulate more at the system level and less at
the unit level.
Quote:
>5) I also hope they mention that DV is a big chunk of the time to market
>for any project, and that creative ways of performing concurrent DV
>( such as ascension through scale ) can save lots of time. The cost
>tradeoff is nearly zero - either build good tools at the small scale,
>or spend lots of time doing system simulation and finding bugs.

DV (if we include timing and test, not just logic) is the design project.
The design entry is only a small portion of the effort.
Quote:
>Let me conclude by saying this: I thank the authors for writing this
>article on a badly neglected subject. Not too long ago Scientific
>American had an article on the "software reliability crisis", which
>I think applies equally to complex silicon ( the P5 FDIV bug springs
>immediately to mind ). I hope it will be a continuing series. My
>intent with this post is to bring out more of the complexities and
>subtle considerations.

>                                    John Williams

Thanks for the post - DV doesn't get the attention it needs until a bug
like the Pentium FP bug shows up.


Wed, 01 Oct 1997 03:00:00 GMT  
 Recent article on design verification
: >I would highly recommend using the highest level language available. Personally,
: >I use a great deal of perl5, and use verilog only for primitive transactors
: >and monitors. I can debug the environment quickly, because I don't need to
: >replay the simulation.

: I thought I was the only one!  I too confess to using perl5 for high-level test
: drivers/generators.  I have socketed perl5 to Verilog and it works great.
: If only perl5 was concurrent (though I have emulated that).

: I would have liked to be able to use Verilog for the full test environment,
: but it lacks high-level constructs that I once, naively, thought unnecessary in
: an HDL.  Perl5 brings classes, OO-ness, and crazy super-high-level constructs
: that are impossible to read, but easy to write and are pretty fast (as fast
: as Verilog-XL).  And, it is trivial to model large memory and caches in perl5.

I agree with Elliot and John that Verilog HDL isn't suitable for many
(most) verification jobs.  I'd like to start a discussion on the
relative merits of Perl vs. C for these tasks, as well as how these
programs are used.

1. Do you use Perl or C/C++ for specialized code?  Why?  Familiarity
   with one vs. the other?  Learning curve?  Not having to compile
   perl scripts?  Speed/memory?

2. What types of programs are these?  Stimulus generators?  Result
   comparators?  Does it take longer to write the custom code than to
   design the hardware?

3. Do the programs run in lock step with the simulator, or are they
   pre/post processors?

4. How do you "connect" your programs to the simulator?
        PLI interface?
        Sockets (fork() or connect()? )

I'll start off with my own answers:

1. Primarily C and C++.  Some perl for scripts before and after a
   simulation (massaging data formats).  We felt that speed was
   important, and that a compiled language provided it.  Also, most of
   the people writing software were/are more proficient in C/C++.

2. Stimulus generators and result comparators are the biggest chunks.

3. Both.

4. PLI.  Big disadvantage is having to recompile the Verilog-XL binary
   every time we make a change.  Big advantage is not having to worry
   about large numbers of processes hanging around, especially if the
   simulation dies unexpectedly.

   However, keeping the binary "cleaner" by running most of the
   ancillary programs as separate processes can simplify software
   debugging, be it your problem or the simulation vendor's.

I look forward to everyone's responses.

        Paul Tobin



Fri, 03 Oct 1997 03:00:00 GMT  
 Recent article on design verification

: : >I would highly recommend using the highest level language available. Personally,
: : >I use a great deal of perl5, and use verilog only for primitive transactors
: : >and monitors. I can debug the environment quickly, because I don't need to
: : >replay the simulation.

I've successfully used Verilog to code drivers/monitors for testbenches.

Of course I developed my own techniques for simulating features found in
higher-level languages like C and C++.  Now let's give Verilog a little
credit.  It has some similarities to an object-oriented language like C++.
Think about it!  A C++ class is used to encapsulate the data and methods
associated with a particular object.  A Verilog module can be used
to do the same thing.  A Verilog module is instantiated just like a
C++ class.  For example, I've created a Verilog module/class to do
pointer tracking on a SONET data stream.  I've got modules that handle
multi-dimensional arrays, data structures, FIFOs, etc.

I like having the monitors written in HDL because this gives me better
control when I'm trying to write automated regression tests.  Post-
processing the simulation results using a C program doesn't help me
locate the cause of a problem.  A testbench that can detect a problem
while the simulation is running (and notify me, or possibly stop so
I can do some debugging) is of greater value and saves time.

The key to writing productive testbenches in Verilog is to get out
of the "hardware mindset" and start viewing a Verilog module as a
C++ object.  For those skeptics out there who think their problem
is more complicated than mine, consider writing a monitor for a SONET
STS-12 data stream.  This stream contains 12 STS-1 data streams, each
containing a pointer to a payload. Each payload can contain 28 pointers
to another payload type, which can contain a DS1 data stream which can
contain a data link channel which can... (you get the picture).
I've got a monitor that can do this.  I think it was easier to write
it in Verilog than C/C++.

I recently gave a class at Alcatel demonstrating some of the techniques
I've developed over the years.  If there is any interest I could
compose something for this forum.

Chris

================================================================
       Chris Starr                             P.O. Box 68185    
  ASIC EDA Consultant                        Raleigh, NC  27613  

        +---------------------------------------------+          
        +       System-Level Verification Tools       +          
        +   ASIC Modeling, Verilog Drivers/Monitors   +          
================================================================



Fri, 03 Oct 1997 03:00:00 GMT  
 Recent article on design verification

Quote:


[...snip...]
>I agree with Elliot and John that Verilog HDL isn't suitable for many
>(most) verification jobs.  I'd like to start a discussion on the
>relative merits of Perl vs. C for these tasks, as well as how these
>programs are used.

I also agree!
Quote:

>1. Do you use Perl or C/C++ for specialized code?  Why?  Familiarity
>   with one vs. the other?  Learning curve?  Not having to compile
>   perl scripts?  Speed/memory?

Yes, all the above plus some less obvious reasons, such as the fact that we
may actually be linking together a bunch of different simulation
engines.
Quote:

>2. What types of programs are these?  Stimulus generators?  Result
>   comparators?  Does it take longer to write the custom code than to
>   design the hardware?

Stimulus generators, Result comparators, cycle based simulators...
Quote:

>3. Do the programs run in lock step with the simulator, or are they
>   pre/post processors?

Both.

>4. How do you "connect" your programs to the simulator?
>    PLI interface?

Some programs are PLI and use socket datagrams, fork, exec... Others
exec verilog and wait for the simulation to finish.

Regards,
Mark
--

/* MOTOROLA   Strategic Semiconductor Operation, IC Technology Laboratory */
/* Mail Stop 63, 1500 Gateway Boulevard, Boynton Beach, FL 33436-8292 USA */
/* phone: 1-407-739-2379, fax: 1-407-739-3904    ...just speaking for me! */



Mon, 06 Oct 1997 03:00:00 GMT  
 Recent article on design verification

Quote:

>I've successfully used Verilog to code drivers/monitors for testbenches.
>[stuff deleted]
>I recently gave a class at Alcatel demonstrating some of the techniques
>I've developed over the years.  If there is any interest I could
>compose something for this forum.

>Chris

>================================================================
>       Chris Starr                             P.O. Box 68185    
>  ASIC EDA Consultant                        Raleigh, NC  27613  

>        +---------------------------------------------+          
>        +       System-Level Verification Tools       +          
>        +   ASIC Modeling, Verilog Drivers/Monitors   +          
>================================================================

Please do pass along more of what you have done in the area of design
verification.  This would be a great topic for the next IVC as a paper or
tutorial as well.  I look forward to hearing more...

--
-- Stuart Sutherland                       Sutherland HDL Consulting --

-- phone (303) 682-8864                    Longmont, CO  80503       --
-- FAX:  (303) 682-8827
--                                                                   --
-- Training & consulting for Verilog HDL, PLI, Synthesis and tools,  --  
-- Publisher of popular "Verilog HDL 2.0 Language Reference Guide"   --
-----------------------------------------------------------------------



Thu, 09 Oct 1997 03:00:00 GMT  
 Recent article on design verification
If I were to summarize the task of design verification, I would say it
was "observation". The value added in design verification is proportional
to how much the environment allows you to see what the design does under
various conditions. Complexity is the currency when evaluating different
verification strategies. In general you want the simplest environment
possible, but one where bugs can be found and fixes checked quickly.

I have used many approaches - verilog execution models, PLI model substrates
( memory and state are modeled in PLI to enable fast global checking ), and
Perl. Usually I end up with a hybrid of the three:

1) Monitors: A good protocol monitor can save you a lot of aggravation by the
time you reach system testing. I am convinced that one should not expect to
find bugs at the system level. A good monitor will assure you that the blocks
of the design will integrate cleanly. These are typically written in verilog if
the protocol is simple, or in perl if the protocol is complex.

2) Test Generators: It's awfully hard to beat Perl. If you don't jump immediately
to Perl, it's usually because the function is so simple that feeding the test to the
design via PLI or some other mechanism ( like $readmemh ) is more complicated.
I've used yacc for this before, but aside from looking a little prettier and being
a little more syntactically forgiving, it offers no real advantages.
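
A sketch of how small the Perl end can be ( the word layout -- 8-bit opcode,
24-bit data -- is made up here; the testbench just does a $readmemh on the
resulting file ):

    open( STIM, "> stim.hex" ) or die "stim.hex: $!";
    for my $i ( 0 .. 255 ) {
        my $opcode = int( rand(4) );          # 0..3: idle/read/write/flush
        my $data   = int( rand( 1 << 24 ) );
        printf STIM "%08x\n", ( $opcode << 24 ) | $data;
    }
    close STIM;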

3) Reference Models: Big can of worms here. I would recommend splitting the behavioral
model into separate functional and performance components. For example, I'm using
a functional model written in C ( written by and used by software engineers ), a
performance model written in perl5 ( that's used to qualify that test patterns meet
certain constraints ), and various performance checker scripts written in perl
that check for particular performance critical properties ( cache fills, interrupt
latencies, etc. ). I would say that the more you can partition the reference model
in general, the better. What you *DO* want to avoid is a huge model that's more
complex than the real thing. Lots of little models is preferable where possible.
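
One of those checker scripts, boiled down ( the log format and the 40 cycle
budget are invented for the example ):

    my %start;                               # fill id -> start cycle
    my $BUDGET = 40;

    while (<>) {                             # monitor log on stdin
        if    ( /^(\d+)\s+FILL_START\s+(\S+)/ ) { $start{$2} = $1 }
        elsif ( /^(\d+)\s+FILL_DONE\s+(\S+)/  ) {
            my $latency = $1 - delete $start{$2};
            print "fill $2 took $latency cycles -- over budget\n"
                if $latency > $BUDGET;
        }
    }
    print "fill $_ never completed\n" for keys %start;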

4) Automation: Perl, Perl, Perl. I make it a point never to do or look at the same
thing twice, and to look at everything at least once.

                                        John Williams



Sat, 11 Oct 1997 03:00:00 GMT  
 Recent article on design verification
I am currently involved in design verification of the PowerPC
microprocessors. In my frame of reference, the current methods
and vendor supplied simulation tools are not sufficient for
the complexity of designs being considered.

Currently the "state of the art" methodologies consist of
random test generation with a mix of "best guess" test suites.
In the random test method, a set of instructions is randomly
generated and then run against a reference model to generate
a set of results. The instructions are then run against the
model and the results are compared. The implementations of this
are as varied as there are implementors, but the methods fall in this
category. Some implement the generation, reference model and comparison
as part of the simulation model, some implement them separately, and for
some it is a mix. The metric for being done with random testing is not
satisfying; it is usually so many of something ... 1 billion
instructions, 1 billion cycles, 1 billion tests, but you're never sure
if you covered that particular state transition or if you divided
some astronomically large number by some small number in your floating
point. Can you cross your fingers? By the way, random testing requires
more CPU time than you will ever have for any large problem, especially
with a simulator like Verilog.
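
The outer loop of such a setup is simple enough to sketch ( the program names
and state-dump files below are placeholders; every site wires this up to its
own generator, reference model and simulator ):

    for my $seed ( 1 .. 1000 ) {
        system("gen_random -seed $seed -o test_$seed.hex") == 0 or die "gen failed";
        system("run_ref test_$seed.hex > ref_$seed.state") == 0 or die "ref failed";
        system("run_rtl test_$seed.hex > rtl_$seed.state") == 0 or die "rtl failed";

        if ( system("cmp -s ref_$seed.state rtl_$seed.state") != 0 ) {
            print "MISCOMPARE on seed $seed -- keeping the test case\n";
            next;                            # leave the files around for debug
        }
        unlink "test_$seed.hex", "ref_$seed.state", "rtl_$seed.state";
    }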

So most of the time we verification types make our best guess at the
"boundary conditions" for particular problems and then write specific
tests for them. Things like a miss in the cache while my MMU is doing a
tablewalk and another processor is hitting on that block in the same
cycle. This is also not satisfying because you know you missed something.
The fingers on the other hand are now crossed. For any large problem
these tests can take many people and many hours to write.

On the one hand, you have a methodology (random) that is CPU intensive but
requires fewer people, and on the other hand you have a method which
requires many people and fewer CPU resources. Most DV folk use both
methods.

My problems with the current vendor simulation tools are that they are just
too slow for large problems using random instruction sequences of any
significant length. I require a minimum of 30 instructions per second
to be executed by my model. On a reasonable number of machines that will
give me 200 million random cycles per day. That is what is required for
a complex design. This can be done by allowing the simulator to have
only two logic values - 0 or 1 and by levelizing the logic (no feedback).

Another significant problem is the ability of a simulator to give me
information about what parts of the design have been covered (no, I don't
mean stuck-at fault grading). I mean functional coverage analysis. In
software they define a hierarchy of coverage metrics. This mostly boils
down to path coverage or some subset of it. You see, if I have a system  
that is randomly generating tests that cover the same functions over and
over I probably should re-bias the generator. Most people measure this
by the number of fails - no fails means re-bias. This directs the testing  
and is utilized in the later stages of verification.
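
The bookkeeping side of this is easy - the hard part is choosing the events.
A crude sketch in perl ( the event names and log format are assumptions ):

    my @events = qw( cache_fill cache_evict tlb_miss snoop_hit int_taken );
    my %hits   = map { $_ => 0 } @events;

    while (<>) {                             # monitor logs from the regression
        $hits{$1}++ if /^COVER\s+(\S+)/ && exists $hits{$1};
    }

    for my $e ( sort { $hits{$a} <=> $hits{$b} } @events ) {
        printf "%-12s %8d%s\n", $e, $hits{$e},
               $hits{$e} ? "" : "   <-- never hit, re-bias the generator";
    }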

The advent of formal verification may prove to alleviate many of the  
simulation problems by allowing specification languages, model checkers  
and theorem provers to perform the verification of an implementation.



Sun, 12 Oct 1997 03:00:00 GMT  
 Recent article on design verification

Quote:

> 2) Test Generators: It's awfully hard to beat Perl. If you don't

(etc)

Quote:
> 4) Automation: Perl, Perl, Perl. I make it a point never to do or
> look at the same thing twice, and to look at everything at least
> once.

(etc)

I'm just learning Perl, and fairly new to Verilog.  I'm confused about
how you join the two together.  Some things, like parsing verilog
log files with Perl, are obvious.  Other things, like stimulus,
I'm not sure I follow exactly how you are doing this.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-



Sun, 12 Oct 1997 03:00:00 GMT  
 Recent article on design verification

Quote:

> > I just learning Perl, and fairly new to Verilog.  I'm confused about
> > how you join the two together.  Some things, like parsing verilog
> > log files with Perl, are obvious.  Other things, like stimulus,
> > I'm not sure I follow exactly how you are doing this.

> In my case I generate random instruction streams that obey certain
> resource constraints of the processor.

Actually I think the thing he was getting at was 'how do you get
perl to interact with verilog?'

There are a couple of ways. The simple one is to use perl to create stimulus files
that are read by Verilog. The more interactive way is to use perl to
open tcp sockets to talk to a Verilog process (via PLI routines that read/write
the sockets).
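
The perl end of the socket approach can be as small as this ( the port number
and the one-line ASCII protocol are made up, and the PLI routine on the
verilog side is not shown ):

    use IO::Socket::INET;

    my $server = IO::Socket::INET->new(
        LocalPort => 5150,
        Proto     => 'tcp',
        Listen    => 1,
        Reuse     => 1,
    ) or die "cannot listen: $!";

    my $sim = $server->accept();             # wait for the simulation to attach
    $sim->autoflush(1);

    while ( my $req = <$sim> ) {
        chomp $req;
        last if $req eq 'BYE';
        if ( $req eq 'GET_STIM' ) {          # simulator asks for a transaction
            printf $sim "WRITE %08x %08x\n",
                   int( rand(0xffffffff) ), int( rand(0xffffffff) );
        }
        elsif ( $req =~ /^RESULT\s+(\S+)/ ) { # simulator reports a result
            print "got result $1\n";          # compare against reference model here
        }
    }

The same loop works as a client connecting out to the simulator; which end
listens is just a matter of taste.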

        Paul Campbell



Fri, 17 Oct 1997 03:00:00 GMT  
 