Cadence Verilog on Linux experiences 

A few weeks ago, I set up an experimental Linux (Pentium3) box (Redhat7)
with Cadence's LDV32 (for us, that's Verilog-XL, NC-Verilog, and
Signalscan waves.)  Our other compute servers consist of a variety of
Sun hardware, from an Ultra2 2/300 and an Ultra60 2/360 to a Blade-1000
2/750.  Here are my observations:

Redhat Linux 7.0 -
  lots of nice free "goodies" come standard out of the box, like Emacs,
    RCS, CVS, Ghostscript, etc.  These are all publicly available for
    Sun Solaris, but you have to download them yourself :)
  very bad NFS client performance out of the box, almost unusable for
    large files.  Performance problems were fixed by downloading a newer
    kernel from updates.redhat.com and manually forcing
      NFSv3, TCP, rsize/wsize=32768, etc.
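
For reference, forcing those options amounts to an /etc/fstab entry
like the following (the server name and paths here are hypothetical
placeholders, not from the original setup):

```
# /etc/fstab - NFSv3 over TCP with 32KB read/write transfer sizes.
# "nfsserver" and both paths are illustrative examples only.
nfsserver:/export/sim  /mnt/sim  nfs  nfsvers=3,tcp,rsize=32768,wsize=32768  0 0
```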

Ok, now for the important part... how well did LDV32 on Linux fare
against LDV32 on Solaris 2.8?  The Linux box is a Pentium3-1000MHz with
1024MB PC133 CAS2 ECC SDRAM, an ASUS P3V4X motherboard, an IBM 7200rpm
13.6GB ATA66 disk, and an Intel EtherExpress PRO/100B (82559) NIC.

  For trivially small RTL designs (< 50Kgates, memory footprint < 10MB),
  the Pentium3 held up *very* well.  In both Verilog-XL and
  NC-Verilog, the P3/1000 matched or exceeded the Sun Blade-1000 750MHz!
  Obviously, as the simulation environment grows, the P3's 256KB cache
  starts thrashing.  For an 'average ASIC' RTL design (around 500K gates,
  with RAM models, etc.), the Pentium3 slows down tremendously.

  For our RTL design simulations, the P3/1000 performed similarly to a
  Sun Ultra 60 2/360 (360MHz.)  For some sims, the P3/1000 was marginally
  faster (up to 10%); for a few sims it was marginally slower (-5%.)
  When running, the simulation database has a RAM footprint of roughly
  220MB.  The Blade-1000 750MHz was at least 50% faster, sometimes up to
  60-70% faster, depending on the simulation.  We have different
  simulations which exercise different portions of the ASIC logic.

Moving to gate-level (with SDF back-annotation), the story changes
dramatically... in all simulations that I ran, the P3/1000 nearly
matched the Blade-1000 2/750.  The RAM footprint increases to 470MB.
A gate-level sim which runs for 50 minutes on the Blade1000 runs for
55 minutes on the Linux box.  I'm guessing the extreme number of
timing checks and whatnot overpower the Blade-1000's larger L2 cache
(8MB).  I expected the Blade-1000's superior I/O performance to outrun
the meager PC133 SDRAM on the Pentium3, but surprise.

The Ultra60 2/360 and Ultra2 2/300 are left in the dust.  Since we
got the Blade1000, we no longer use the older Suns for gate-level sims.
I should point out the Ultra60 and Ultra2 are running the much
older LDV22.  Both have Solaris 2.6, which prevents them from running
LDV32.  (I don't administer the Sun systems, so I cannot control this.)
Run-time is roughly 50% longer on the Ultra60.  I attribute this to
the older LDV22, and not the Ultra60 itself.  LDV22 burns about
50 minutes compiling/elaborating prior to the actual start of
sim execution (compared to 10 minutes on LDV32-Linux.)

Oh... both the Blade1000 and the Linux box crashed on some first
attempts to run gate-level sims.  I had to remove
'+multisource_int_delays'; otherwise LDV32 would crash during
elaboration.  With LDV22, the same option works fine (well, it
generates a lot of warning messages, but no program crashes.)  For a
fair benchmark comparison, I removed this SDF option from our
environment file.

Linux (Redhat7, anyway) appears to use all available physical RAM as a
large "file cache."  (Our simulation environment is very primitive -
we launch all Verilog simulations with the "ncverilog" command.  There
are lots of potential gains from intelligent use of
ncvlog/ncelab/ncsim, but we're not set up like that yet.)  The Linux
box elaborated/annotated the design up to 50% faster than the Blade1000,
despite the fact that the Blade1000 has 5GB RAM.  I'm convinced the
Linux box 'cheats' heavily by using that file cache.  Whereas the
Linux box starts ncsim execution about 10 minutes after the ncverilog
command is issued, the Blade1000 usually starts after 15-20 minutes.
In both cases, a precompiled ncsdfc SDF.X was available beforehand
(so this tests the ncvlog->ncelab->ncsim progress rate.)  Again,
intelligent use of the ncvlog/ncelab/ncsim command-line tools (instead
of the monolithic 'ncverilog') could eliminate a lot of redundant
recompiles.  As a point of reference, the Ultra60 2/360 (with LDV22)
burns almost 50 minutes just compiling/elaborating... a compelling
reason to upgrade to LDV32!
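
As a rough, untested sketch (it needs an LDV installation, and the file
and top-level names below are hypothetical), the split flow looks like:

```shell
# Hypothetical source/module names; requires a Cadence LDV installation.
ncvlog chip.v testbench.v       # compile - only recompile changed sources
ncelab -access +rwc testbench   # elaborate the design into a snapshot
ncsim testbench                 # run the simulation on the snapshot
```

The win is incremental: after a one-line RTL edit, only ncvlog on the
changed file plus re-elaboration is needed, instead of ncverilog
redoing everything from scratch.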

Oh, the dual-CPU Blade-1000 2/750 (dual 750MHz) can run up to two
NC-Verilog jobs simultaneously.  A single instance of NC-Verilog does
not appear to benefit from dual CPUs - running just one job (load=1.00)
versus two jobs (load=2.00) resulted in nearly identical runtimes for
the same sim.

....

Overall, the Linux box really impressed me, though not without its share
of problems.

For one thing, Redhat7's lousy NFS performance initially prevented it
from running any gate-level sims whatsoever.  Loading a 20MB file over
an NFS partition stalled the Linux box for 5 minutes!  I eventually
fixed this with a kernel upgrade to 2.2.19-7.8.0.8 and the appropriate
parameters to the mount command (nfsvers=3, tcp, rsize=32768,
wsize=32768).  Despite these measures, the Redhat7 box experienced
file corruption twice over a period of 2 weeks.  The first time
was probably due to improper parameters to the mount command.  The
second time occurred soon after one NFS server was taken offline
(for an OS upgrade) and then rebooted.  The Linux box's NFS client
didn't like the service interruption.  None of our Sun Solaris
machines experienced file corruption.  As a matter of fact,
the Sun systems have been running for 2-3 years now (with several
reboots in that time, of course), with only one suspected instance of
file corruption, on an RCS-controlled file.

As a sidenote, Cadence has only tested LDV32 with Redhat 6.1,
but I had no problems getting LDV32 to run under Redhat 7 (other
than that NFS problem.)  There were a few minor quirks here and there
(like $recordvars not working under Verilog-XL 3.2), but
no fatal showstoppers.

...

To summarize:
  Well, that's it.  My opinion is that Cadence Verilog-XL and
NC-Verilog (under Linux) are ready for widespread production use.
If you install and run these tools, you won't unwittingly be
a "beta-tester" for Cadence - at least not according to what I saw
in my limited testing.  (But I didn't touch the PLI stuff at all!)
  On the other hand, the Linux operating system still has a
few rough corners.  I saw two instances of NFS file corruption
(on reads from Sun NFS servers) in two weeks.  The Sun systems
experienced one instance of write corruption in 1.5 years, and
that write corruption was RCS-related (I hope!)
  Other than the NFS problem, Linux seemed to run as advertised.
I've used Windows all my life, so I'm used to rebooting for
almost no reason whatsoever.  I only had to reboot the Linux
box once, and that's probably due to my inexperience.  (I couldn't
unmount a downed NFS volume, and it was easiest to just reboot.)
  Still, I'd only recommend Linux to people with good UNIX
backgrounds, or people who have access to professional admins
(like in a big company.)



Thu, 05 Feb 2004 01:12:03 GMT  
 Cadence Verilog on Linux experiences
Have you tried Redhat 7.1?  I've been running NC-Verilog 3.3 on 7.1 for
a while without any problems, even though 7.1 isn't officially supported
yet.  7.1 uses the newer 2.4 kernel, and NFSv3 is supported right out of
the box.

Josh




Fri, 06 Feb 2004 03:23:16 GMT  
 Cadence Verilog on Linux experiences
As a matter of fact, I have stayed away from Redhat 7.1 due to
rumored incompatibilities between Redhat 7.1 and Via's 686B southbridge
(ATA/100 IDE controller.)  Redhat 7.1 reverts to PIO-mode access if
it detects a 686B, guaranteeing no data corruption, but at a terrible
price: slow IDE access.

The ASUS P3V4X (which is in our Redhat 7 server) uses the older
Via 596B (ATA/66) controller, so I'm not sure whether I would be
affected, but I decided to steer clear of this mess until I know for
sure.  The only annoyance with Redhat7's stock kernel (even the updated
2.2.19 kernel RPM) is having to manually enable IDE DMA after each
bootup.  I could recompile the kernel, but I'd really prefer to use the
precompiled distribution.  The more things I keep "stock", the better!
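
That manual DMA enable can be scripted at boot; a minimal sketch
(assuming the disk is /dev/hda and you have root - not taken from the
original setup):

```shell
# Re-enable IDE DMA at each boot, e.g. by appending this line to
# /etc/rc.d/rc.local.  Assumes the IDE disk is /dev/hda.
/sbin/hdparm -d1 /dev/hda   # -d1 turns on the using_dma flag
```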

How is the NFS3 performance/reliability of Redhat 7.1 so far?
Did you see any NFS corruption issues?




Fri, 06 Feb 2004 12:29:59 GMT  
 Cadence Verilog on Linux experiences
I've never measured the NFS performance; I only use NFS for things like
my CVS directory.  I did just do a big backup and restore (12GB) so that
I could upgrade the disk on my laptop.  I didn't experience any
corruption problems during the backup and restore, but that's hardly
definitive.

Josh




Mon, 09 Feb 2004 00:50:44 GMT  
 Cadence Verilog on Linux experiences

    I've run FinSim's Linux version under FreeBSD
    with great results.  I've heard anecdotal reports
    of Cadence's Linux distribution also running under
    FreeBSD.

    Historically, I've used FreeBSD over Linux for reasons
    of both NFS performance and performance of large
    jobs, but Linux has since improved somewhat in
    both of these areas (both are good OSes, but FreeBSD
    has historically been noted for its stability and
    performance under stress).

    Why not give the operating system that Yahoo runs
    a try?  Check out www.freebsd.org

        -elh




Tue, 10 Feb 2004 06:13:51 GMT  
 Cadence Verilog on Linux experiences

Quote:

> A few weeks ago, I set up an experimental Linux (Pentium3) box (Redhat7)
> with Cadence's LDV32 (for us, that's Verilog-XL and NC-Verilog, and
> Signalscan waves.)  Our other compute servers consist of a variety of
> Sun hardware, from a Ultra2 2/300, Ultra60 2/360, to Blade-1000 2/750.
> Here are my observations :

[snip useful report about linux experiences, thanks]

Is simulation the only part of the flow close to being supported on
non-Sun, non-HP, non-Solaris, non-Irix platforms?
That said, the Blade experiences sound promising.  I am probably too
scared to phone up Sun for a price, though :)
I guess the bottom line is: if you were starting from scratch in the
'mainstream' of ASIC design (guessing HDL flows for mixed-mode SoC-type
design), what software would you use on what platform(s)?  Meaning, is
synthesis or a physical flow possible on Linux?  (I imagine not.)

ed chester



Sun, 07 Mar 2004 01:16:12 GMT  
 Cadence Verilog on Linux experiences

Quote:



> [snip useful report about linux experiences, thanks]

> is simulation the only part of the flow close to support on non-sun non-hp
> non-solaris non-irix platforms?

No, I use Synopsys Design Compiler under Linux. I've also done some
PrimeTime stuff under Linux.

Petter
--
________________________________________________________________________
Petter Gustad   8'h2B | (~8'h2B) - Hamlet in Verilog   http://gustad.com



Sun, 07 Mar 2004 03:17:49 GMT  
 Cadence Verilog on Linux experiences
Both Synopsys and Cadence support virtually their entire tool suites on
Linux.  I've used NC-Verilog, VCS, and Design Compiler.

For FPGAs, Synplicity is also supporting Linux.


Quote:



>> is simulation the only part of the flow close to support on non-sun
>> non-hp non-solaris non-irix platforms?

> No, I use Synopsys Design Compiler under Linux. I've also done some
> PrimeTime stuff under Linux.

> Petter



Sun, 07 Mar 2004 04:47:23 GMT  
 Cadence Verilog on Linux experiences

Quote:
> Both Synopsys and Cadence support virtually their entire tool suites on
> Linux. I've use nc_verilog, vcs, design compiler.

Synopsys does not have FPGA Compiler II for Linux.

Quote:
> For FPGAs Synplicity is also supporting Linux.

Xilinx Alliance does not yet run under Linux either. I wish it did so
I could run PAR jobs on our Linux clusters.

Petter
--
________________________________________________________________________
Petter Gustad   8'h2B | (~8'h2B) - Hamlet in Verilog   http://gustad.com



Sun, 07 Mar 2004 05:17:37 GMT  
 Cadence Verilog on Linux experiences
The Xilinx tools run under Wine:

http://www.polybus.com/xilinx_on_linux.html


Quote:

>> Both Synopsys and Cadence support virtually their entire tool suites on
>> Linux. I've use nc_verilog, vcs, design compiler.

> Synopsys does not have FPGA Compiler II for Linux.

>> For FPGAs Synplicity is also supporting Linux.

> Xilinx Alliance does not yet run under Linux either. I wish it did so I
> could run PAR jobs on our Linux clusters.

> Petter



Sun, 07 Mar 2004 06:18:53 GMT  
 Cadence Verilog on Linux experiences

Quote:
> The Xilinx tools run under wine,

Personally, I think one of the biggest benefits of running in a UNIX
environment is that I can log in from home in the evening, check the
status of my job and the system, launch Signalscan, and kill and
restart the job if required.  I found this very difficult with Windows,
since there is no "ps" or "kill" command.
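
A minimal sketch of that monitor-and-kill workflow, with "sleep"
standing in for a long PAR or simulation job:

```shell
# "sleep 300" stands in for a long PAR or simulation job.
sleep 300 &
job=$!
ps -p "$job" -o pid=,comm=       # check that the job is still running
kill "$job"                      # stop it
wait "$job" 2>/dev/null || true  # reap the dead process
kill -0 "$job" 2>/dev/null || echo "job $job is gone"
```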

For the same reason, I think running under Wine does not give you very
much.  I have tried to log in from home into a Windows 2000 server to
start a long PAR job.  The day after, I saw that the process was still
there, but not using any CPU.  I had no idea what it was doing (I guess
if you know more about Windows than I do, you might be able to figure it
out), and my only option was to kill the process and restart.  Xilinx
par does not support multi-node runs under Windows either, only on UNIX
(i.e. Solaris or HP-UX).  I run par jobs on the Suns, but the
price/performance ratio of our Athlon (or Intel) based systems is
so much higher.  A Blade 1000, 900MHz, 1GB RAM costs US$20,000 here in
Norway.  I can get 10 Athlons with better performance for that price.

I think Windows is useless as an EDA platform, since a reboot is part
of the installation procedure.  I can have Verilog simulations running
for weeks, even months.  Then if you want to install e.g. Adobe Acrobat
on the machine, you have to reboot as part of the installation.  I
can't see how they can even call it a server OS...

Petter
--
________________________________________________________________________
Petter Gustad   8'h2B | (~8'h2B) - Hamlet in Verilog   http://gustad.com



Sun, 07 Mar 2004 14:37:33 GMT  
 Cadence Verilog on Linux experiences

Quote:
> Xilinx Alliance does not yet run under Linux either. I wish it did so
> I could run PAR jobs on our Linux clusters.

I hesitate to say "never", but the few contacts I have had are
quite insistent that Xilinx is specifically *not* interested
in supporting Linux.

They only barely support Solaris and HP-UX as it is, and I'm
sure they'd ditch those if they didn't have some institutional
customers that use those platforms.

In a word, you can ask 'em, but I think you'll find it's hopeless.
--
Steve Williams                "The woods are lovely, dark and deep.
steve at icarus.com           But I have promises to keep,
steve at picturel.com         and lines to code before I sleep,
http://www.picturel.com       And lines to code before I sleep."



Mon, 08 Mar 2004 05:10:44 GMT  
 Cadence Verilog on Linux experiences
When you run par under wine it acts just like a Unix app. I use the same
scripts as I do under Solaris. I have one client that uses large server
farms of Linux machines to do place and route on hundreds of FPGAs at
once. I agree with you that any form of Windows is useless as a CAE
environment, but Windows apps under wine are not running on Windows; they
are running on Linux, with all that entails: ps works, kill works, you can
look at the intermediate log files with emacs, and you can secure-shell in
and run remotely.
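A wrapper along these lines lets existing Unix scripts call the Windows binary transparently. The binary name `par.exe` and the par arguments below are placeholders, not verified flags; check your own installation:

```shell
#!/bin/sh
# Sketch of wrapping a Windows EDA binary under wine so that existing
# Unix scripts can invoke it like a native tool. "par.exe" and the
# filenames below are placeholders.

run_par() {
    if command -v wine >/dev/null 2>&1; then
        # The wine process is an ordinary Linux process: it appears in
        # ps, responds to kill, and its log files are plain files you
        # can tail or open in emacs.
        wine par.exe "$@"
    else
        echo "wine not installed; skipping par run" >&2
        return 1
    fi
}

# Example call with placeholder filenames; "|| true" keeps the script
# going in this sketch even when wine or par.exe is absent.
run_par -w design.ncd routed.ncd design.pcf || true
```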

There is some interest in supporting Linux at the developer level at
Xilinx but upper management hasn't got the message yet. I encourage
everyone to nag their Xilinx sales reps and FAEs about it. Eventually
Xilinx will get the message.


Quote:

>> The Xilinx tools run under wine,

> Personally I think one of the biggest benefits of running in a UNIX
> environment is that I can log in from home in the evening, check the
> status of my job and the system, launch Signalscan, and kill and restart
> the job if required. I found this very difficult with Windows, since
> there are no "ps" and "kill" commands.

> For the same reason, I think running under wine does not give you very
> much. I have tried to log in from home to a Windows 2000 server to
> start a long PAR job. The day after, I saw that the process was still
> there, but not using any CPU. I had no idea what it was doing (if you
> know more about Windows than I do, you might be able to figure it
> out), and my only option was to kill the process and restart. Xilinx par
> does not support multinode runs under Windows either, only on UNIX (i.e.
> Solaris or HP-UX). I run par jobs on the Suns, but the price/performance
> ratio for our Athlon (or Intel) based systems is so much higher. A
> Blade 1000, 900MHz, 1GB RAM costs US$20,000 here in Norway; I can get
> 10 Athlons with better performance for that price.

> I think Windows is useless as an EDA platform, since a reboot is part
> of the installation procedure. I can have verilog simulations running
> for weeks, even months. Then if you want to install, e.g., Adobe Acrobat
> on the machine, you have to reboot as part of the installation. I
> can't see how they can even call it a server OS...

> Petter



Mon, 08 Mar 2004 11:12:27 GMT  
 Cadence Verilog on Linux experiences

Quote:

> > Xilinx Alliance does not yet run under Linux either. I wish it did so
> > I could run PAR jobs on our Linux clusters.

> I hesitate to say "never", but the few contacts I have had are
> quite insistant that Xilinx is specifically *not* interested
> in supporting Linux.

Hmm. I've heard that it's just a question of time. I've also heard
rumors that developers at Xilinx use Linux as a development platform,
since it's more stable than Windows and faster than Solaris (they
probably have some older Sun hardware).

Petter
--
________________________________________________________________________
Petter Gustad   8'h2B | (~8'h2B) - Hamlet in Verilog   http://gustad.com



Mon, 08 Mar 2004 12:40:31 GMT  
 Cadence Verilog on Linux experiences

Quote:
> When you run par under wine it acts just like a Unix app. I use the same

You are of course right! I must have been thinking of some other
emulator which simply starts a Windows environment. I should know,
since I use Corel Draw under Linux (which is a wine application) to
make technical illustrations for my documents.

Thank you for straightening my mind :-) I'll certainly try to get
Alliance to run under wine.

Quote:
> scripts as I do under solaris. I have one client that uses large server
> farms of linux machines to do place and routes on hundreds of FPGAs at

They aren't running par in multinode mode under wine are they?

Petter
--
________________________________________________________________________
Petter Gustad   8'h2B | (~8'h2B) - Hamlet in Verilog   http://gustad.com



Mon, 08 Mar 2004 15:17:44 GMT  
