Re: those bits can have 2 values for a reason

From: Robert Rothenberg
Subject: Re: those bits can have 2 values for a reason
Date: 19:08 on 07 Feb 2007
On 07/02/07 17:52 Yossi Kreinin wrote:

> ncsim allows you to open at most 32 files at a time. Have you ever heard
> of a more retarded fixed size limit?

DOS used to have something like that. But you could raise the limit with a
configuration setting (the FILES= line in CONFIG.SYS).

> I have to point out that 32 bits doesn't mean 32 values, because, you
> know, you can play with those bits, have some of them 1 and some 0, and
> that gives you a lot of options. I bet someone at Cadence knows that.
> 
> If they convert fds to bitmasks because there is a function
> $print_to_many_files receiving a mask, then whoever invented this
> function is a moron.

I know next to nothing about ncsim, but it seems a bit of a stretch to
assume that the 32-file limit is related to the 32-bit word size.

Just for fun, let's suppose that is why there is a limit: is that really so
moronic?  Suppose you are passing around a data structure with a list of
file descriptors to output to:

* One option is to pass around an arbitrary list of fds.  You'll have to
  copy a lot of bytes, and then iterate through the list to get each file
  handle you have to output to.

* Another is to pass around a pointer to such a list, but you may still need
  to dynamically generate this list, and you'll have to dereference the
  pointer and then iterate through the list.

* Since most users won't need more than 32 files at a time, you can just
  pass a word where each bit represents a file handle.  On many
  architectures it is much faster to shift and test bits in a register
  (perhaps with a specialised bit-scan instruction or a small lookup
  table) than to walk an array; there is a sketch of this below.
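
Just to make that last option concrete, here is a minimal C sketch of the
bitmask approach, assuming a small fixed table that maps each bit position
to an open FILE pointer.  The names (log_table, write_to_mask) and the use
of GCC/Clang's __builtin_ctz are purely my own illustration, not anything
taken from ncsim:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical: slot i of this table backs bit i of a file mask. */
    static FILE *log_table[32];

    /* Write one line to every file whose bit is set in the mask. */
    static void write_to_mask(uint32_t mask, const char *line)
    {
        while (mask) {
            int i = __builtin_ctz(mask);   /* index of lowest set bit */
            if (log_table[i])
                fputs(line, log_table[i]);
            mask &= mask - 1;              /* clear that bit */
        }
    }

    int main(void)
    {
        log_table[0] = stdout;
        log_table[1] = fopen("trace.log", "w");

        uint32_t both = (1u << 0) | (1u << 1);  /* "descriptor" for both */
        write_to_mask(both, "hello from a bitmask\n");

        if (log_table[1])
            fclose(log_table[1]);
        return 0;
    }

The attraction is that a "descriptor" naming several files is just the OR
of their bits, and fanning a line out to all of them is a handful of
register operations on one word, rather than allocating and walking a list.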

I would hazard a guess that if one is potentially logging every step, then
printing to files is a bottleneck.  So weird tweaks that shave several
cycles can have a huge impact on performance.

Which is better:

* An optimisation which gives a 20% performance boost that imposes a limit
  on the software, but one which will not affect 95% of the users (and
  which most of the other 5% can work around)?

* No limits on what the software does (to make life easier for the
  remaining 5%), but it runs slower.  Perhaps slower than a competing
  product.

Given the trade-off, I wouldn't call the person who wrote such a function a moron.

