On Wed, 19 May 2010 13:43:06 +0300
Elmo Todurov <todurov_AT_gmail.com> wrote:
> On 05/19/2010 01:32 PM, pancake wrote:
> > i would probably even improve the heap usage of this .c, but it's
> > better solution than the shellscript one IMHO.
>
> How?
>
> Elmo Todurov
>
1)
In Unix there is a PATH_MAX constant (GNU actually imposes no such limit, but
on most Unix systems it is 4096 and on Plan 9 it is 256, afaik).
The point is that on a clean system you shouldn't have paths that big, so you
can define a single buffer of that size and strcpy/memcpy/strcat into it to
construct the executable paths you need.
This will reduce heap usage a lot.
Memory accesses are also reduced a lot, because you can keep the base
directory in the buffer across all iterations for each directory, changing
only the filename.
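Roughly like this (just a sketch of the idea; the function name and the
readdir loop are mine, not taken from your .c):

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

#ifndef PATH_MAX
#define PATH_MAX 4096	/* GNU has no limit, fall back to the usual value */
#endif

/* list every file in one PATH directory, reusing a single buffer */
static void
listdir(const char *dir)
{
	static char buf[PATH_MAX];
	size_t len = strlen(dir);
	struct dirent *ent;
	DIR *dp;

	if (len + 2 > sizeof(buf) || !(dp = opendir(dir)))
		return;
	memcpy(buf, dir, len);	/* base directory copied only once */
	buf[len++] = '/';
	while ((ent = readdir(dp))) {
		/* only the filename part changes per iteration */
		strncpy(buf + len, ent->d_name, sizeof(buf) - len - 1);
		buf[sizeof(buf) - 1] = '\0';
		puts(buf);
	}
	closedir(dp);
}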
2)
The syntax does not follow the suckless coding style; I would prefer all
source files in suckless to follow the same rules.
3)
There are many places where you don't check whether the result of
malloc/getenv is NULL.
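For example (hypothetical variable names, just to illustrate the checks):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	char *path, *buf;

	/* getenv() returns NULL when the variable is unset */
	if (!(path = getenv("PATH"))) {
		fputs("dmenu_path: $PATH is not set\n", stderr);
		exit(1);
	}
	/* malloc() returns NULL on allocation failure */
	if (!(buf = malloc(strlen(path) + 1))) {
		perror("malloc");
		exit(1);
	}
	strcpy(buf, path);
	free(buf);
	return 0;
}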
4)
Many variables can be removed (like copy_path in get_PATH()).
5)
I would add a die() helper instead of repeating perror+exit.
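Something along these lines (a sketch of the usual suckless-style helper, not
taken from your code):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* print an error message and exit */
static void
die(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
	exit(1);
}

With it, the NULL checks from point 3 collapse to one line each, e.g.
if (!(buf = malloc(len))) die("dmenu_path: malloc failed\n");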
6)
Use sizeof(buf) instead of a hardcoded size (it makes the code safer against
future changes).
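For instance (hypothetical buffer and names, just to show the idiom):

#include <stdio.h>

int
main(void)
{
	char buf[256];
	const char *dir = "/usr/bin", *name = "dmenu";

	/* if buf's size ever changes, this line stays correct */
	snprintf(buf, sizeof(buf), "%s/%s", dir, name);
	puts(buf);
	return 0;
}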
7)
I would change the --force flag check to just '-f'.
8)
Why do you check for root?
9)
As all file paths are of similar size, you can just allocate blocks of that
size and index into them with a multiplier, which is faster than keeping many
separate chunks. (With some tests I did in 'alt' (hg.youterm.com/alt), this
kind of optimization resulted in 3-5x faster execution.)
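Roughly like this (a sketch of the idea; the slot size and growth policy are
my own assumptions):

#include <stdlib.h>
#include <string.h>

#define SLOT 512	/* fixed slot size; executable paths fit easily */

static char *pool;
static size_t npool, cap;

/* store a path in the next fixed-size slot; slot i lives at pool + i * SLOT */
static char *
addpath(const char *s)
{
	char *p;

	if (npool == cap) {
		cap = cap ? cap * 2 : 128;
		if (!(p = realloc(pool, cap * SLOT)))
			return NULL;
		pool = p;
	}
	strncpy(pool + npool * SLOT, s, SLOT - 1);
	pool[npool * SLOT + SLOT - 1] = '\0';
	return pool + npool++ * SLOT;
}

One big realloc'd block plus an index multiplication replaces one malloc per
path, so there are far fewer allocator calls and better cache locality.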
10)
put ".dmenu_cache" path as define on top of C file. so you can change it easily.