On Fri, Jul 26, 2013 at 01:17:54AM +0000, Thorsten Glaser wrote:
> Calvin Morrison dixit:
>
> >I was sick of ls | wc -l being so damned slow on large directories, so
>
> What, besides the printing and sorting, is the slow part anyway?
> Is it the VFS API or just the filesystem code?
>
> In the latter case… could workarounds exist? Someone asked this…
> http://fenski.pl/2013/07/looking-for-a-specific-fuse-based-filesystem/
> … on Planet Debian this night.
Summarized:
Their 100+ Perl and bash scripts are slow because they open files in a
humongous directory. They can't subdivide the directory because they're
afraid of breaking the scripts by modifying them.
I just read something about using LD_PRELOAD for this. Write a library
that implements open(2), munges the file path, and then calls the
"real" open(2). Then you just set LD_PRELOAD in the environment of the
scripts and Bob's your uncle. Rough sketch below.
Don't shoot me, I have no idea whether that's a good idea or not!
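
Something like this minimal, untested sketch (the directory /data/big
and the first-letter bucketing are made-up examples, not anything from
the original post):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* hypothetical flat directory the scripts hammer */
static const char prefix[] = "/data/big/";

int open(const char *path, int flags, ...)
{
	static int (*real_open)(const char *, int, ...);
	char buf[4096];
	mode_t mode = 0;

	if (!real_open)
		real_open = (int (*)(const char *, int, ...))
		    dlsym(RTLD_NEXT, "open");

	/* munge /data/big/NAME into /data/big/N/NAME, N = first letter */
	if (!strncmp(path, prefix, sizeof prefix - 1)) {
		const char *name = path + sizeof prefix - 1;
		if (*name && !strchr(name, '/')) {
			snprintf(buf, sizeof buf, "%s%c/%s",
			    prefix, name[0], name);
			path = buf;
		}
	}

	if (flags & O_CREAT) {
		va_list ap;
		va_start(ap, flags);
		mode = va_arg(ap, mode_t);
		va_end(ap);
	}
	return real_open(path, flags, mode);
}

Build it with "cc -shared -fPIC munge.c -o munge.so -ldl" and run the
scripts with LD_PRELOAD=$PWD/munge.so. A real shim would also have to
wrap open64(2), openat(2), fopen(3), stat(2), unlink(2) and so on for
full coverage; this only covers open(2).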
Paul.
--
Paul Hoffman <nkuitse_AT_nkuitse.com>