On Wed, Sep 19, 2012 at 5:06 PM, Andreas Krennmair <ak_AT_synflood.at> wrote:
> * Stephen Paul Weber <singpolyma_AT_singpolyma.net> [2012-09-19 17:00]:
>
>>> Still, forking is never the bottleneck
>>
>> Never? Isn't forking-as-bottleneck most of the reason alternatives to CGI
>> exist?
>
> One of the bottlenecks of CGI is that the popular "web scripting" languages
> (e.g. PHP, Perl, Python, Ruby) make it horribly inefficient. Every request
> means not just a fork, but also loading the application's entire source
> code, including all dependencies. I once measured this (because the web
> hosting platform that I work on uses CGI + suexec to separate the web
> scripts of different users on the same host), and every single request to a
> WordPress site makes the PHP interpreter load more than 1 MB of PHP source.
> MediaWiki is even worse: I measured about 4 MB.
>
> For lightweight web apps, consisting of only a single binary or a single
> script that is loaded quickly, this obviously isn't a problem.
Exactly.
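(For anyone who wants to reproduce that kind of measurement, here is a
rough sketch; it assumes strace(1), GNU du, and a php-cgi binary plus
an index.php entry point, and adds up the sizes of all the .php files
the interpreter opens for a single request:

  strace -f -e trace=open,openat php-cgi index.php 2>&1 |
    grep -o '"[^"]*\.php"' | tr -d '"' | sort -u |
    xargs -r du -bc 2>/dev/null | tail -n 1

It can over-count paths that are probed but never loaded, so treat the
total as an estimate.)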
For werc, which does hundreds of forks per request (and which is
written in interpreted languages: rc, awk, sed), I measured that when
using the markdown.pl Perl script for formatting, starting up perl
took considerably longer than all the forks and executions of the
rc/awk/sed code combined.
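(A crude way to see this for yourself, assuming perl and seq are
installed: compare a hundred fork+execs of a tiny program with a
hundred bare perl startups:

  time for i in $(seq 100); do /bin/true; done
  time for i in $(seq 100); do perl -e 1; done

The second loop should be noticeably slower, and that is before perl
loads a single line of markdown.pl.)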
The real problem is how much popular scripting languages like PHP,
Perl, Python, etc. suck.
And I won't mention Ruby, because Ruby's performance is a joke nobody
could ever take seriously:
http://harmful.cat-v.org/software/ruby/
All that said, fork performance for statically linked binaries is
really good on Linux; other platforms and dynamically linked binaries
are more problematic, but nowhere near as much as some make it out to
be.
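(A quick sketch of how one might compare the two cases, assuming a C
compiler and a static libc are installed: build the same trivial
program both ways and time a loop of fork+execs of each:

  echo 'int main(void){return 0;}' > t.c
  cc -static -o t-static t.c
  cc -o t-dynamic t.c
  time for i in $(seq 1000); do ./t-static; done
  time for i in $(seq 1000); do ./t-dynamic; done

The gap between the two loops is largely the dynamic loader mapping
and relocating libc on every exec.)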
(This reminds me: most such scripting language interpreters are
dynamically linked and load tons of dynamic modules at run time, which
is even more stupid overhead.)
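(To get a feel for the scale, assuming ldd and strace are available,
count the shared objects a bare perl startup drags in:

  ldd $(command -v perl) | wc -l
  strace -f -e trace=open,openat perl -e 1 2>&1 | grep -c '\.so'

The second number includes failed probes along the search path, so it
overstates things a bit, but the order of magnitude is the point.)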
Uriel
> Regards,
> Andreas
>