Re: [dev][surf]

From: markus schnalke <meillo@marmaro.de>
Date: Sat, 10 Apr 2010 10:48:41 +0200

[2010-04-07 18:22] Bjartur Thorlacius <svartman95@gmail.com>
>
> The problem is that surf is both an HTTP client (a downloader) and an
> HTML renderer. When you only want to download HTML files over HTTP
> and render them instantly, this isn't a problem.
> But when you only want to use the downloader, things get harder and
> you have to resort to weird tricks.
>
> Ideally surf could be reduced to something like
> GET $URI | html2text | more
> which is what I have used myself. The problem is that we need to
> support link-following, which is the core feature of a web browser.
> GET $URI | html2markdown | markshow # could use rST instead
> would keep the links, but on second thought it needs reparsing of
> the markdown, which doesn't make sense.
> Instead, see `getter $IRI` # or see $(getter)
> where getter downloads the resource that the IRI in $IRI refers to
> and returns a reference (a filename) to the downloaded file, together
> with its media type.
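
For the simple case, such a getter is a few lines of shell. A rough
sketch, assuming curl(1) is available (the name `getter` and the
"filename plus media type" output are just the convention proposed
above):

    getter() {
        file=$(mktemp) || return 1
        # -w '%{content_type}' prints the media type after the transfer
        type=$(curl -sL -o "$file" -w '%{content_type}' "$1") || return 1
        printf '%s %s\n' "$file" "${type%%;*}"  # drop "; charset=..."
    }

    # usage: dispatch on the media type of whatever getter fetched
    set -- $(getter 'http://example.org/')
    case $2 in
    text/html*) html2text <"$1" | more ;;
    text/*)     more <"$1" ;;
    esac
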
>
> Preferably one would write a pager that supports links (think `more`
> with numbered links). Hubbub could be used to parse and sanitize the
> HTML and convert it to a clean format that is simple to reparse,
> possibly carrying some style information. If this format turns out
> to be (X)HTML (isn't HTML parsing fast in existing browsers because
> of heavy optimization?), Hubbub could be linked directly into the
> pager. Making a simple renderer that accepts an HTML file, based on
> e.g. Dillo or WebCore/WebKit, might be easier, though.
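
Such a link pager can be approximated already with lynx(1)'s -dump
mode, which numbers the links in the rendered text and, with
-listonly, prints them as "N. URL" lines. A rough sketch of the
prompt loop (the loop itself is mine, not an existing tool):

    #!/bin/sh
    # crude link-following pager: show page, ask for a link number, repeat
    url=$1
    while [ -n "$url" ]; do
        lynx -dump "$url" | more
        printf 'follow link number (empty quits): '
        read n && [ -n "$n" ] || break
        url=$(lynx -dump -listonly "$url" |
            awk -v n="$n" '$1 == n"." { print $2; exit }')
    done
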
>
> Any volunteers?

The idea is nice, but this approach will hardly work with today's
broken web. Web technology originally allowed such approaches, but it
is now so heavily abused that web browsers must suck.

meillo