Re: [dev] [surf] Webkit2 with proxy server

From: Teodoro Santoni <asbrasbra_AT_gmail.com>
Date: Sat, 24 Dec 2016 01:17:51 +0000

Hi,

2016-12-17 17:51 GMT, Sylvain BERTRAND <sylvain.bertrand_AT_gmail.com>:
> The real problem is web coders: many of the web sites could have a noscript
> portal without losing significant functionality. The web sites which _must_
> have a rich GUI (hence a dynamic browser, for instance soundcloud) should
> provide a simple web API (without that pile of cra* which is OAuth).

Web sucks by design, although some web sites are getting way easier
to scrape.
You CAN in fact build a website on top of a good rest/http API (a good
API being one whose data is easy to fetch, parse and work with), but if
an application is designed browser-first, as often happens to please a
boss and get paid fast, it is doomed to be shit: EVEN if it's built with
just html, css, javascript and basic cgi scripts, the flaws in the
html-css-js mechanism will cripple the interface.
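
To make that concrete, here is a minimal sketch of what "easy to fetch,
parse and work with" means in practice: one plain GET, one JSON
document, no token dance. The endpoint URL and the field names are
invented for illustration.

import json
import urllib.request

def fetch_tracks(user):
    # Hypothetical, well-behaved JSON API: one GET, parse, done.
    # The URL is made up; a real service would document its own endpoint.
    url = "https://example.org/api/users/%s/tracks" % user
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for track in fetch_tracks("somebody"):
        print(track.get("title"), track.get("stream_url"))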
Obfuscation of the data used to draw the web application is also often done
on purpose, to protect the website's revenue from adblockers.
I dispute the idea that a web service like Soundcloud needs an
interface like the one Soundcloud has right now, unless the
player-omnibar-comments combination is itself the point.
But my problem with webapps is that they pile js upon js just to
deliver ads or tracker scripts. I'd be perfectly ok with ads inserted
somewhere in a youtube video if I could fetch it without a
megabyte-sized library like youtube-dl -- and thanks to a hook that is
still the best way to watch one, using ten times less memory than a web
browser allocates for the video player, ads, comments and whatnot.
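
The hook itself is nothing fancy; roughly something like the script
below, bound to a key or plumbing rule so the browser hands it the
current URL. It assumes an external player such as mpv that can resolve
the stream on its own (which today still means a youtube-dl-style
helper, i.e. exactly the dependency lamented above).

import subprocess
import sys

def play(url):
    # Hand the page URL to an external player instead of loading the
    # in-page one; the browser never allocates memory for the player,
    # ads, comments and whatnot.
    subprocess.run(["mpv", url], check=False)

if __name__ == "__main__":
    play(sys.argv[1])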
It shouldn't be like that: the www is mostly about delivering fucking
text, so it should be braindead-simple to hack up your own cli or gui
client without stealing anything.

But it's not as grim as I painted it; the only two real problems are
* vital-tier websites like online banking, where you cannot escape the
choice between the single preferred brand and version of web browser or
the mobile app on a smartphone, skipping the PC entirely;
* static document archives switching to https, which is good when
you're reading a blog or whatever, but bad when you have to browse your
township's website after a major catastrophe, for example.

Anything else can be solved by finding your way into scraping the
website and building a proxy that serves a very simplified version of
it to your w3m, links, lynx, dillo, mosaic or shell script.
It isn't easy, and it really ought to be packed into a standard
distribution, but it isn't all that much of an enigma.
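
A very rough sketch of such a proxy, stdlib Python only and deliberately
dumb: it listens locally, fetches whatever page is named in the request
path, strips <script> and <style>, and serves the rest, so you can point
a text browser at it, e.g. w3m http://localhost:8000/https://example.org/.
The actual site-specific scraping would have to be layered on top; the
host and port here are only an example.

import urllib.request
from html.parser import HTMLParser
from http.server import BaseHTTPRequestHandler, HTTPServer

class Stripper(HTMLParser):
    # Re-emit the page, dropping script/style elements entirely.
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__(convert_charrefs=False)
        self.out = []
        self.skipping = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skipping += 1
        elif not self.skipping:
            text = "".join(' %s="%s"' % (k, v or "") for k, v in attrs)
            self.out.append("<%s%s>" % (tag, text))

    def handle_endtag(self, tag):
        if tag in self.SKIP:
            self.skipping = max(0, self.skipping - 1)
        elif not self.skipping:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if not self.skipping:
            self.out.append(data)

    def handle_entityref(self, name):
        if not self.skipping:
            self.out.append("&%s;" % name)

    def handle_charref(self, name):
        if not self.skipping:
            self.out.append("&#%s;" % name)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The upstream URL is passed as the request path.
        target = self.path.lstrip("/")
        if not target.startswith("http"):
            self.send_error(400, "expected /http://site/page")
            return
        with urllib.request.urlopen(target) as resp:
            page = resp.read().decode("utf-8", errors="replace")
        s = Stripper()
        s.feed(page)
        body = "".join(s.out).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()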