Re: [dev] paste_AT_

From: Matthew of Boswell <mordervomubel+suckless_AT_lockmail.us>
Date: Thu, 5 Nov 2015 15:24:40 -0500

My response is a bit long, but I think the idea deserves some good
discussion.

On Wed, 04 Nov 2015 22:28:12 +0100
Christoph Lohmann <20h_AT_r-36.net> wrote:

> > I don't think that I'd want pastebin to email me with everyone's
> > paste; my hard drive would fill up so fast I'd have to quit email.
>
> Not really. Are you using floppy discs for your mailbox storage?

LOL! Nice comeback; this makes my day =D.

In all likelihood, this wouldn't take much space, especially with
compression (which works well on plain text). The real question is
scalability - what if this became really popular? Bitcoin still hits
this type of problem: IIRC, every peer has to download the entire
block chain, which grows with time and is never discarded. Perhaps if
paste_AT_ did become that popular, simple client-side filtering would
prevent an overload.
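For a rough sense of how far compression gets you on plain text,
here's a quick sketch (the sample diff is made up; real pastes will
vary, but source code and diffs usually shrink to a fraction of their
size):

```python
import zlib

# Made-up, repetitive sample paste; real content compresses less
# dramatically, but plain text still shrinks a lot.
paste = ("diff --git a/dwm.c b/dwm.c\n"
         "-    borderpx = 1;\n"
         "+    borderpx = 2;\n") * 50

compressed = zlib.compress(paste.encode(), 9)  # level 9 = best compression
ratio = len(compressed) / len(paste)
print(len(paste), len(compressed), round(ratio, 3))
```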

FWIW, I'm using a 16GB SD card for my email server on a raspberry pi.

> Two different views on how to communicate clash here:
>
> 1.) The web view is to have some URI and it's always available.
> 2.) The mail view of having your private mailbox you take care of.

I think you may be on to something. Thanks for the write-up.
(wanted to say this before I "play devil's advocate")

> The idea of wanting a connection to a central database is what makes
> surveillance effective and in the end will reduce your freedom to
> nothing. So keeping to a more »data packet« approach of spreading
> information is something I see as the suckless way of distributing data.

I like the anti-surveillance angle. Question: what's to stop google
from subscribing to the patch mailing list and archiving everything
like they do with non-"alt.bin" newsgroups? Here's our current
conversation, for example:
https://groups.google.com/forum/#!topic/wmii/rNSDn-8PFtU

I believe it was taken from gmane.org's usenet version of the mailing
list. Granted, we could set paste_AT_ to not be on usenet, and tell google
"don't crawl this", but what prevents them from doing it if they wanted to?

> This discussion should lead to a different kind of thinking about how
> suckless services actually can be approached without falling into the
> web trap of suck. Thanks for all the responses.

Thanks for the discussion =)

> > 4. Interested reader(s) open the link in a web browser or gist reader.
>
> This involves some JSON API and using the web.

JSON is not necessarily bad. It would be a lot less overhead than the
onslaught of email headers on a simple message, for example. As hiro
demonstrated, pre-HTTP/1.1 is very minimal: you can pretty much get
away with no headers at all. Yes, his example really works. When you
don't specify an HTTP version, many web servers treat the request as
an HTTP/0.9 "simple request" and send back just the body, with no
status line and no response headers. Try it on a random website =).
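Here's a minimal local sketch of that behaviour, assuming a toy server
that follows the HTTP/0.9 simple-request convention (a request line
with no version string gets a bare body back); the server here is a
stand-in, not any real web server:

```python
import socket
import threading

def tiny_server(server):
    # Mimics HTTP/0.9 behaviour: a two-token request line ("GET /")
    # gets the body alone; a three-token line gets a full HTTP/1.1
    # response with status line and headers.
    conn, _ = server.accept()
    line = conn.recv(1024).decode().split("\r\n")[0]
    if len(line.split(" ")) == 2:          # "GET /" -- simple request
        conn.sendall(b"<html>hello</html>")
    else:                                  # "GET / HTTP/1.1" -- full request
        conn.sendall(b"HTTP/1.1 200 OK\r\n"
                     b"Content-Length: 18\r\n\r\n"
                     b"<html>hello</html>")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))              # any free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=tiny_server, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /\r\n")               # no HTTP version specified
response = client.recv(1024)
client.close()
t.join()
server.close()
print(response)  # body only: no status line, no headers
```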

> ... and github has declared bankruptcy and doesn't serve the content anymore.

Yep, I've hit this type of issue before.

Now, a few general comments...

A few things come to mind when I read through your descriptions:
usenet, freenet, 9P, and the various distributed filesystems. This also
seems to match up with IPFS, which was mentioned earlier.

Actually, I think this is pretty much how usenet used to be. People
sent patches and distributed open source programs over usenet. There
are several usenet servers which distribute the content, and people
download the articles and keep/search through them as long as they
like. Articles are uniquely identified, and everything pretty much
works like a more distributed mailing list without a spam filter. There
are actually very knowledgeable people and productive conversations in
the programming language newsgroups of today. However, usenet's full of
spam, binaries, and other junk, just like the www.

freenet is a distributed mass of content with the additional benefit of
anonymity. Content is saved in multiple nodes (freenet clients' hard
drives/etc.), and the least accessed content drops off the net after a
time when space is needed. This, also, is full of junk.

Question: are we trying to re-invent usenet, freenet, or IPFS?

I see a couple of solid concepts here. Decentralization is good, and I
can see many ways of doing it. HTTP, atom/rss, mail, FTP, or even
custom protocols over TCP or UDP would work. This could even use
multiple protocols depending on preference (but that could be seen as
feature bloat).

Personally, I don't like email. I think it's overused and abused.
Nearly every online site seems to require an email address to sign up,
and most only use it to send useless notifications, reminders to come
back, and marketing. The sites that don't use it for this junk usually
just want to verify that you're not a spammer. It's gotten to the point
where I make a new "+" email address for everything, just so I can
throw the address away when I get spammed on it. I go with atom/rss or
usenet or anything else available whenever possible, and I only
intentionally use email to talk to humans - directly or through a
mailing list. Also, I get hundreds of (mostly irrelevant) emails at my
job that I have to read in the [IMO] most bloated email client ever
written: Lotus Notes.[/rant]

I see one advantage to email: it is a push type of message. Rather than
query the paste_AT_ server or maintain an idle connection, messages would
arrive as they're sent.

But we do have to poll a POP3 server or hold an open IMAP connection
to our email servers, so in practice it's still a pull (or at least
held-open) connection... and all those email headers are like
XMPP/jabber overhead.

I generally don't like using email for things it wasn't intended for,
so I'll always disagree on mail being the delivery/archive/search method
of the patch list. But I do understand that you (and others) desire it,
so no need to argue.

The main question: is this better than putting an email wrapper
on a usenet/freenet/9P/HTTP/FTP/IPFS service?

I kind of like where FRIGN is going with this: make our own paste
service that doesn't delete things. We could write some scripts for
client-side caching and storage, checking for new pastes, etc. You
could even write a mailer front-end ;)
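A minimal sketch of what the client-side storage part could look like,
assuming pastes are keyed by their content hash so the identifier
never changes and duplicates are stored once (the directory layout and
all names here are made up):

```python
import hashlib
import pathlib
import tempfile

# Hypothetical local cache directory for the "forever paste" client.
CACHE = pathlib.Path(tempfile.mkdtemp()) / "pastes"

def store(text):
    """Save a paste under its SHA-256 hash; return the hash as its id."""
    CACHE.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(text.encode()).hexdigest()
    (CACHE / digest).write_text(text)
    return digest

def fetch(digest):
    """Read a paste back by id."""
    return (CACHE / digest).read_text()

script = "#!/bin/sh\necho one-off script\n"
pid = store(script)
print(pid[:12], fetch(pid) == script)
```

Since the id is derived from the content, storing the same paste twice
is a no-op, and any node holding a copy can serve it under the same
name - which fits the "never deleted, no central database" idea.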

That said, I tend to use gist/pastebin etc. only for temporary things
that I *want* to get deleted. I use git for more important things. But
what about those one-off scripts that don't really /need/ their own git
repository? Enter the "forever paste" service =)

-- 
Matt Boswell
Received on Thu Nov 05 2015 - 21:24:40 CET

This archive was generated by hypermail 2.3.0 : Thu Nov 05 2015 - 21:36:09 CET