On Sun, Aug 22, 2010 at 8:20 PM, Joseph Xu <josephzxu_AT_gmail.com> wrote:
> On 8/22/2010 12:47 PM, David J Patrick wrote:
>>
>> On 10-08-22 12:37 PM, Alexander Teinum wrote:
>>>
>>> What doesn’t work well for me is that I cannot easily extend
>>> Markdown. The design that I propose is simpler and more strict. All
>>> tags work the same way. The input is close to a data structure, and it
>>> doesn’t need complex parsing. The drawback is that tables and lists
>>> need more characters:
>>
>> pandoc extends markdown and has some table support,
>> djp
>>
>
> The problem with all these Markdown extensions is that they come as packages
> with monolithic parsers, so if you like the pandoc syntax for one kind of
> entity but the PHP Markdown extension syntax for another, you're screwed.
> This is a problem with LaTeX as well: all syntax for complex structures like
> tables must derive from the base TeX syntax, so the source code for tables
> looks ridiculous in LaTeX. The troff idea of multiple passes, with different
> parsers translating each type of entity into a base representation,
> solves this problem nicely and should be emulated more. I wonder why troff
> never really caught on in the academic community.
There's the obvious point that, being a mathematician, Knuth really
understands how mathematicians think, and both TeX's basic mathematics
notation and its output quality are noticeably better than eqn's.
There are also two slightly more minor reasons:
1. Knuth went to incredible pains to ensure that the output file from
a given .tex file is absolutely identical regardless of the machine
the program was run on (and he has shouted loudly at anybody making
changes that break this). Given that academic papers can remain
relevant for at least 50 years, and that citations in other papers are
occasionally very specific (e.g., the first paragraph on page 4), that
may have been an important point.
2. Knuth really, really, Really, no REALLY, cares about his programs
not misbehaving in the case of user errors (unlike some luminaries in
the computing field). The work he did, basically trying incredibly
convoluted language abuse to "break" TeX, means that it's vanishingly
rare for it to silently produce corrupt output files or segfault.
Admittedly, part of this may come from him primarily working in an era
when submitting jobs for batch-mode processing was one common way of
doing things, so that you want useful logs at the end rather than
relying on the user interactively spotting that something screwy is
happening. Again, back in 1982 this attitude may have been relatively
important. (I've got to admit it's probably reading his amazing paper
on the TRIP test for TeX that fired up my desire not to silently
output corrupt files, or fail mysteriously when given
corrupted/erroneous input, and above all to consider how you can
diagnose errors in your program as just as important as normal
processing during the design stage.)
Of course, it's possible that the fact that TeX took off whilst ROFF
descendants never did is purely a historical accident.
--
cheers, dave tweed
__________________________
computer vision researcher: david.tweed_AT_gmail.com
"while having code so boring anyone can maintain it, use Python." --
attempted insult seen on slashdot