Re: [dev] Interesting post about X11

From: David Tweed <david.tweed_AT_gmail.com>
Date: Thu, 17 Jun 2010 03:55:44 +0100

On Thu, Jun 17, 2010 at 3:32 AM, Kurt H Maier <karmaflux_AT_gmail.com> wrote:
> On Thu, Jun 17, 2010 at 2:27 AM, David Tweed <david.tweed_AT_gmail.com> wrote:
>> "obviously safe machine code"
>
> hahahahahah

Would you care to elaborate on this? The compilation problem is
asymmetric: there will be plenty of code sequences that are in fact
innocuous which a verifier can't show to be innocuous, but I don't
see any reason why a compiler can't take the subset of code it *can*
show to be innocuous and compile it into safe machine code. (I.e.,
the compiler may refuse to compile a given piece of code, but if it
does compile it, the result is as safe as running interpreted code --
in either case only something at the level of undocumented chip
errata could be exploited.)

I'm genuinely interested in whether there is a flaw in this
reasoning, because I spend a lot of time writing numerical SIMD code
that you can't get at from languages whose semantics require values
to be "boxed". All I'm interested in are the SIMD instructions, plus
enough scalar operations and known-address conditional jumps to march
through arrays of data. The compiler can refuse to compile any code
containing any other instruction in the instruction set and I won't
care. (Clearly I don't even need Turing completeness in the sections
of the application that are compiled to native instructions.) Are you
saying a NaCl-style verification approach cannot work even in such a
restricted case?
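
For concreteness, here's a sketch of the kind of kernel I mean (the
SSE intrinsics and the function itself are just an illustrative
choice on my part, nothing NaCl-specific): compiled, it is nothing
more than vector loads/stores, vector arithmetic, a scalar index
update and one conditional branch whose bound is fixed on entry.

    /* Illustrative sketch: dst[i] = k*a[i] + b[i], n assumed to be a
     * multiple of 4.  Only SIMD ops, scalar index arithmetic and a
     * bounded loop -- exactly the restricted subset described above. */
    #include <stddef.h>
    #include <xmmintrin.h>

    void scale_add(float *dst, const float *a, const float *b,
                   float k, size_t n)
    {
        __m128 vk = _mm_set1_ps(k);           /* broadcast scalar k  */
        for (size_t i = 0; i < n; i += 4) {   /* bounded array march */
            __m128 va = _mm_loadu_ps(a + i);  /* vector loads        */
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(dst + i,            /* vector store        */
                          _mm_add_ps(_mm_mul_ps(va, vk), vb));
        }
    }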

-- 
cheers, dave tweed__________________________
computer vision researcher: david.tweed_AT_gmail.com
"while having code so boring anyone can maintain it, use Python." --
attempted insult seen on slashdot