On Thu, 21 Jan 2016, Riccardo (Jack) Lucchetti wrote:
On Thu, 21 Jan 2016, Sven Schreiber wrote:
> Am 21.01.2016 um 21:02 schrieb Riccardo (Jack) Lucchetti:
>
>>
>>
>> Oh, it's very easy. All these super-fast languages use some form of JIT.
>> The big difference between Matlab and Octave, for example, comes from
>> Octave being a classic interpreted language, while Matlab switched to a
microcode+VM architecture some time ago. Julia just happens to be built
on top of LLVM, which is a notoriously fast VM.
>
> Ok, but why then do many (?) examples here show the same speed between
> hansl (interpreted) and Julia (JIT-compiled)? Are the examples so tiny
> that the JIT overhead outweighs the other speed gains, or what's going on?
It's hard to say in general. However, the heaviest penalty is when you do
"lots of small things": for example, big nested loops, lots of recursion,
lots of user-function calls, lots of conditionals, stuff like that.
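(To make that concrete: here's a minimal sketch in Python, standing in for any classic interpreter, of the kind of benchmark that exposes this penalty. Naive recursive Fibonacci does almost no arithmetic per call, so per-call dispatch and the conditional dominate the runtime.)

```python
# "Lots of small things": naive recursive Fibonacci. Each call does one
# comparison and at most one addition, so virtually all the time goes to
# the interpreter's call/dispatch machinery, not the arithmetic itself.
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765
```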
>> In principle, Hansl could display similar performance, if we had the
>> resources to rewrite the whole interpreter code as a front end to LLVM.
>> Believe me, it's a HUGE job.
>
I believe you, and I continue to admire the extent to which gretl
is competitive, or better, in terms of speed. At the end of the day,
however, the underlying reason doesn't matter for the customers. Perhaps
it will become necessary, for example, to allow some foreign language
also in gretl function packages, when that enables the package author to
work around a serious speed bottleneck. (The value added of the foreign
language would of course have to be demonstrated in the concrete case.)
Hmmm. I'm not convinced, but I'm open to discussing the possibility in
the longer run. However, my first best would still be finding the
resources (time, manpower and skills) necessary to re-implement hansl as
a JIT language. My guess is that someone with a PhD in CompSci, familiar
with compiler design, could do this comfortably in a year or so.
That's the ideal solution, yes. In the meantime I think we can
probably find a number of paths of incremental improvement. I have
some ideas in mind and will report back if anything useful emerges.
Somewhat relevant, I'm putting a script to run the Julia performance
tests at http://users.wfu.edu/cottrell/tmp/juliaperf.inp. The
script requires current git to run right -- this has been a useful
exercise in that it has led to some bug-fixes. Here are my timings
on a quad-core i7 desktop:

gretl git as of 2016-01-21:
gretl,fib,193.29982600
gretl,parse_int,16.48400700
gretl,mandel,56.31419300
gretl,quicksort,324.92457000
gretl,pi_sum,1567.74376500
gretl,rand_mat_stat,29.64716400
gretl,rand_mat_mul,50.59598300
gretl,printfd,204.27190900

R 3.2.3 with "compiler" library:
r,fib,14.00000000
r,parse_int,5.00000000
r,mandel,11.00000000
r,quicksort,75.00000000
r,pi_sum,248.00000000
r,rand_mat_stat,100.00000000
r,rand_mat_mul,707.00000000
r,printfd,553.00000000

octave 4.0.0:
octave,fib,313.99011612
octave,parse_int,291.23711586
octave,mandel,135.11800766
octave,quicksort,783.64205360
octave,pi_sum,10908.32090378
octave,rand_mat_stat,248.28195572
octave,rand_mat_mul,78.96304131
octave,printfd,1678.76791954

So we're a good deal faster than Octave, but we lag behind R with
its "cmpfun()" JIT mechanism -- except on the matrix operations
rand_mat_stat and rand_mat_mul, where we're a good way ahead, and
also on printf.
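That pattern is the usual one for interpreters: a benchmark dominated by one big matrix operation hands all the work to compiled code in a single call, so per-statement interpreter overhead is amortized over thousands of elements, whereas the "small-operation" benchmarks expose it. A rough sketch of the effect in Python (not hansl; NumPy's compiled matmul standing in for gretl's matrix primitives):

```python
# Why matrix benchmarks flatter interpreters: compare an interpreted
# triple loop (n^3 tiny operations, dispatch overhead on each) with a
# single call into compiled code (same arithmetic, one dispatch).
import time
import numpy as np

n = 100
rng = np.random.default_rng(42)
a = rng.random((n, n))
b = rng.random((n, n))

# Element-by-element matrix multiply in the interpreter.
t0 = time.perf_counter()
c_loop = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
          for i in range(n)]
t_loop = time.perf_counter() - t0

# The same product as one high-level call into compiled code.
t0 = time.perf_counter()
c_mat = a @ b
t_mat = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s  matmul: {t_mat:.5f}s")
assert np.allclose(c_loop, c_mat)  # identical results, vastly different cost
```

The two computations do the same floating-point work; the gap between the timings is almost entirely interpreter overhead, which is why the rand_mat_* tests behave so differently from fib or pi_sum.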
Allin