
> Yes, but Common Lisp is also "stupid dynamic"!

It's not. Common Lisp was designed to enable Lisp applications to be delivered with reasonable performance, first in 1984, when an expensive computer might have had 1 to 10 megabytes (!) of memory and an 8 MHz CPU executing roughly one million instructions per second. As a result you'll see a bunch of different implementation strategies, sometimes within the same running Lisp, which can use different execution modes in the same program:

* source-interpreted Lisp -> a Lisp interpreter executes the code by traversing the s-expressions of the source -> this is usually slow to execute, but very convenient debug features are available

* compiled Lisp code -> a Lisp compiler (often incremental) compiles Lisp code to something faster: byte code for a VM, C code for a C compiler, or machine code for a CPU -> this often keeps a lot of the dynamic features

* optimized compiled Lisp code -> like above, but the code may contain optimization hints (like type declarations or other annotations) -> the compiler uses this provided information, or infers its own, to create optimized code.

For "optimized compiled Lisp code" the compiler may remove some or all dynamic features (late binding of functions, support for passing data of generic types, runtime type checks, runtime dispatch, runtime overflow detection) and apply further transformations (stripping debug information, tail call optimization, inlining). The portions where such optimizations are applied range from parts of a function to whole programs.

Common Lisp also has both normal function calls and generic function calls (CLOS) -> the latter are usually a lot slower, and people are experimenting with ways to make them fast (-> by removing dynamism where possible).
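For comparison, Python's `functools.singledispatch` offers a similar, if far more limited, runtime-dispatch mechanism: each call does a type lookup, which is exactly the kind of indirection optimizing compilers try to eliminate. A minimal sketch (the `describe` function is made up for illustration):

```python
from functools import singledispatch

# Runtime dispatch on the type of the first argument, loosely analogous
# to a generic function with one specialized parameter.
@singledispatch
def describe(x):
    return "something"

@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: str):
    return "a string"

print(describe(42))     # dispatches to the int method
print(describe("hi"))   # dispatches to the str method
print(describe(3.5))    # falls back to the default
```

Every call pays for the type lookup; a direct call to the right method would skip it, at the cost of the dynamism.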

So, speed in Common Lisp is not one thing, but a continuum. Typically one would run compiled code where possible and optimized code only where necessary (-> in parts of the code). For example, one could run a user interface as unoptimized, very dynamic compiled code and certain numeric routines as optimized compiled code.

  CL-USER> (defun foo (a b)
             (declare (optimize (speed 3) (safety 0))
                      (fixnum a b))
             (the fixnum (+ a (the fixnum (* b 42)))))

  CL-USER> (disassemble #'foo)
  ; disassembly for FOO
  ; Size: 28 bytes. Origin: #x70068A0918                     ; FOO
  ; 18:       5C0580D2         MOVZ TMP, #42
  ; 1C:       6B7D1C9B         MUL R1, R1, TMP
  ; 20:       4A010B8B         ADD R0, R0, R1
  ; 24:       FB031AAA         MOV CSP, CFP
  ; 28:       5A7B40A9         LDP CFP, LR, [CFP]
  ; 2C:       BF0300F1         CMP NULL, #0
  ; 30:       C0035FD6         RET
  NIL
As you can see, with optimization instructions and type hints, the code gets compiled to tight machine code (here ARM64). Without those, the compiled code looks very different: much larger, with runtime type checks and generic arithmetic.



Nothing prevents a Python compiler from following a similar approach, though, especially now that type annotations are part of the language.
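Today's CPython, by contrast, compiles to bytecode that keeps every operation fully generic regardless of annotations; the standard `dis` module makes this visible. A small sketch:

```python
import dis

def foo(a: int, b: int) -> int:
    return a + b * 42

# Despite the annotations, CPython emits a generic BINARY_* opcode for
# each arithmetic operation; the concrete types are only resolved at runtime.
dis.dis(foo)
```

Compare this with the SBCL disassembly above, where the `fixnum` declarations let the compiler emit a bare MUL and ADD.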

And in any case, there are the Smalltalk and SELF JITs as examples of highly dynamic environments where anything goes.


With declarations which are promises from the programmer to the compiler (I promise this is true, on penalty of undefined behavior), you can fix a lot of "stupid dynamic".

Python could have a declaration which says, "this function/module doesn't participate in anything stupidly dynamic, like access to parent locals". If it calls some code which tries to access parent locals, the behavior is undefined.

That's kind of a bad thing, because in Lisp I don't have to declare anything unsafe to the compiler just to get reasonably efficient local variables that can be optimized away and all that.


Type annotations are defined to have no effect at runtime: the interpreter stores them on the function or module but never checks or acts on them.

So, as of today, they’re useless for optimization. That could be changed, but hasn’t been so far.
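This is easy to check: the annotations end up in `__annotations__`, but a call with the "wrong" type goes through unchecked. A minimal illustration (`double` is a made-up example):

```python
def double(x: int) -> int:
    return x * 2

# The annotations are stored on the function object but never enforced:
print(double.__annotations__)   # {'x': <class 'int'>, 'return': <class 'int'>}
print(double("ab"))             # 'abab' -- the str argument passes unchecked
```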


I'm not exactly sure how anything you said supports that CL isn't a dynamic language.

When using SBCL, for example, none of CL's dynamic features are restricted from the programmer in any way. So whether it's compiled to native code or not has no bearing at all on how dynamic the language is.

Can you explain to me why a Python compiler couldn't implement optimizations similar to SBCL?

>It's not.

Is Python more powerful than CL in some way that I am not aware of?


I tried to explain that the optimized version of Common Lisp code is less dynamic than the non-optimized version. The speed advantage often comes from the compiled code being less dynamic, or not dynamic at all. Late binding, for example, makes code slower because of an extra indirection. An optimizing compiler can remove late binding: the code will be faster, but there is no longer a runtime lookup of the function.
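Python's global function calls show the same cost: the callee's name is re-resolved on every call, and that late binding is observable, so a compiler can't simply cache the target without changing the meaning of legal programs. A sketch:

```python
def greet():
    return "hello"

def call_greet():
    # 'greet' is looked up in the module namespace on every call:
    # late binding, one extra indirection per call.
    return greet()

print(call_greet())   # hello

def greet():          # rebinding the name changes call_greet's behavior
    return "bonjour"

print(call_greet())   # bonjour
```

Removing the per-call lookup makes the call faster, but the second `print` would then still say "hello" -- the dynamism is gone.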

> When using SBCL for example, none of CLs dynamic features are restricted from the programmer in any way.

Sure, but it will be slower in benchmarks. The excellent benchmark numbers of SBCL are in part a result of it being able to cleverly remove dynamic features.



