The Wolfram Language: Fast Introduction for Programmers (wolfram.com)
80 points by nkurz on Dec 21, 2015 | 46 comments



Programming in Mathematica is weird. I've done a whole course in machine learning using only Mathematica (as opposed to more obvious choices, like Matlab or Python), and while writing small functions was very easy (and functional programming fun), composing those functions without using global variables was very hard. I kept using the Module construct, where each local variable has to be declared, and it ended up being a mess. I always have a session of Mathematica running for quick calculations or plotting (easier imo than Python+NumPy / Matlab), but I would never consider it for larger projects. But I've never seen a larger codebase, so maybe there's some kind of design pattern essential for this language.


Module (and its dynamically scoped cousin Block) can be clunky to use because you have to declare the local variables you wish to use up front. Other than that, it's not 'weird'.

I write a lot of Mathematica code for my job. To make life easier I've built a macro system that makes day-to-day programming a whole lot more pleasant. It will be available in Mathematica 10.4 in the GeneralUtilities package that ships with Mathematica.

One of the built-in macros is a construct that automatically localizes any variables that occur as the L-value of an assignment:

  Needs["GeneralUtilities`"]
  foo[x_, y_] := Scope[
    z = x * y;
    PrimeQ[z + 1]
  ]
This gets rewritten at definition time to be:

  foo[x_, y_] := Block[{z},
    z = x * y;
    PrimeQ[z + 1]
  ]
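A hypothetical call, just to show the effect of the localization:

  foo[3, 4]    (* z is local to the call; this evaluates PrimeQ[13] and returns True *)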
It's a small thing, but it adds up, especially for large functions, and indeed it's silly this isn't a core feature of the language.

There are a couple other nice features of Scope, too, like the ability to use a single quote ' anywhere in the body of your function to Echo the value there to the current notebook without actually changing the behavior of the code -- essentially a very lightweight form of 'debugging print'. That also makes a huge difference.

And it has some more powerful variants; for example, "z '= 2 * 3" will echo a line that explicitly says "z = 6", rather than just "6".

Likewise '' means EchoHold, which prints the expression before being evaluated AND afterwards, so you'll see "z = 2 * 3 = 6" in the above case.

Lastly ''' means Tap, which is a higher-order form of EchoHold that instruments a function to echo its input and output (again without changing behavior). So Fold[Plus''', Range[10]] prints out all the intermediate steps in the fold.
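(For comparison, recent Mathematica versions ship a plain built-in Echo, which is the unsugared form of this: it prints a value and returns it unchanged, so it can be wrapped around any subexpression. A minimal sketch:)

  Total[Echo[Range[5]]]    (* prints {1, 2, 3, 4, 5} as a side effect, then evaluates to 15 *)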

There's a variety of features that add a very lightweight syntax for monadic and exception-based error handling, but that's not quite stable enough to advertise yet.

Same goes for debugging: there's some really powerful stack browsing, history capture and replay features, but again not quite stable enough to advertise for this version.


I guess the mental block I have with Wolfram/Mathematica is that there's this feeling that you don't know how to do it until you know how to do it. As opposed to languages where you can learn a smaller number of principles and then use those principles to reason your way to how to do it.

Like, the third slide of the introduction:

Range[20]

Why not just something like List[1..20] ? Maybe List[1..20] works in Wolfram, but then it's weird that you have this Range function that serves no other purpose than to define a Range.
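(For concreteness, a rough sketch of what that call produces, and a couple of more general ways of building the same list:)

  Range[20]              (* {1, 2, 3, ..., 20} *)
  Table[i, {i, 20}]      (* same list, via the more general Table *)
  Array[Identity, 20]    (* same list again *)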

Same with NestList[f, x, 5]. I don't offhand know how to create its output in something like Scala or Haskell, but I imagine it's pretty concise. Meanwhile, in this case... if it's possible to get the effect without using NestList, then why use it? And if it's not possible to get the effect without using NestList, then how would you ever find out that you can use NestList?
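(For concreteness, a rough sketch of what NestList produces and one way to get the same list from the more general FoldList; the Haskell analogue would be roughly take (n+1) (iterate f x):)

  NestList[f, x, 3]                 (* {x, f[x], f[f[x]], f[f[f[x]]]} *)
  FoldList[f[#1] &, x, Range[3]]    (* same list, built from the more general FoldList *)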

It just seems like the actual programming experience would be a lot more about hunting-and-pecking through documentation and Google searches, rather than reasoning your way to a solution.

I'd love to hear from someone skilled with Wolfram to offer some clarity. Are you just basically thumbing through the documentation all the time to find that one perfect function call?


You are the target audience of this book. The Wolfram Language has gotten huge, making it harder for beginners to find their way around the basics via reference docs.

That said, WL is in fact based on pretty simple primitives. If you are familiar with functional programming, things like NestList are familiar and you'll start learning by finding the analogues of your favorite FP constructs.

WL also has a lot of functions that you might call "optional": those that could be restated as simple combinations of other functions. The rationale there is pretty simple: if there is a well-defined, commonly used chunk of computation, it should be given a name (and design). The fact that a fair amount of work is put into both the name and design of these things gives a higher sense of coherence (and predictability) than you might expect from their number.


It's just like any other language. At first you're constantly looking at the docs to find the right function (perhaps library call is the better analogy here), but eventually you'll know what's important and where to look if you're trying to solve something new.


"The Wolfram Language has about 5000 built-in functions. All have names in which each word starts with a capital letter."

Amazingly, I get the impression they say it as a point of pride, rather than as an embarrassing design decision. Don't get me wrong, having a rich, large standard library is great, but unless you use some kind of hierarchical namespaces, it's impossible to deal with.


You speak a language with three to six thousand "common" words and over one million total words. English speakers utilize no such hierarchical naming scheme, only practice.

Typical Mathematica notebooks (I won't say "programs") use a fair bit less than one hundred of the common functions, plus a handful of ___domain-specific functions from the greater set of five thousand. At any time, you can enter ?FunctionName to access documentation far more comprehensive than most languages offer.

The UpperCaseForStandardLibrary and lowerCaseForYourCode convention actually makes a lot of sense in the context of Mathematica's primary use cases vs traditional software engineering projects. When I'm just trying to solve some problem in Mathematica, using my computer as a bicycle for the mind, I appreciate not having to putz with import statements or avoid name collisions.


There's no reason to expect a language like English which evolved organically to have an elegant or efficient design.


And thus, given the large amount of effective communication that occurs in English, there's no reason to expect that there is much of a downside in eschewing this particular language design approach that you propose.


> given the large amount of effective communication that occurs in English

I wasn't aware that anyone thought English was an effective language for communicating. It's horrible. Even native English speakers have problems communicating all the time.


Actually, English has a fairly high entropy-per-syllable ratio. [1][2]

Also, given that we can infer with decent accuracy what people are saying even in noisy environments, I would say it's really not too bad.

You could do better of course, but in a good spoken natural language you want a good mix of density (for efficient communication) and redundancy (for dealing with environmental variables). Optimizing for one comes at the expense of the other, and English strikes a pretty decent balance. [citation needed]

[1] Pellegrino et al., University of Lyon, 2011: http://www.ddl.ish-lyon.cnrs.fr/fulltext/pellegrino/Pellegri...

[2] There has been some valid criticism of [1].


The issue is understanding, not transcribing.

The existence of politicians (and politics) proves that it's easy to misunderstand.


I only mentioned that there is a large amount of effective communication that occurs in English. Surely this is not a controversial claim.


False. Even native English speakers have occasional problems communicating.


False. Miscommunication occurs in most pure English interactions, but it is only occasionally significant and even more rarely actually kills someone.

There is excellent research on text/email communication.


> Miscommunication occurs in most pure English interactions

Citation needed.

[Edit: On reflection, this statement is probably true, if you define "miscommunication" to be "the message sent is not exactly the same as the message received". But I think that is an unreasonable definition.

I think it is more reasonable to define "miscommunication" as "the message sent is not substantially the same as the message received". If you claim that, I'd like to see your citations.]


I can't find the study I was thinking of, which involved the extreme disparity between the sender's self-rating of their ability to convey an intent and the receiver's impression.

Here is a similar abstract: http://pro.sagepub.com/content/56/1/1491.abstract

Note that emotion is also conflated with aspects like priority and whether the response is a criticism or positive acknowledgement which will affect future interactions or even immediate attempts to "repair" non-existent faults.

I personally find it slightly easier to communicate with non-native speakers over email as any bilingual must have more experience handling ambiguous communication.

In working with native speakers in open floor plans, I have seen hilarious results that I would have found all the more hilarious if I were further away from the projectiles and had fully independent finances.

(But I will concede that English is a good language for talking abstractly about subjects that are clearly neutral where it matters little if nuances are misunderstood.)


> But I think that is an unreasonable definition.

I don't think unreasonable in the context of this discussion.

When we use English, having a bit of miscommunication is likely harmless, and probably goes unnoticed most of the time. When we write a computer program, we really want the compiler to understand exactly what we mean.


True, and good point.


And thus, given the large number of effective websites that are programmed in PHP, there's no reason to expect that there is much of a downside in using this particular design.


Well, PHP is certainly efficient for programming websites. It just comes at the expense of everything else.

Yes, bashing PHP is fun, and yes, I do it all the time. PHP may be a joke, but there are a bunch of PHP websites because PHP is a very, very fast way to program websites, even widely used ones.

If you could program a simple website in Idris in 30 minutes starting from no prior knowledge, then a large number of websites would be programmed in Idris (and the world would be a much nicer place). Since that's not the case, should we consider Idris a bad language with major downsides? Of course not; that's pointless, since it wasn't a language meant for building websites. PHP, however, is. Saying it has major downsides outside of its niche is somewhat irrelevant.


> PHP is a very, very fast way to program websites, even widely used ones

Sure, and none of that has anything to do with its horrible design decisions and inconsistent naming. This ability to whip up web sites quickly is not something that PHP achieves at the cost of bad design; it achieves it despite unnecessarily bad design.


Agreed. It makes me think of the amazing Guy Steele talk "Growing a Language."

https://www.youtube.com/watch?v=_ahvzDzKdB0


This is less of a problem when you consider that peripheral functions tend to have verbose, descriptive names. Is GeoProjection really fundamentally worse than Geo.Projection? The upper case convention by itself greatly reduces the chance of collisions with user-defined functions and variables.


Yes it is.

Either all of your geography-related functions start with "Geo" followed by a capital letter, in which case you're basically using a namespace while reaping none of the benefits: you can't use the namespace to avoid repeating yourself in some local context, you can't rename it, you can't programmatically inspect its contents, etc.

Or they do not, in which case you have an inconsistent mess.


Ok. So they basically use a namespace. Now, your objection is that the implementation of the builtin functions repeats Geo too much?

As a user, why should I care?


No, that's the weakest advantage, I've cited others. For instance, being able to rename the namespace is very important if you're using third party libraries which may have conflicting names.


Half the reason for the Wolfram Language's large standard library is that you should almost never need a third-party library of any kind. If there were a third-party library that was particularly useful to typical Mathematica use cases, Wolfram would just put it into the standard library...


& claim the code as their own


Mathematica has hierarchical namespaces (known as Contexts) with backtick as the separator.
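A minimal sketch (MyPackage and square are made-up names, just to show the mechanics):

  (* symbols live in contexts; the backtick-qualified name is unambiguous *)
  MyPackage`square[x_] := x^2
  MyPackage`square[4]         (* 16 *)
  Names["MyPackage`*"]        (* {"MyPackage`square"} *)
  $ContextPath                (* the list of contexts searched when resolving unqualified names *)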


PHP has no namespaces, inherits many C functions, and is quite popular.

It's easier to recall a function name than to recall the namespace and the function name, isn't it? With modern IDEs this is a non-issue anyway. Though from my experience with Mathematica 5-7, the language feels a bit weird. Nevertheless, it's quite an achievement that WolframAlpha (the AI search engine) runs on Mathematica code.


So the only implementations are Wolfram and the cloud Wolfram, both of which can go away or change incompatibly at any point, making my software worthless. I fail to see why I should invest a lot of time in this, given that I'd then have the Sword of Damocles hanging over my head in that fashion.


As a computer science PhD, I love Mathematica, the language. I also love the documentation it has, which makes hacking on it so much fun. However, I hate its notebook interface, which I think is extremely buggy and sluggish. That was the main reason I stopped using it two years ago.


I actually think it is a fantastic idea, but a mediocre implementation.


I don't really understand the point of the Wolfram Language. It feels like a language with a humongous standard library that tries to cover everything?


There's a "For programming language experts" page[1], which describes the language as:

> The Wolfram Language is first and foremost an evolution of the symbolic language developed for Mathematica over the course of more than 25 years. But it's considerably more than that—adding the knowledge, knowledge representation and natural language abilities of Wolfram|Alpha, as well as a new symbolic deployment mechanism created with the Wolfram Cloud.

This is followed by a "Buzzword Compliance" section, which lives up to its name, but describes some of the features of the language.

[1] http://www.wolfram.com/language/for-experts/


Its purpose is much the same as Maple or R: a language designed for mathematics and stats. The standard library is just one of the peculiarities of how Wolfram decided to design his language. Instead of using libraries as R does, it's all built in. Beneath that, it's not so terribly odd, being a mostly functional language with a standard library that covers maths requirements.


Beyond surface similarities like an emphasis on data transformation over side-effects, I wouldn't call Mathematica a functional language. It's much better classified as a term rewriting language: https://en.wikipedia.org/wiki/Category:Term-rewriting_progra...


Then I guess you don't call Lisp a functional language either?


Lisp's evaluation strategy is based on a very particular category of rewriting systems known as the lambda calculus. Mathematica's is an abstract rewrite system with a broad set of evaluation strategies that are not easily represented in the lambda calculus (without first implementing a term rewriting system).


The syntactical building blocks might be similar but the execution model differs. While you could obviously achieve any sort of term evaluation with a metainterpreter, the TRS in Mathematica offers a rather smart rewriting system.

It can easily express things like associative and commutative operators (esp. useful if you have a pattern that matches on a use of that operator), and alternate evaluation orders (outside-in, mixed, lazy, held, &c). These are hard to implement trivially without a very different kind of interpreter.
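A small sketch of that kind of behavior, using made-up symbols op and g and the built-in Orderless and HoldAll attributes:

  SetAttributes[op, Orderless]                  (* op is commutative: arguments are kept in canonical order *)
  op[x, 3] === op[3, x]                         (* True *)
  op[x, 3] /. op[a_Symbol, n_Integer] :> n      (* 3: the pattern matches regardless of argument order *)

  SetAttributes[g, HoldAll]                     (* g receives its arguments unevaluated *)
  g[2 * 3]                                      (* stays as g[2 * 3]; nothing forces the multiplication *)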

In this sense, most Lisp implementations pick a static evaluation strategy and leave implementing alternatives to a metainterpreter, which isn't exactly what's happening in Mathematica.


I've only used it briefly, but Mathematica: A Problem-Centered Approach (http://www.amazon.com/gp/product/1849962502) is a pretty good resource for learning practical (mathematical) Mathematica programming.


Can you use this language outside of Mathematica, or without purchasing Mathematica?


The FAQ[1] has this:

> How is it licensed?

> The Wolfram Language has multiple licensing models depending on usage scenario. It is available free for certain casual use in the cloud, in CDF Player, and on systems such as Raspberry Pi. It is available through site licenses at educational institutions. It is also available in a variety of subscription and paid-up product offerings. The Wolfram Language is also licensed for OEM use, embedded in hardware or software systems.

The "casual use in the cloud" may be referring to this: http://programming.wolframcloud.com/ (requires sign-up, haven't tried it), and I'm not sure how you'd go about grabbing the Raspberry Pi edition.

[1] http://www.wolfram.com/language/faq/


> I'm not sure how you'd go about grabbing the Raspberry Pi edition.

Easiest way is probably to start by buying a RPi.


Yes. The Wolfram Cloud:

https://develop.open.wolframcloud.com/app/

The Wolfram Language also ships with Raspbian for the Raspberry Pi.



