That's right: technically, Go has a structural type system*, which is similar to, but distinct from, duck typing. But the Go community often refers to Go as duck-typed because that term is more familiar to people coming from languages like Python.
"often" is stretching things a bit. The reason this terminology popped up is that it was in the official documentation, but it was (or is scheduled to be) taken out.
anyway, you're right, it's not duck typing. With duck typing, the checking happens at runtime. With Go, every variable has exactly one type at compile time - sometimes that type is an interface type - and there's no runtime type checking in Go's interface system.
The only requirement to duck typing is "an object's methods and properties determine the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface".
This is exactly how Go's interfaces work. You define the methods you require an argument to have, and then anything that has those methods can be used as that argument, without the original type having any knowledge of the interface that was designed. The only difference between Go and Python in this regard is that Python doesn't actually define what a duck is for a specific method, it is implicitly defined by the methods that are called on it. Go explicitly defines what a duck is, which makes it a hell of a lot easier to know if the object you're passing in qualifies as a duck without reading every line in the damn function.
Yes, Go checks that an object is a duck at compile time; that doesn't have anything to do with duck typing.
It is true that duck and structural typing have only a subtle difference. But it is also true that duck typing has to be at run time essentially by definition.
Structural typing looks for structural isomorphisms - it is total. Duck typing looks at runtime compatibility: as long as the portion accessed at runtime has the correct signature, it passes.
So structural typing is about isomorphisms and duck typing is about checking the correct boxes before entry. Both have the attribute of looking at structure but you cannot expect them to cluster types in the same manner.
In ML* terms I like to think of structural typing making hard clusters and Duck typing allowing for soft clusters.
Sure if you define the difference away anything can be the same.
But you cannot expect them to always behave the same way. That is, {f_d(t) : t ∈ T} ≠ {f_s(t) : t ∈ T}, where f : T → P(T) takes a type and returns the set of types in T equivalent to it, and f_s, f_d check for equivalence structurally and duck-wise respectively. You can expect f_s to define a partition over T, but not necessarily f_d. f_d is vaguely defined here, but if you pretend it simulates how a runtime check would occur, |f_d(t)| will tend to be ≥ |f_s(t)|.
One way to see that structural typing is not just compile-time duck typing: compile-time duck typing actually exists (in F#), and it is not the same as structural typing.
I'm beating a dead horse here, but is there a difference between "compile-time duck-typing" and "structural typing" besides just type inference? It seems to me that so long as the duck-typing language really does unify types, they are the same thing, except that with structural typing you have to write down the type somewhere and it can be a more restrictive type than the actual runtime calls necessitate.
yes, with structural typing, you have to be able to do everything a duck can do, not just the duckish things you're interested in locally. In duck typing, if all you're interested in is walk_like_a_duck, anything with walk_like_a_duck is acceptable, even something that doesn't talk_like_a_duck, because in reality, you never defined the "duck" type to begin with; the actual "duck" type doesn't even exist. In structural typing, "duck" would be a concrete type. "duck typing" is a pretty shitty name for it, to be honest, because of that saying "if it walks like a duck and talks like a duck, it's a duck", but the reality is it's more like "if I need something that walks_like_a_duck and this walks_like_a_duck then it's good enough for me".
let's put it in another perspective: if you have a function that takes disparate types, then the thing passed to that function may be of arbitrary type. It's unknown at compile time how much memory will be allocated for the instantiation of that function in order to accept the parameter, because the type being passed in is unknown. In structural typing, like in Go, you know at compile time the type of the thing being passed in, and therefore the amount of resources that will be required upon calling.
Or in another sense, an array of an interface value in Go has a well-defined memory layout, whereas an array of disparate types in a dynamic language does not have a well-defined memory layout at compile time.
internally, an interface value in Go is a two-tuple; it's an actual additional structure; it boxes the structure we're interested in. With duck typing, there is no box.
so yes, it's actually very different. The difference is more meaningful than people generally let on.
Isn't duck typing what would let it work without creating that new interface? Do you say it's like duck typing because you don't have to mention anywhere that your two previous interfaces satisfy your new interface?
The interface just defines the methods an object must have to be passed into this function. No types have to "implement" the interface in the traditional sense. Anything that happens to have the same methods as are on the interface may be passed into the method that requires that interface.
Basically, the interface just defines at compile time what Python defines at runtime... except it's a hell of a lot easier to see what methods are required for a Go interface than it is to see what methods are required for a Python function (you basically have to read the entire function and any function that function passes the argument to).
My understanding: in duck typing, only the part of the structure/method set that is accessed is checked at runtime for compatibility; in structural typing, everything is checked at compile time (even if those checks follow loose duck-typing rules). To the programmer they appear to be the same most of the time.
I'm not the least bit a Go programmer, but it sounds pretty different from a programmer's perspective to me. With duck typing your methods end up with conditional logic based around respond_to calls. With Go it seems like it's declarative. Whether you like that or not, you have to admit it's pretty different for the developer.
You never have to declare what fulfills an interface in Go. The fulfillment is implicit. If you define an interface with the Quack() method, anything with a Quack() method on it can be passed into a function that takes that interface, even if that object has never heard of the interface or the function. You never declare that a type "implements" an interface in the way you do for languages like C++ or Java.
Can you clarify why it can't be duck typing if it's static? For instance, in F# you can declare a function that can take any type that provides a specific method name and signature. You can pass any object to such a function as long as it has that method - there are no other similarities required in the types you pass to it. How is that not "duck typing"?
There is little difference between "structural typing" in go, where function signatures reference named and separately defined interfaces, and "static duck-typing" in F#, where function signatures using constraints effectively define an anonymous interface right there in the signature.
In either case, type errors are detected at compile time. Some people prefer to use "structural typing" for both (maybe using "implicit structural typing" to describe what can be done in F#), and to use "duck typing" to describe languages without type-checking, where type errors only happen at run time.
I think you could actually do this in C++, but not with runtime polymorphism. I think you should be able to define extractBody and extractTextBody as template functions. If the signatures (return type, name and parameters) of the functions of the interface are identical - and it looks like they are - then this would work as a C++ template function. Behind the scenes, the compiler would generate two different kinds of functions for you, one for each of the types you called it with.
(Please note I'm not saying this would be better, just pointing out that it's possible.)
Yes, it's possible to use C++ templates to accomplish this. But you'd still miss other features because the types of Header and MIMEHeader are not actually unified. Go would let you statically type-check at compile time, but also understand those types are the same (as "emailHeader") at runtime, so you could do this:
    x := ... // Header, MIMEHeader, or whatever you want
    v, ok := x.(emailHeader)
    if !ok {
        fmt.Println("It's not compatible with Get(string) -> string")
    }
For runtime polymorphism you'd have to define a wrapper of some sort, which in this specific case would be completely trivial since it's only one method. In more complex cases it gets pretty awful compared to the Go version, but it's not impossible (although AFAIK if the methods you care about have different names in the different implementations you'll need a wrapper even in Go).
Great article. As a side note, you might want to consider reformatting the blog post using syntax highlighting to make the code more readable and obvious. Thanks for posting though.
Everyone uses syntax highlighting, for everything.
I can't honestly believe the whole Go community just decided to be like, oh gee, well Rob doesn't like it, I guess we'll just ignore what everyone else is doing and do our own thing.
For what it's worth, I use syntax highlighting in my go code, and so does http://tour.golang.org/
And everyone uses it for a reason: color is one of the primary discriminators of human vision, it carries information efficiently and does it instantly, as clearly laid out in the Wikipedia article about visual search (pop-out effect) : http://en.wikipedia.org/wiki/Visual_search
It seems strange that Rob Pike would call it "infantile". I don't think it's very modern (nor mature) to say that either, since it completely ignores years of research done on the subject by scientists. But then again, to each his/her own.
Colour can help carry information. It is a lot easier to identify strings if you use colours rather than relying on spotting the opening and closing quotes. I know that I find it very useful - even more so my choice of colours for comments, which makes them blend into the background and easier to tune out.
I would liken it to the use of boldface letters to represent matrices (or vectors) in some math textbook: without this convention, you have to rely a lot more on memory to know what each symbol represents. Mathematicians and physicists everywhere have found this to be useful.
When you have a huge majority of programmers using syntax highlighting, you have to ask yourself if there is not an intrinsic value to its use and, if so, perhaps try to use it to communicate with the larger community, rather than adopting a cliquish practice.
My suspicion / theory is that most programmers turn on syntax highlighting for the same reason that they have desktop wallpapers: to make their lives more colorful.
Good language-aware syntax highlighters do a lot more than highlight keywords and other syntactic things, and can make code more readable in a nontrivial way. For example, Eclipse's Java editor will color things differently based on the scope you pull it from, which lets you distinguish local variables, non-static fields, static fields, and inherited fields. It does a similar thing for method invocations, including a strikethrough for methods marked as deprecated. This saves having to manually go to a bunch of definitions in order to get the proper context when reading code you're not familiar with.
When your externally mandated build process is really slow, you appreciate things like easily noticing an unclosed quote, or a missing */ that was supposed to close a comment.
That doesn't make failed builds any less frustrating. Heck, in Python, you don't even have builds, but getting a SyntaxError on import is still frustrating.
To be honest, I often feel as if I were in a brothel on some blogs with CoffeeRubyNodeScript highlighting. (However, the choice of colors on this one isn't too good either.)
"I would liken it to the use of boldface letters to represent matrices (or vectors) in some math textbook: without this convention, you have to rely a lot more on memory to know what each symbol represents."
This is an excellent comparison, as programming languages probably have more in common with mathematical notation than natural language prose.
I've been working in C/C++ for over 20 years now. I started playing with Go about two months ago, burnt through the tutorial in 2 afternoons, and was hooked. It's very easy coming from a C background: it has constructs I'm familiar with, but also picks out the best parts of Python-ish, Smalltalk-ish languages, while still being compiled.
I'm looking forward to when I can write Android apps in it, instead of mucking around with Java.
Exactly... the day Go compiles for mobile devices is the day I'll dig into mobile development.
But the thing with mobile OSes is, you need a sane binding to the native GUI lib.
However, it would be awesome to circumvent this and just be able to do what I can already do with GLFW and OpenGL across 3 different desktop OSes: open a blank window that's just a canvas for GPU (GL) commands. Seeing how OpenGL ES is on all smart phones, having an equivalent "mobile GLFW" library and GLES binding would be awesome and good enough to roll one's own snappy GUIs.
Oh, all that only once Go compiles for said mobile OS / HW in the first place, which is probably still a few years off...
I went into the article thinking it was going to be about some way of renaming functions or something. Probably just my limited internal definition of the word refactor. After reading I would say Go has a cool built in way of using the adapter pattern. More interesting than renaming functions.
Actually, Go makes this somewhat easy too. Check this: http://blog.golang.org/2013/01/go-fmt-your-code.html It's possible to do context aware source code substitutions. Works great. (Not that it's something new or groundbreaking though, just relevant to your expectation)
Folks, recursive data structures are CS101. Processing them is also a trivial, template-filling exercise. First you specify the (recursive) datatype, consisting of the simple "leaf" elements along with the complex forms. Then you write your program as a series of mutually recursive functions.
What matters is that once you specify the data type, the code that processes it follows immediately from the grammar.
I think you're slightly missing the point. It's not the use of recursive functions or data structures that is interesting. It's that, in Go, you can add a new type which covers externally defined types, and those types are now instances of the new type automatically. In other languages, to unify two types like this with a common interface, you would probably have to define a wrapper class which proxies the calls to the real objects. Even though the methods for the two classes have matching type signatures and do the same thing already.
Haxe has a similar (but not quite identical) bag of tricks which act as a "gradient" of stronger/weaker couplings.
You have to specify that an implementation can use a specific interface, but you may overload an implementation with multiple interfaces. http://haxe.org/manual/2_types#interfaces
I don't completely see the point - in more established OO languages you'd make two one-line wrapper classes that both implement Get() and be done with it. Not stuck at all. Extra indirection, yes, but is your performance going to block on parsing MIME mails?
I assumed you would do the same thing as was done in the article.
Create an interface and have a function that takes implementations of that interface as a parameter. The difference is that in Java or C# you would need to explicitly mark Header and MIMEHeader as implementing emailHeader.
I don't think much will come of discussing what is/isn't possible in a language given that they are all Turing complete. Usually posts like this are used for discussing what's easier or cleaner in a language.
For what it's worth I think having dynamic interfaces is cool, but I can only really think of one place I have actually relied on that capability.
Mostly the "dynamic"-ness is nice for decoupling code. You can have a package (A) containing a function that takes an interface as a parameter, a package (B) containing an implementation of the interface in A, and a package (C) which passes the implementation in B to the function in A.
Given this situation C needs to import A and B but neither A nor B need to know about each-other.
> To be fair, you can do this in java by adding to Header and MIMEHeader "implements emailHeader".
Not if you aren't the author of the package. (You could modify your local copy, but requiring a fork is always a pain.) Without the ability to modify the sources, you would have to create custom subclasses of Header and MimeHeader which implement that interface, and find some way to make the library generate your custom subclasses instead of the defaults.
One potential downside is that it's a lot harder or more computationally expensive to figure out class/interface relationships in IDEs or other tools. (I haven't used Go, though, so I'm not sure what kinds of tools are available for it.)
Generally, "what classes implement this interface?" is a question I need to ask quite a bit when working with Java in Eclipse. The simple way in Go (looking through all classes) would generate a bunch of false positives in a large enough codebase, and the more precise way (looking through all implicit casts everywhere to find all classes that are implicitly casted to the interface) is probably pretty slow or memory intensive and could potentially have false negatives (e.g. implementing classes that haven't been hooked up yet, and uses that aren't contained in your source tree).
There are other Eclipse operations that rely on this that would also be a lot harder to implement (in the same way they're implemented in Eclipse):
-"Find callers" on a class method includes calls to that method on superclasses and superinterfaces.
-"Rename method" automatically renames all methods with that name that are reachable through a path of implementer/interface relationships.
There is another downside, and that is when interfaces are used to tag semantics. Say that I have classes that compute metric space distances. I could make an interface MetricSpace with a Distance method. However, in Go, every struct that implements a Distance method with the same signature will implement the MetricSpace interface. In other languages with interfaces, only the types that are explicitly marked as implementing that interface will implement it.
An aside, it is not "duck typing" if it involves a static type system.