Isn't that the problem those technologies try to solve, getting rid of subjectivity? In a way, this could be interpreted as trying to be free of politics.
If my understanding of what they are trying to accomplish here is correct, then if the system allows it, it is, by definition, legal.
If you require a framework in which something allowed by the code but with unexpected consequences is illegal, you are back where you started.
You're being thoroughly downvoted, but I don't see anything wrong with your argument. The person who pulled off this "hack" was following all the rules. In fact, I could even see the argument that by undoing this transaction, the people who started the DAO are in fact going back on their word. They promised that they would build a system which would obey certain rules. By undoing this transaction, they are no longer following those rules. Now, obviously there's a strong argument for rolling back the transactions, but at this point ethereum has lost any claim to objectivity or decentralization.
>> obviously there's a strong argument for rolling back the transactions
Absolutely not. Such an act would entirely destroy the credibility of the entire system and its future. If not rolling this back is detrimental to the system, then it has already failed and should be dismantled. Who would continue to use a blockchain that permits such a thing? It completely defeats the purpose.
The apparent typo is odd, IMO. I don't know the language in which that program is written, but looking at that snippet, "Transfer" takes 3 arguments whereas the "transfer" function takes 2 arguments. Wasn't there any code review involved when this made it into the codebase? Assuming there was some code review and the reviewer just missed it (which is very much possible), wouldn't a basic unit test have easily caught this bug? After all, the test would have checked the balances etc. after execution of these functions. Aren't there any test cases around this?
I think after sleeping on it that I agree with you, this is more likely the outcome of bad refactoring. You can read the code commits on github if you like; I don't have them handy, unfortunately.
transfer is a very poorly named function, of that there is no doubt. And Transfer is badly named as well.
transferAndLockTokens vs LogTransfer would be much, much better.
I couldn't follow that Transfer vs transfer call either; the argument counts don't match, and I don't know where 'amount_' comes from.
The code is exceedingly painful to read, and makes me wish I had been involved in this effort before it made the news. I can't help but wonder, lack of unit tests aside: is this the first time they've run this code? What sort of decision-making led to this outcome? I want Hanlon's razor to apply, but given the amount of money involved, I am not entirely convinced.
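For what it's worth, here is a minimal sketch of how the two names could legitimately coexist in Solidity, assuming the DAO code follows the usual token convention of a capital-T Transfer event alongside a lowercase transfer function (illustrative names and bodies, not the DAO's actual code):

```
contract TokenLike {
    mapping (address => uint256) balances;

    // Event, capital T, three arguments: firing it only appends a log entry.
    event Transfer(address indexed _from, address indexed _to, uint256 _amount);

    // Function, lowercase t, two arguments: this is what actually moves tokens.
    function transfer(address _to, uint256 _value) returns (bool success) {
        if (balances[msg.sender] < _value) return false;
        balances[msg.sender] -= _value;
        balances[_to] += _value;
        Transfer(msg.sender, _to, _value); // log the movement
        return true;
    }

    // A call like the one in the DAO snippet compiles fine, but only logs:
    function noteBurn() {
        Transfer(msg.sender, 0, balances[msg.sender]); // log entry only; balances unchanged
    }
}
```

Under that assumption, calling Transfer(...) compiles and runs without complaint but only writes a log entry, which would explain both the three-argument signature and why nothing obviously fails at runtime.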
If I understood the explanation in that article correctly, what's even more interesting is that this typo apparently hasn't yet been fixed in the repos (assuming I'm looking at the right ones).
The slockit account repo has a commit made six days ago with the commit message "Protect against recursive withdrawRewardFor attack" https://github.com/slockit/DAO/commit/f01f3bd8df5e1e222dde62... but it doesn't fix the typo, and there aren't any more commits after that one.
Does this mean the upstream repos containing the typo haven't been fixed yet? Or am I just looking at the wrong repos/branches? Or did this typo only become known a few hours ago?
If it has a provision for forwarding all assets (with the necessary internal state detailed) to another address/DAO, then you can basically perform a migration to a fixed version.
There is no such provision. You can transfer the "balance" of a contract to another, but not any of its internal state. This is left up to the programmer and, AFAICT, they didn't implement this feature in the DAO contract(s).
This is what I wanted to ask about. I don't even know what language this is. Is Transfer even a legit function, and if not, does this language just keep on chugging for non-existent function calls?
Certainly one of the challenges for the project is implementation and security issues in the Turing-complete language they decided to create for the purpose of smart contracts. PostScript and ActionScript have probably averaged a security flaw a month for the last decade; I can't imagine getting Solidity correct from the start.
Case sensitivity in any programming language is crazy. I can't for the life of me see a valid engineering principle that accepts IsTheOne() and istheone() being different bits of code. Oh sure, at a technical level the computer has no problem with it; the problem is restricted to those oh so error prone humans.
Can anyone here honestly say that, if they were doing a code review, they'd approve two functions distinguished solely by a difference in case?
And don't even get me started on filenames that only differ by case. If that's a good idea then I think trailing whitespace should be significant too!!!
There are languages that use lowercased names for constants (Erlang's atoms) and uppercased Names for variables, which are immutable and quasi-constants anyway. This is uncommon, but many other languages use uppercase for classes (Person) and lowercase for variables, as in Person person = new Person(). That last example is one reason for having case-sensitive languages. I can't think of any language I used in the last 30 years that was case insensitive. Maybe some BASIC interpreters on home computers in the 80s? But not all of them had upper and lower case characters to start with.
Linux has a notoriously case-sensitive file system, with significant trailing spaces (they are not special characters). Mac and Windows also have case-sensitive file systems, but that feature is turned off by default.
I don't know why you've been downvoted, but anyone who disagrees with you needs to explain in what situation it would make sense to have two different functions or files with names differing only in case.
I have used all-uppercase to make a distinction like class vs. instance in variables (in case-sensitive languages in which the class might be an ordinary held-in-a-variable value, like Javascript), and I might do it again. But it's very unusual, and it's also the kind of practice that is more suitable for a 1KLOC project that will receive 100 hours of effort from a single maintainer over its lifetime than for a bigger project with many maintainers and a highly motivated community of attackers.
I don't think there was ever a case when I was tempted to name two functions with different cases, but if I ever had to write a modest-sized 1-maintainer system in which many functions came in exactly two different flavors, I might be tempted. (Perhaps threadsafe locked vs. raw? or some C++-like distinction between raw functions and closure-like class instances which can be used in a function-call context? or raw functions vs. wrappers with the extra plumbing required to let them be invoked from a scripting language?)
afterthought: And now that I think of it, in old C code I think I vaguely remember working with macro vs. function implementations of the same operation distinguished by capitalizing the name, and I don't think the name convention was an urgent problem. C macros can breed various errors, but I think bitbang_macro vs. bitbang would breed pretty much the same errors as BITBANG vs. bitbang.
In those situations, it would have been much more readable to have classFoo vs foo, foo() vs nonThreadSafeFoo(), etc.
The bigger point is that while you can come up with creative ways to take advantage of case sensitivity, it's not something you would have missed if the language were case-insensitive. From that point of view, case sensitivity has no benefit, only a cost: it leads to irritating errors from the compiler, or runtime errors in dynamically typed languages.
If something has no benefit and only a cost, we should get rid of it.
It's not that it would make sense; it's more that the language makes it the programmer's fault.
Next week's stupid but expensive typo will be received() instead of receive(). What will you demand then? That the compiler reject names based on Levenshtein distance?
It's the job of a style checking tool, not the job of the compiler.
Saying it's the programmer's fault is another circular argument. It is the programmer's fault only in languages that make it the programmer's fault; I would say it's actually the language designers' fault.
That's a circular argument. They are different only because the language says they are different, while the post you've responded to says (correctly) that that's bad language design.
Assuming you are not trolling: yes, they are indeed represented with different bits, which doesn't matter, since those bits are not the bits executed by the CPU in the end. The "words", not the bits, are interpreted or compiled into machine code, which is where bits matter.
Some computer languages are case-sensitive for their identifiers (C, C++, Java, C#, Verilog, Ruby and XML). Others are case-insensitive (i.e., not case-sensitive), such as Ada, most BASICs (an exception being BBC BASIC), Fortran, SQL and Pascal.
I couldn't agree more with this sentiment. I haven't spent too much time on Ethereum contracts but this is such a ridiculous language/platform design issue.
Funny thing is, it reminds me of implicit typing of int/float based on the first letter of the variable name that ye olde FORTRAN77 had as a major footgun.
The problem is one of trust. Contracts that can be formally verified, or that by the nature of the language they are written in are mathematically guaranteed to be free of certain bugs and behaviors, can be 'TRUSTED'.
It's a very bad thing to not be able to trust that your money can't be trivially stolen.
Are we sure that case sensitivity is the culprit? As others have noted, the function being called takes (and is given) 3 arguments but the function that is supposed to be called takes only 2.
For any language that allows non-ASCII identifiers, trying to define "case insensitive" is a minefield best avoided. For example, with general Unicode characters, "case-insensitive" comparison doesn't give you an equivalence relation on identifiers because transitivity fails.
Of course you can define things to be case-insensitive only in the ASCII range, and treat your non-ASCII stuff as second-class citizens...
What a shitshow. These people don't need "advanced static analysis tools" [1], they need a big old bag of clue.
[1]:
Developers, cryptographers and computer scientists should note that any high-level tools (including IDEs, formal verification, debuggers, symbolic execution) that make it easy to write safe smart contracts on Ethereum are prime candidates for DevGrants, Blockchain Labs grants and String’s autonomous finance grants.
OpenSSL had some code cruft too, and god knows that didn't stop it from being deployed in production. People use code because it works, not code that's perfect.
That's not good enough for software that's meant to be money. Besides, OpenSSL has the excuse of being old. Why would you design a new system to be crufty from day one?
I wouldn't design any system to be crufty or have bugs. I was just pointing out that it happens. And honestly, it probably gives us a lot cooler stuff than we'd get if software production were restricted to developers certified to produce best-practices code.
I feel like without weird blood we'd only ever get enterprise-approved spreadsheet software.
PS: Caveat. Personally, of course I'd never use stuff like that for mission critical deployments until it's been battle tested.
Bored computer scientists: analyze contracts that are less popular but still have money attached to them and see if you can locate a bug. Apparently you can get the money legally and the only way to stop you is to hard fork the entire currency.
It reads like the debugging that you've done a million times if you are working with ordinary, mutable, C-family code.
To start off with, you think you're developing a valuable skill in being able to follow a call stack along with a bunch of variables. It certainly has its uses, but after a while you come to wonder why it always ends up this way. Sure, you can understand every bug eventually, but surely there's some reason why you're constantly reasoning about the state of the variables and the order of the function calls. You can write unit tests, but of course you can't think of all the undesired behaviours beforehand.
The author does mention that something functional with strong types is necessary. Probably a good idea.
As for the split function, there's a question of whether it is really necessary. I wouldn't have thought an investment fund would need a split function. An ordinary fund doesn't; if you're in, you're in for whatever proportion that you've put in. Why not just make a second fund with the same (reduced) code if you have some other group of investors? More functions = bigger attack surface.
It looks like this is a typo. The question is, "does that matter?" It seems to me that either the code means what it says, or it doesn't. If a cabal of humans is able to overrule the machines, then what is the point of Ethereum? Wouldn't we just be trading one group of authorities for another?
I want to see Ethereum work, and it is extremely unfortunate that basically THE first high-profile smart contract has gone sideways. But I'm not sure I want there to be anything we can do about it. The whole point of this project is that the blockchain is supposed to be like an indifferent force of nature.
I'm curious, are there no tests for the code running this? It seems like some of these bugs would be caught with them (especially the typo). And it seems like insanity to not have them.
So, this review makes me wonder: Will the hard fork of the ethereum blockchain be of any use at all? The DAO appears to be fundamentally flawed, are all the coins there simply lost? What prevents an attacker from exploiting the DAO after the hard fork? Or will they simply invalidate all the transactions that went to the address of the DAO?
As I understand it: the goal of the fork is to freeze the DAO account starting just before the 'split' drain (at the blockchain level), and then figure out a long-term solution (possibly using another fork to return all ETH to the original contributing accounts, dissolving the DAO?).
The ability of a core of developers and miners to exercise their will over the system like this is alarming, though.
This function will reduce user balances before the vulnerable withdraw function is called. So instead of the logging function, we should have:
if (!transfer(0 , balances[msg.sender])) { throw; }
This would <snip>.... also reduce the tokens available to the user later on
The more I think about the typo and the explanation of it in that article, the less clear that whole codebase is to me.
Setting this hack aside for a moment: if that is indeed a typo, and it should instead have called the lowercase transfer method to really reduce the user's tokens, then does this mean that all the DAO contracts (or whatever the right term is) executed to date have been affected by it, and as such the entire ecosystem state was already messed up before this hack?
Nothing in that article suggests that this code flow, where the apparent typo is present, applies only to this specific hack.
So the bug is only exposed if the recipient of the reward transfers out the balance during the execution of withdrawRewardFor().
In fact, to me it doesn't look like there's a typo at all there - the call to Transfer() was fully intended just to log the burning of the tokens that will be accomplished by the later lines setting the balance to zero. The bug is simply that the balance isn't adjusted before the attacker gets a chance to execute code.
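To illustrate, a minimal sketch of that ordering problem with hypothetical names (not the DAO's actual code): the external call hands control to the recipient's code while its balance is still non-zero, so that code can re-enter and withdraw again; moving the state update before the call closes the hole.

```
contract VulnerableFund {
    mapping (address => uint256) balances;

    function withdrawAll() {
        uint256 amount = balances[msg.sender];
        // BUG: the external call runs the recipient's fallback code first;
        // that code can re-enter withdrawAll() while balances[msg.sender]
        // is still non-zero and withdraw again.
        if (!msg.sender.call.value(amount)()) throw;
        balances[msg.sender] = 0; // too late
    }

    function withdrawAllFixed() {
        uint256 amount = balances[msg.sender];
        balances[msg.sender] = 0;                    // update state first...
        if (!msg.sender.call.value(amount)()) throw; // ...external call last
    }
}
```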
The biggest problem I see with the code at https://github.com/slockit/DAO/ is the absence of unit tests. The code looks complex; how the hell can you make sure that it's right?
The Solidity wiki doesn't have a single entry on testing either:
I don't get how the transfer vs Transfer typo is supposed to have enabled a larger attack. The token balance is zeroed in the splitDAO function anyway, and presumably the sanity checks in transfer are also redundant, or else there would have been no need for recursion.
if you could 100% accurately and reliably model a contractual understanding in any kind of language, it would have been done eons ago. human language (i'm fond of english in particular), is extremely expressive.
but there's a reason we still have courts and judges. it's because language is the tool through which humans express their desires and fears (computer language included, because it is written by humans, so far), it's not a machine.
I don't think there was any pressure to deeply formalize contracts before this, because even if you did, there would still be recourse to a human judge (not just a compiler). If one party didn't like the strict reading, they could sue.
The question playing out right now is "is there a human judge behind the code?" If a set of humans (miners and developers) successfully overrules the code, it will seriously undermine the whole premise of this project.
Modelling financial contracts in an imperative event-driven paradigm just seems like an accident waiting to happen.
EDIT: To the downvoters, if you disagree, I would very much like to understand why, please reply with a comment.
EDIT2: My position is that it is very difficult to reason about correctness and maintain invariants in a highly imperative setting. IMHO, it would be more desirable to use a declarative, or functional language. There are many successful commercial applications of functional programming to financial contracts, e.g. Lexifi, which is based on a subset of OCaml and various in-house solutions developed by banks. Using a functional programming language would also mean the code is far more amenable to formal methods and static verification. One new startup I was very impressed with, aestheticintegration.com, has had success rewriting financial algorithms in a functional programming language in order to prove and verify numerous correctness properties about them. It is a huge shame that folk do not reach straight for the functional programming toolbox when faced with these types of problems.
This is the most insightful comment about this incident that I have seen so far. The problem is that the Ethereum system uses an imperative language to describe smart contracts. This opens the door very wide to the kind of bugs that led to the theft of $200M from The DAO. The correct solution would probably have been to design a ___domain-specific declarative language that provides abstract building blocks for smart contracts. This would have made it possible to hide the procedural complexity in the interpreter, and contract bugs like the present one would not have been possible. Even if there was a problem, it could likely be fixed in the interpreter, leaving the contracts and the blockchain untouched! This means that a lot of unanticipated problems would not have been fatal for the contracts. The problem with the current approach is that it conflates the business logic and its implementation, thus increasing the burden on contract authors and increasing the attack surface.
In many other domains, a suboptimal choice of language paradigm would just add some extra friction, but in the ___domain of smart contracts it can lead to catastrophic failures. Many people claim that the cause of the present disaster is just a faulty contract and that the Ethereum system is working correctly. Some even say that the present case shows how well the technology is working. But willtim's comment makes it clear that there may be a fundamental problem with Ethereum. I wonder whether the imperative paradigm was a conscious design choice or something that just happened because the developers didn't even realize it was a design choice.
Edit: I wrote this before willtim's added his edit 2.
I think you are placing far too much emphasis on language, especially as so far there's little evidence that some programming approaches yield fewer logical errors than others. I often see a similar sentiment expressed by people who are familiar with functional programming but unfamiliar with software verification, and may be unaware that nearly all verified software in industry is written in imperative languages. I know many FP people like to believe that the local guarantees made by their chosen languages end up making their programs more globally correct; it may well be true that some languages make some errors easier for humans to spot (although we have little evidence to suggest that this is indeed the case for FP). However, as far as formal verification is concerned, there is very little overlap between verified software and FP.
Ensuring correctness -- i.e. conformance to a specification -- is, in general, a task that requires effort -- by man and/or machine -- proportional only to the complexity of the algorithm verified (i.e., the number of states reachable by a TM implementing it) and almost not at all to some linguistic properties. That is not to say that languages cannot prevent local errors that may be catastrophic, but the difference between the best of languages and the worst of languages when it comes to verifying spec-compliance is not that big.
> Using a functional programming language would also mean the code is far more amenable to formal methods and static verification
That is not true. If anything, imperative languages have better formal verification tools than functional ones. And besides, for global program "correctness" properties, the choice of a language more or less amenable to formal reasoning may make a difference that is surprisingly small.
Of course most of the money has been funding tools for imperative languages as that is the status quo. But this does not prove that there is little overlap with FP. Research FP languages like Coq have enabled INRIA to build a fully verified C compiler (Compcert), can your tools accomplish this? As far as Europe is concerned, FP does appear to be the future of formal verification.
> Research FP languages like Coq have enabled INRIA to build a fully verified C compiler (Compcert), can your tools accomplish this?
First of all, end-to-end verification is something very unique. CompCert is a great accomplishment, but it is only a medium-sized program, it required a world expert to write, and even then it took a lot of effort, and he still skimped on the termination proofs.
Imperative tools, OTOH, are used every day by thousands of engineers to write verified software of various kinds (e.g., [1]). So the answer to your question is: except for end-to-end verification, which is extremely costly with any approach, they can do better. In fact, as promising as FP approaches may be, they are still very much in the lab with nearly negligible industry use (and in the one or two cases Coq has been used in the industry, it was always done jointly with academics).
Also, I'm a bit puzzled because you specifically mentioned OCaml, which is an imperative functional language.
In fact, while the difference between programming languages in terms of general amenability to formal methods is not great, there are certainly large differences in amenability to model checking. This is why a language like SCADE (based on Esterel and the FRP-ish Lustre) is fairly widely used in aviation software. If anything, a synchronous language like Esterel or the modern Céu[2] is a better choice if you want your verification to be as automatic as possible and to run in a resource-constrained environment (that's killing two birds with one stone).
> As far as Europe is concerned, FP does appear to be the future of formal verification.
Even if you look at INRIA, the most European of institutions and the birthplace of Coq, you'll see that they dedicate significant verification research to other approaches[3]. At Max Planck, FP is a clear minority[4].
And if you look at the academic community at large, without artificially restricting it to Europe (which certainly has a linguistic bias), you'll see that the verification community is certainly not betting on FP (although it is certainly exploring it). Don't get me wrong: FP and linguistic approaches are certainly one avenue among several showing promise in verification. But it is downright false to say that this approach is recognized as "the future of verification".
I mentioned Lexifi, a contract language, which is based on OCaml and is not imperative.
I restricted my argument to Europe as we appear to be less commercially influenced (e.g. MIT teach freshmen Python, Oxford teach Haskell). Predicting the future is always subjective. I offered only my opinion.
> I mentioned Lexifi, ... which is based on OCaml and is not imperative.
Oh, sorry.
> we appear to be less commercially influenced
I don't think this has anything to do with commercial influence -- just academic tradition and maybe even philosophical influences: I think you'll get different answers to the question, "does an algorithm or a computation exist outside its description in a language, and if so, in what way?" This may be tied to actual philosophies, Derrida in France vs. Kripke in the US[1] (to prove the connection, a "Kripke structure" is a very common mathematical structure denoting an "abstract" computation, which is used commonly in non-language-based verification). I've been told that the study of program semantics tries to reconcile the two.
As to Haskell vs. Python, first, MIT used to teach Scheme; they have explained their reasons for switching to Python. Second, this difference is more indicative of the competition between "theory A" and "theory B". Theory A studies algorithms and complexity (and Haskell is not the best first pedagogical choice for that), while theory B studies logic and semantics (and language). Theory B is relatively small, but it does have a stronger presence in Europe. Although, even within theory B there seem to be those who prefer linguistic approaches (semantics) and those who prefer conceptual approaches (Kripke structures, abstract machines).
[1]: Not that Britain is too fond of French philosophy, but the Brits had Robin Milner, who is as close to a CS deconstructionist as you could find.
It's worth pointing out that most of the formal methods you have linked to involve contracts or specifications as separate stratified layers written in various logics. No Turing machines involved. This is essentially functional programming as far as I'm concerned. The advantage of the dependent-type approaches like Coq and Agda is that the language of proof and implementation is now one and the same, but I was not necessarily advocating their use, as you seem to have assumed. Even without dependent types, a total functional program can serve as both an executable specification and a logic from which one can derive proofs (e.g. via an SMT solver). This is what I meant when I said functional programming is more amenable to formal methods. So any tool that uses dataflow analysis to convert imperative programs into functional ones, or requires me to write functional programs to describe what the imperative program should be doing, doesn't provide a counterargument in my mind. Finally, in my very first comment, I provided a link to a startup who are planning on using FP and SMT solvers for formal verification of blockchain contracts, so watch this space...
When are Turing machines ever involved? Verification tools often make use of abstract state machines as the subject of their study. I am not aware of a single one that deals with Turing machines (nor am I aware of a single TM programming language being used in the industry). TMs serve an important role in theoretical computer science, in automata theory and complexity theory, but not in verification or PL.
> This is essentially functional programming as far as I'm concerned.
It isn't. Logic was applied to programs long before FP was in vogue.
> The advantage of the dependent-type approaches like Coq and Agda is that the language of proof and implementation is now one and the same,
Two things. First, you can implement a system in Isabelle, too. Second, that advantage -- if it is one -- is rather theoretical at this point. Dependently typed languages have never been used outside of the lab, and have never been used in a large project. The approach, BTW, has plenty of disadvantages. First, the languages are very hard to learn, and require some very exotic, obscure math; second, some very important properties like time and space complexity -- that are often just as essential as logical correctness for those application where correctness matters most -- are hard to express, and there's no consensus yet on how to best reason about them. In short, dependent type approaches may one day be useful, but in general, we're not sure exactly how to use them yet.
> a total functional program can serve as both an executable specification and a logic with which one can derive proofs from
That is absolutely true, but this has some very severe disadvantages as well.
> so watch this space...
I have no doubt. But you should be aware that FP approaches to verification are just some of the many avenues in verification research these days, and are not generally accepted to be the most promising ones. In addition, unlike other approaches, they have pretty much never been put to the test outside the lab. Finally, SMT solvers are successfully used to verify Java programs (with JML specifications), and in the research world, Microsoft's Dafny[1] -- an imperative, non-functional (I think) language -- is a very popular choice in verification challenges (alongside tools like KIV[2] and Why3[3]). They also, I think, serve as the core proof method of SPARK Ada, a heavy duty verifiable language, with a good track record over many years in the industry.
Most software is written in imperative, Turing-equivalent languages, so Turing machines are involved all the time. The fact that most verification software is not capable of dealing with them demonstrates a language issue. Which is the point I have been trying to make.
> It isn't. Logic was applied to programs long before FP was in vogue.
I'm advocating modelling contracts with Logic, instead of Turing Machines as used by Ethereum. I didn't mean to create a tribal boundary by using the term FP.
> Most software is written in imperative, Turing-equivalent languages, so Turing machines are involved all the time.
I don't understand what that means. Everything is "Turing equivalent" in some sense[0]. It's like saying that since every language can be compiled to C, then C is "involved all the time". That would even be more accurate, because our programs are most certainly not TM-equivalent. Proof: the asymptotic complexity of a real program and that of a Turing machine for most algorithms differs by a power of 2 or 3. Therefore, programs and TMs cannot possibly be equivalent. If you had said RAM machines at least you would have been somewhere in the ballpark, and RAM machines are not Turing machines (see https://en.wikipedia.org/wiki/Random-access_machine).
> The fact that most verification software is not capable of dealing with them, demonstrates a language issue.
But that is not a fact, because 99% of the thousands of very real, verified production systems being constantly worked on are written in imperative languages, and roughly 0-1% is written in functional languages. Quite the contrary, verification software handles imperative programs better than functional programs. You don't have to take my word for it. See what one Xavier Leroy (author of CompCert) says on the issue here[1] and here[2]. So far, verification in functional languages requires inordinate effort.
The question of how much verification is tied to language is very much an open question.
> I'm advocating modelling contracts with Logic, instead of Turing Machines as used by Ethereum.
I wonder what you'd say about temporal logic, which is not only the most popular logic in software verification but one that models state machines.
As to Turing machines, I don't think you understand what they are. A Turing machine is one particular automaton, a particular instance of an abstract state machine, and a universal computational model, which is usually applied directly only in computational complexity theory, where it serves as a common cost model. The lambda calculus is a rewriting system (typed versions of which can be tied to one particular kind of logical reasoning via the C-H correspondence), another universal model of computation, and -- guess what -- another instance of an (ND) abstract state machine!
In short there are many concepts here that you mix up. Turing machines, however, are one concept that is completely absent from both PLs and software verification (where it tends to turn up only in the marketing speech of some FP advocates). No one programs Turing machines. Program logics most certainly apply to imperative languages, and in fact, this is -- to date -- where they are most used, and quite successfully.
[0]: And if you're referring to total languages then you're confusing things further. Totality has little effect on verification; it is required in order to not introduce logical inconsistencies in typed lambda calculus, and is only relevant when dependently typed languages are used to prove general mathematical theorems -- not verify programs.
I can assure you I understand what a Turing machine is. I brought them up because the less powerful abstract machines you mentioned are not sufficient to (fully) express Ethereum or most other mainstream imperative languages.
You seem to argue that imperative programming is just as mathematically rigorous as FP, because one can have abstract state machines in mind, yet strongly object when I compare Ethereum to a Turing Machine. You also appear to suggest that formal verification has no problem dealing with computations that require Turing equivalence (by dismissing my point as 'false') and then state that Turing Machines have no place in formal verification. If I sound mixed up it's because I'm trying to make sense of your position.
> Everything is "Turing equivalent"
This doesn't make any sense. As you say, there are less powerful automatons. A programming language does not need to support universal computation, for example Unix RegEx.
I cannot disagree that imperative languages are more popular, but again, that is no proof that we are better off using them if we want to easily build correct software.
> because the less powerful abstract machines you mentioned are not sufficient to (fully) express Ethereum or most other mainstream imperative languages.
No, again -- you are confusing specific machines (like TMs) with abstract state machines. Abstract state machines are a super-set of Turing machines (some are equivalently powerful in terms of computability, albeit "faster" wrt time complexity -- e.g. lambda calculus or RAM machines; others are less powerful). But Ethereum is not a Turing machine, just like Haskell isn't. Turing machines are a very specific thing, like x86 assembly. Ethereum is not based on x86 assembly, just like it isn't based on Turing machines.
> but again, that is no proof that we are better off using them if we want to easily build correct software.
I agree; I didn't say they're any better. But you claimed that FP is better, and there's certainly no evidence to support that. Some extremely powerful formal verification tools, however, currently have no good, production-quality applications for FP.
The confusion here is that you repeatedly seem to misunderstand my point; perhaps that is my fault, as I do not have the time to write very carefully (e.g. I've used "Turing machine" as a cheap shot at imperative programming). You even stated it yourself earlier: "because our programs are most certainly not TM-equivalent". If you agree that most programs do not require Turing equivalence, then you must surely understand my main point, which is: why provide powerful, expressive languages that, for example, allow Turing-complete computations? Does a contract language, or even a systems language, really need to support Turing completeness? You seem to argue that it doesn't matter. I think it does.
Oh, you are right that finite-state machines make automated verification easier (assuming that by Turing machine you meant Turing complete). However, some people are under the impression that other sub-Turing-complete languages (in particular, total functional programs) are easier to verify than programs in Turing-complete languages. This is false. The only class of languages that are practically "easy" to verify are finite state machines (well, if you call being PSPACE-complete easy; even that is exponential in the program's size!). This is easy to prove (in fact, it is a simple corollary of the time hierarchy theorem and its proof). In particular, it is trivial to show that the problem of verifying a program in a total language is harder than any problem with a computable complexity bound, i.e. its complexity is non-computable, i.e. it has "busy beaver" complexity[1]. I'm giving a talk on this precise subject next month at Curry On (http://curry-on.org/2016/sessions/why-writing-correct-softwa...)
[1]: The difference, therefore, between verification of Turing-complete programs and total programs is that the cost of verifying the former in the general case is infinite, while the cost of the other grows like some function that is larger than any function we can ever compute. From the perspective of computing power, both are equally beyond reach.
I've just now watched the talk [1], thank you for the link, but I feel his conclusion actually backs up my position: "Pure, strict functional programming is a very short path from a program to a correctness proof". Admittedly, the tooling in this area has yet to arrive. But for a bespoke contracts language, I wouldn't be put off by that.
Well, I'd be very surprised if Leroy had said anything else. FP is his field, after all. But this view is far from the consensus in the software verification community. Bear in mind, though, that both (somewhat intersecting) disciplines of PL (especially FP) and formal methods have a lot to answer for. Both have made a lot of grand promises over the past 30 years, and both have failed to deliver anything near what they'd promised.
I agree that functional vs imperative is a red herring in this case.
The Ethereum model is "imperative" anyway, so a functional DSL would probably end up looking like some kind of monad.
That said, it seems interesting to use a language like Agda for writing a DSL to EVM compiler, regardless of the exact nature of that DSL. Then you can have machine checked compiler correction proofs and also prove theorems about DSL programs.
That would be interesting and even necessary. But why not use something much easier than Agda, though not less powerful for this use case -- like TLA+ -- something that even "ordinary" engineers could use to verify their contracts (not to mention that it supports model-checking in addition to deductive proofs, which reduces the verification effort considerably)?
Maybe that would be better. I admit that I don't know the first thing about TLA+. Agda is what I learned about in school, and what my computer science friends are excited about, and I was already interested in using it to write verified compilers. Writing compilers with functional languages was also a part of my university education. But that's just my background.
I see parallels here with UI code that is very imperative in nature, but FRP and similar approaches are showing great promise in creating abstractions for reasoning.
The actual issue is that sending money to someone can result in untrusted code running in the middle of your code execution; that it can even re-enter your code seems absurd.
Sending should always succeed immediately and if anything needs to be automatically triggered in response, the trigger should be executed asynchronously.
A great way to immediately see the problem is, for anyone with Rust experience, to think of this as a function taking an &mut self being somehow reentered without that &mut self being passed out by the function itself.
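As a sketch of what that asynchronous-trigger idea looks like in Solidity terms, assuming hypothetical names (this is the familiar withdraw, or pull-payment, pattern, not anything from the DAO code):

```
contract PullPayments {
    mapping (address => uint256) pendingWithdrawals;

    // Crediting is a pure state update; no external code runs here,
    // so it always "succeeds immediately".
    function credit(address _to, uint256 _amount) internal {
        pendingWithdrawals[_to] += _amount;
    }

    // The recipient triggers the actual value transfer later, in a
    // separate transaction, so their code never interleaves with ours.
    function withdraw() {
        uint256 amount = pendingWithdrawals[msg.sender];
        pendingWithdrawals[msg.sender] = 0;
        if (!msg.sender.send(amount)) throw;
    }
}
```

Crediting never runs the recipient's code, and the recipient's code only ever runs in its own transaction, so it cannot re-enter the sender's logic.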
Given the sums of money involved I think it might also be worthwhile to have a formal semantics and a logic for proving safety properties of these blockchain programs (beyond type safety).
Not every application would require that kind of rigour but if the participation and value of a given currency/contract/program is determined largely by trust then it seems natural to want more serious guarantees.
I agree. I covered this only briefly under 'formal methods'. My point was that such formal semantics and proofs are far more easily obtained starting from a functional language.
I partially agree — e.g. yes, declarative specifications would have been better. But having looked at some of the code, it's just poor quality. Bad code, plain and simple.
If you write bad code, you will eventually get bitten. If you use bad code to build a financial institution with great claims, it will also cost you money and reputation.
I didn't downvote you, but your comment left me scratching my head. You made a statement without any explanation. Care to elaborate why imperative programming is so bad here? Real-world contracts also use imperative style.
This code is very bad. Without going into an exhaustive enumeration: case-sensitive function names, variables with unclear scope all over the place, and so on.
That it's imperative code is something that I don't think is damning by itself but all the side-effects (including side effects based on a single bit in the name of a function) really do warrant the 'accident waiting to happen'.
While personally I would prefer a functional approach, I take your point that a much more restrictive imperative setting would also be considerably better.
No, I'm saying that if function names are case-sensitive, you're an idiot if you use that feature to distinguish between two functions otherwise named identically but with different (important) side effects. The fact that the parameter sets differ implies auto-completion by an IDE was probably partially to blame here (if you did it by memory you'd have gotten a wrong-parameter-count warning).
Some language features you best not exploit if you want to write reliable and bug-free (to whatever extent possible) code.
This is not a typo so much as it is a very poor choice of naming convention.
That gets us to 99% of what I had in mind, by preventing bugs like this one, but the remaining 1% is: why, then, generate an error when someone invokes a function while specifying a different case? If there's only one function, does the case matter?
I completely agree. There seems to be less appetite for this than I would have imagined. A rich type system is absolutely going to be a huge help for smart contract authoring.
In addition to this, the language is not explicit enough. Even standard Haskell wouldn't be safe enough for a program that manages $250MM directly without safeguards.
In addition to type safety, it would have to be both explicitly typed at the lowest possible granularity and annotated with pre- and postconditions.
Not only did this code allow a bunch of tokens to be transferred without compensation, it left the whole accounting system in a corrupted state! One that could have easily been detected and verified at any point.
```
// postcondition: balances[msg.sender] == 0
Transfer(msg.sender, 0, balances[msg.sender]) :: Action<Transfer>;
// Compiler result:
// # Error 1: line XX expected type Action<Transfer> but instead got type Transfer
// # Error 2: line XX violates postcondition, expected balances[msg.sender] == 0, got balances[msg.sender] != 0
```
And the whole function 'splitDAO' should have a type annotation that completely captures the possible actions it might encompass, and pre- and postconditions that at the very least constrain the result of the function to not corrupt the accounting of the DAO (i.e. it should still have as many tokens as it believes it has).
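Continuing the invented annotation syntax from the snippet above, a sketch of what such a constraint on splitDAO might look like (all names and conditions hypothetical):

```
// invariant:     sum(balances) == totalSupply
// precondition:  balances[msg.sender] > 0
// postcondition: balances[msg.sender] == 0
// effects:       Action<Transfer>, Action<CreateNewDAO>
function splitDAO(uint _proposalID, address _newCurator) returns (bool _success);
```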
It's nice that Vitalik Buterin is a genius, but it shows that this guy is only 22 and dropped out of university because anyone with a degree in computer science knows about this stuff and its importance in high reliability systems. He should have known the contract language needed typechecked side effects, and that proper pre- and postconditions were the least that should have been done.
> It's nice that Vitalik Buterin is a genius, but it shows that this guy is only 22 and dropped out of university because anyone with a degree in computer science knows about this stuff
Personal attacks, which this crosses into, are not ok on Hacker News. Please edit such stuff out of your comments here.
Hi Dang, my apologies, I can no longer edit the comment, please feel free to snip that paragraph out.
I guess I felt the need to accuse someone as I felt frustrated. News like this really smears not only cryptocurrencies but software engineering in general. A single typo cost these people $50M is what they claim, but the reality is that the whole system was recklessly engineered. I guess I'm also saddened that so much money was put behind something that was so clearly recklessly engineered.
Would it have been a crazy idea to invest a million or two of the $250M to pay a formal methods team at any university in the world to give the language a quick do-over?
FWIW I didn't interpret this as a personal attack, but as a critical contribution to this discussion: We can't have financial software built by 22-year old dropouts. A new social network? Sure. A game? Sure. A todo list app? Sure. Financial software? No.
If it was phrased as "person XYZ is an asshole!" then I would agree that it's a personal attack.
To counter your attack on Buterin: he is aware that writing safe contract code requires proofs and type safety. That is why Solidity is compiled to EVM code, so that improved languages can incorporate this. Solidity is built for rapid innovation and user friendliness. Buterin is now more busy with sharding and scaling issues.
> Buterin is now more busy with sharding and scaling issues.
Those are currently not the issues that require attention.
You can make it as fast and as large as you want, if it is broken at the root that's all pointless. First walk, then run. I'd be more impressed with a much smaller system without sharding and massive scaling properties if it was bullet proof.
Ethereum is a Turing machine that runs code; programming languages are more high-level. Creating an Ada-like language is trivial compared to making a scalable blockchain that supports computation.
Ethereum itself is more bulletproof, as there are multiple implementations, and the description of the protocol is better defined and includes proofs.
That's the whole problem in a nutshell. I highly doubt that is the road to a viable solution but I'm patient enough to be proven wrong in the longer term. But anything that is Turing complete automatically implies that anything layered on top of it will incorporate such niceties as being impossible to formally verify due to the halting problem.
In other words: the only way to establish whether or not the code will try anything funny is to run it.
Not sure why you're being downvoted. This is the problem.
Whenever you want any guarantees about anything, you implement a very restricted[1] subset of what a TM can do... at least if you're allowing users to input arbitrary programs[2]. If you don't restrict to a non-TC language... you're -- in so many words -- fucked. (Hence the "hack".)
[1] Preferably using a statically typed PL, for obvious reasons. There's actually a thing these days where non-TC languages can actually "do things". My favorite moniker for this is "pacman-complete" (coined by Edwin Brady, I believe.)
[2] Having looked at a few .sol files earlier, they at least look TC, having general recursion and all.
> But anything that is Turing complete automatically implies that anything layered on top of it will incorporate such niceties as being impossible to formally verify due to the halting problem.
Not sure I follow here... You can certainly run a non-turing-complete language on top of a turing-complete one and anything in that language could potentially be verifiable.
When I had my own start up companies or worked for those of my friends, we ensured that the first few sets of big customers got white glove treatment from the executive team....
This is what happens when one does not do that. :)
"It's nice that Vitalik Buterin is a genius, but it shows that this guy is only 22 and dropped out of university because anyone with a degree in computer science knows about this stuff and its importance in high reliability systems."
I wouldn't be so hard on the kid. Now, the due diligence that should have been performed by the corporations that sank millions into this project, which actually do hire plenty of well-educated people who should have known better, is another matter.
But I suppose jumping in early on the tulip craze and riding it for all it's worth is more important than taking the time and effort to make sure the system is reliable and robust before it goes live.
I know I'll get napalmed for saying this, but why is this all over HN? Why are people obsessed with cryptocurrencies?
It's bad enough that those with sufficient computing speed/power can already rip off the stock market; why does the world need another way for people to rip each other off?
Since we've determined over history that some people will take advantage of weakness for their own gain, no matter what system you create to transfer goods, resources, services, or value representing those, it will be exploited. So why even spend time on it?
If everyone who cares to maintain their own accounting data does so, regardless of what technologies we are using, we really could just trade in goods, services, and coin and completely get rid of all virtual currency. Investments could go back to literally providing coins, goods, resources, or services to those we'd like to support, possibly in exchange for reciprocity.
Since that won't happen right away,
if you really want to secure your financial future, major S&P 500 index tracking funds have been one of the smartest things over its history to invest in: http://finance.yahoo.com/echarts?s=%5EGSPC+Interactive#symbo...
While some of you made money on bitcoin, many lost. Now the same game has been played again with people believing that someone can write code to replace our currency system and if they get in early enough, they'll be kings or queens. Well- again, it didn't happen. What really is to gain from this fantasy world of cryptocurrencies? It is harming more than it is helping, from what I see.
And if you downvote this, tell me why, otherwise I'll just assume you are with those that want to exploit me and others.
Because they are interesting in a theoretical way and because real world implementations are even more interesting, they show us what the future might look like.
> It's bad enough that those with sufficient computing speed/power can already rip off the stock market; why does the world need another way for people to rip each other off?
That's not the intent.
> Since we've determined over history that some people will take advantage of weaknesses for their own gain, we should assume that as a given until proven otherwise.
Agreed.
> No matter what system you create to transfer goods, resources, services or value representing those, it will be exploited.
That remains to be seen. There is an outside chance that it is possible to create something solid, my guess is that it will have to be something extremely simple.
> So- why even spend time on it?
Just like it took a while to map the globe, mapping this 'space' takes time - and effort - and funds. I'm fine with it, it isn't my time, my effort or my funds and I end up learning just as much as those that put forward their resources. Think of it as free education on someone else's dime. That's a pretty cynical view but to date I have not seen any system that appears to be solid enough to warrant backing.
> If everyone who cares to maintains their own accounting data does so, regardless of what technologies we are using, we really could just trade in goods, services, and coin and completely get rid of all virtual currency.
That would severely limit the amount and kinds of trade possible. On the other hand, a really strong cryptocurrency has risks all its own and it is worthwhile to consider the downsides with the upsides.
Isn't this a somewhat meaningless statement? Is it possible to prove that "no matter what system you create to transfer goods, resources, services or value representing those, it will be exploited"? Is it possible to prove the contrary (that there is or could be such a system)?
It will never be seen because we will never know for sure one way or the other. Best we can do is to make educated guesses based on past experiences.
Let's take banking as an analogy: is it perfect? No, it definitely isn't. But it is 'good enough' to power worldwide commerce.
So until these systems reach reliability parity with 'ordinary' banking we'll see reduced trust. But once a system gets launched that has staying power and that appears to have made the right decisions in the founding phase you can expect more and more commerce to ride on them.
When more commerce is involved the 'bounty' on finding a flaw increases and so any system that carries a substantial amount of commerce and that keeps on standing can be presumed audited to that level.
So it really remains to be seen which group comes up with a system solid enough to carry a substantial fraction of the world's commerce. Bitcoin is interesting because even after a couple of years it is still standing, which all by itself is impressive.
Will we ever know for sure? I suspect we will not because this is akin to proving a negative, just like you can never be 100% sure that you've found the last bug in a system. But at some point the premium will be so high that failure to exploit the system should give good confidence that all avenues for exploitation have been exhausted and that even if the system still contains flaws those flaws may not be exploitable. It's an asymptotic curve.
I don't understand how this is even comparable to "banking". It seems like more a replacement for wire transfers. I don't see how it changes the face of investment or the flow of capital.
I think maybe you're a bit confused about what HN is, it is a community that gathers and discusses news that is relevant to people who are starting or funding new companies, mostly in the information technology sector.
Cryptocurrencies attract money and spawn companies, there are YC funded startups that are based around cryptocurrencies. If you're looking for refuge from them, you've gone to the wrong place, this is basically the lions den. If there's money to be made with technology, this is where it'll be discussed, no matter over whose back.
Just because it's on HN doesn't mean HN or even YC supports it. If everyone on HN supported ETH, BTC, or the DAO, they'd be worth even more. It's just interesting technology with relevance to business and finance.
> it is a community that gathers and discusses news that is relevant to people who are starting or funding new companies, mostly in the information technology sector.
Been here since 2007, off and on. I usually don't use the same username because I quit HN periodically due to philosophical differences, and I don't care about HN user karma or the ability to downvote.
It's been my experience that people here have many interests, and debates over whether we should be exploring certain avenues, and why we obsess over things, are just as valid as any other.
'When disagreeing, please reply to the argument instead of calling names. E.g. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."'
'Please don't submit comments complaining that a submission is inappropriate for the site. If you think a story is spam or off-topic, flag it by clicking on its 'flag' link. If you think a comment is egregious, click on its timestamp to go to its page, then click 'flag' at the top. (Not all users see flag links; there's a small karma threshold.)'
>Now the same game has been played again with people believing that someone can write code to replace our currency system and if they get in early enough, they'll be kings or queens.
Is that all you see?
Decentralized currencies, ethereum and DAOs are fascinating from a technological standpoint. This is major news in the field of smart contracts, which are related to both programming and technology. How could this not be posted here?
That is extraordinarily pessimistic and rather short-sighted, to be blunt. The point of cryptosystem-based financial tools is that they're more... raw than the normal financial tools we have access to historically. They work by a much stricter set of rules. This means it's easier to mess up and lose money, but it also means they're more efficient, predictable, and equitable.
The existing financial system is inextricably entwined with our political and legal system. This is both good and bad. Bitcoin, ethereum, etc. are efforts to make financial systems that are independent of politics and central management. In my opinion, Bitcoin has done a much better job, because it's actually reasonably decentralized and they don't change the rules whenever people get their money stolen.
A number of internet forums have branched off from HN to fill niches that HN doesn't fill. I'm beginning to think that somebody needs to fork a forum for Schadenfreude. We don't want comments like this here.
Just imagine, if you will, all members of the site posting similar comments constantly. That would drown out all of the discussion relevant to the topics at hand.
But I agree with what you said; I don't like it when people enjoy suffering either. I just think the best way to handle such posts might be to ignore them, since responding might only elicit more.
Because it is an insanely interesting problem to try and solve technically.
That is, a medium of exchange is essential to a modern economy and yet how to objectively regulate the supply of that medium of exchange without exploitation can be considered somewhat of a paradox. The quest is to solve the paradox via ever evolving technological solutions.
Even though I disagree with some of your conclusions, I personally have no problem with your comment except for the very first and very last sentences. I generally downvote people who complain about submissions or about their comments being downvoted, which I view as unproductive.
I feel the last sentence, unfortunately, is relevant. Too many people now just downvote without explanation, and that is not helpful to the conversation. In something like this, which is a question disguised as a rant or vice versa, an explicit request is, unfortunately, necessary today.
Anyone who agrees with the narrative outlined in Flash Boys (as I do) thinks that HFT is basically a drain on the economy. It doesn't help allocate capital more efficiently or provide any liquidity when it's actually needed. It uses speed and deliberately rigged market rules to take returns from people who are trying to put capital to work (either as investors or as businesses).
I don't participate in virtual currencies because I am not smart enough to understand all the nuances, or at least I don't take the time to try and understand them. Plus, I don't have money to throw away. However I find the discussions about the DAO fascinating because of the intersection of technology, law, ethics, morality and government.
I agree that greed is a world wide problem.
But even when we trade in goods services and coins it doesn't change the fact that a lot of people are greedy.
This is on HN because this story is interesting from a technical point of view.
It's a very interesting combination of a lot of HN related subjects. Like networking, cryptoanarchism, decentralisation , computer science and scaling problems, cryptographic solutions and finance platforms.
What's not to like about reading news and opinions about this? I find the current obsession here with deep learning/AI much less interesting.
And also, just read the first word of the name: HACKER News, which is very self-explanatory.
I do not denounce your assertion(s), but I would like to read how you apply them to Ethereum. To me, the literature seems very much to be attempting to avoid baking in our (human) duplicitous nature, opting instead for an almost forced virtue. Please suffer a fool and expound/expand for me?
I'm sorry for violating an HN guideline by saying "I know I'll get napalmed for saying this". I was not aware of that guideline, and have been in the HN community for about nine years now.
Is this an attack?