Notch's Specification for the In-Game DCPU-16 (0x10c.com)
263 points by pros on April 4, 2012 | 172 comments



Just waiting for the first post about programming this CPU to pop up on Stack Overflow ... :)

It's quite an interesting architecture. From an initial perusal, I found these features note-worthy:

* Explicit access to PC as a register makes "computed goto" trivial and very natural (see the sketch after this list).

* Because of the above, there is no JMP instruction.

* Treating operations on registers as "values" makes the instruction set very orthogonal.

* No instructions for bit-manipulation.

* Lots of possible NOP instructions, like "SET PC,PC" (which I think should work, depends a bit on exactly when it's incremented but it looks safe).

* Word-addressed memory will make string processing interesting to implement. Maybe in space, everyone uses UTF-16.

* A single 64K address space will put quite a limit on the amount of code; I guess the game will show how much code is needed to do anything.

* No I/O instructions means I/O must be memory-mapped, further reducing the space available for code. Maybe there will be bank-switching?

* No interrupts.
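
For the curious, the "computed goto" bit looks something like this in practice (untested sketch, using the spec's syntax; the jump table address is something I made up):

  SET PC, [0x1000+A]   ; jump via a table of handler addresses indexed by A
  SET PC, POP          ; and a subroutine return is just popping into PC

So indirect jumps, returns, and switch tables all fall out of plain SET.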


> No instructions for bit-manipulation.

What? Plenty of opcodes for that. SHL, SHR, AND, BOR, XOR are all bitwise operators.

Unless you mean bitset and bitclear macros, but no self-respecting embedded programmer uses those. I've disagreed with almost everything else Notch has done, but from a simple pedagogical standpoint, he's doing the right thing by leaving those out.
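
For anyone wondering, the existing bitwise ops cover those in one instruction each anyway (a sketch; the masks are precomputed constants since there's no NOT instruction):

  BOR A, 0x0008   ; set bit 3 of A
  AND A, 0xfff7   ; clear bit 3 of A (mask is the precomputed complement)
  XOR A, 0x0008   ; toggle bit 3 of A
  IFB A, 0x0008   ; bit test: runs the next instruction only if bit 3 is set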


Why do you disagree? I've only been following this casually and don't know too much about the depths of what's going on.


Yeah, that was what I meant, and they certainly don't have to be "macros", they can be full-blown instructions. Considering that even ARM's smaller "Thumb" instruction set has a bunch of them, I doubt that they're never used. Or, of course, they're just used by embedded compilers. :)


No rlwinm


I'm a self-taught programmer and don't know much about CPUs. Would someone mind putting this into relatively plain English? I'm quite curious to know the practical implications of this. Can we expect to see Python running on it in the near future, for example? :)


I strongly recommend you read the course book "The Elements of Computing Systems" [1]. It explains the whole principle in very plain English, with an extremely intuitive, easy-to-follow logic and a set of exercises you can do.

It has completely demystified the whole low level world of computers for me.

If you can't get the book, a number of exercises and chapters are available for free at this website[2].

[1]: http://www1.idc.ac.il/tecs/ [2]: http://diycomputerscience.com/courses/course/the-elements-of...


I would be astonished if there wasn't soon a FORTH running on it. And something like http://code.google.com/p/femtolisp/ might not be totally out of the question.

But who can say. The second Fortran compiler I used ran on an IBM 1800 (equivalent to the IBM 1130, but for process control); it had something like 29 phases, and ran on a machine with a total of 4k 16-bit words.


I wondered if I'd be the first to suggest a LISP implementation... glad to see I wasn't.


Pico Lisp might be a good starting point. Its interpreter for x86-64 machines is written using a simplified custom assembler and already uses a reduced number of registers.


Good question. To run Python on this CPU, someone (you?) needs to port CPython to it. A CPU runs assembly language, and it's not hard for a hacker to port C to a new CPU, because C is small. In addition, implementing C is the sort of thing that hackers like to do. So you can count on C running on any given CPU, virtual or real.

http://en.wikipedia.org/wiki/CPython


This thing will have a total of 128KB of directly addressable RAM (as 64k 16-bit words). There's no way, no how that CPython would ever do anything remotely useful here.


It depends what the purpose of the DCPU is. If it's the scripting engine, well, you can easily write an aimbot in 128KiB of .pyc files and the python VM. If your only access to the world is through the DCPU, yeah, you're probably out of luck, but the second scenario makes for a much smaller and less interesting game.

It's also an obvious in-game/micropayment reward to expand the address space, somewhat analogously to the N64's memory expansion pack.


"real" python seems unlikely, given that the standard executable is well over 64KB in size.

A variant like cython that compiles to this assembly? maybe...


You don't need the full python repl/compiler/interpreter/libraries, you "just" need the python virtual machine. Once there's a C compiler for the platform that should be relatively straightforward, because I sincerely doubt that takes all 128KiB.


I wonder how hard it would be to get a working DCPU-16 backend for GCC or Clang.


GCC already has 4 other 16-bit backends (e.g. pdp11), so it should be achievable.


A Clang backend means an LLVM backend, which I believe is a lot easier than extending GCC, and gets you support for all sorts of languages.


RPython, perhaps?


Although some of the implications require experience to understand, I'd say it already is in "relatively plain English", you simply haven't been exposed to the vocabulary and concepts necessary. Trying to explain in any detail would basically be reproducing Wikipedia, so here are some links:

http://en.wikipedia.org/wiki/Program_counter

http://en.wikipedia.org/wiki/Word_(computer_architecture)

http://en.wikipedia.org/wiki/JMP_(x86_instruction)

http://en.wikipedia.org/wiki/NOP

http://en.wikipedia.org/wiki/Orthogonal#Computer_science

http://en.wikipedia.org/wiki/Address_space

http://en.wikipedia.org/wiki/Memory-mapped_I/O

http://en.wikipedia.org/wiki/Interrupt

And no, don't expect Python anytime soon. Expect a C compiler, FORTH, possibly some sort of Pascal, but a highly dynamic language is unlikely. The system is too resource-constrained to make it practical. A static language that looks kind of like Python isn't out of the question, but it won't do a lot of the things you expect from Python, Ruby, PHP, or Perl.


> a highly dynamic language is unlikely

Lua, perhaps? There was an article how to reduce its binary size, for an older version of the language (Lua 4.0): http://www.lua.org/notes/ltn002.html

With no reductions, and for x86 assembly, it started at ~64KB, so not very useful here; but after dropping the standard libraries and parser, they got to ~24KB. However, some further questions arise that I'm not sure about:

- whether such a virtual machine, without the parser, would be any more useful than the underlying system alone?

- how much bigger the binary code would be when compiled for the "DCPU-16" instruction set instead of x86?


Lua is a good place to start! And the VM minus parser is useful, though it may make debugging harder. You simply write out the bytecode and upload that.

The VM is only part of the problem, though. I consider the bigger problem to be memory management. A garbage collected language operating reliably in 128KB? I'm deeply skeptical.


While I don't think Lua would be the choice, it's the quantity of garbage that matters:

"Historically, languages intended for beginners, such as BASIC and Logo, have often used garbage collection for heap-allocated variable-length data types, such as strings and lists, so as not to burden programmers with manual memory management. On early microcomputers, with their limited memory and slow processors, BASIC garbage collection could often cause apparently random, inexplicable pauses in the midst of program operation. Some BASIC interpreters such as Applesoft BASIC on the Apple II family, had terribly inefficient garbage collectors for strings which repeatedly scanned the string descriptors for the string having the highest address in order to compact it toward high memory. This one-string-at-a-time processing loop resulted in O(N * N) time performance in the number of strings, which would introduce a pause more than one minute long into the execution of string-intensive programs. A replacement garbage collector for Applesoft BASIC published in Call-A.P.P.L.E. (January 1981, pages 40–45, Randy Wiggington) identified a group of strings in every pass over the heap, reducing a pause of two minutes into less than a second depending on the size of the group. Other approaches were published, but none ever made it into a new revision of the BASIC interpreter."

http://en.wikipedia.org/wiki/Garbage_collection_(computer_sc...


Garbage collection isn't necessarily that difficult, especially when you're in a single-threaded (or cooperatively multithreaded) environment and don't have to contend with an OS. Here[1] is a cons-pair system I wrote in Forth which uses tag bits to identify pointers, a mark-and-sweep garbage collector and a free-list style allocator. GC takes linear time with respect to heap size, and allocations are constant time. The allocator itself (not counting all the list utility routines) is about 60 lines long and fairly easy to understand.

[1] https://github.com/JohnEarnest/Mako/blob/master/lib/Algorith...


A smarter way might be just implementing a new LLVM backend (gives us C via Clang, some other languages). However, even with C, the standard library probably won't leave much space for the user code itself -- so you might need something slimmer. Also, some kind of simple RTOS (www.freertos.org, maybe?) to run all these programs (I assume people would really want to run several programs at a time). I wonder if the 128k limitation was intentional, to make programming the computers more interesting. Anyway, I'm actually looking forward to seeing what kind of clever solutions the players come up with.


An RTOS won't be of much use without interrupts. You could implement coroutines, but preemptive scheduling is not possible.


Moreover, it would be very surprising if low-latency or deterministic timing properties actually turned out to be useful in the game.


The ANSI C library is quite small. It originates in an era when 128KB of RAM was reasonably roomy. To this day, there are C libraries designed to fit comfortably within that size.

Most of the library code will never even be present anyway. Your (statically-linked) binary will simply contain those functions actually used, which is likely to be a small subset of what is available.


Reading all these wikipedia pages is going to kill at least an hour of productivity today :/


Depends on what you define as being productive. I personally perceive learning as something productive.


And it will enable countless years of productivity!


> No I/O instructions means I/O must be memory-mapped, further reducing the space available for code.

In a tweet, Notch said: "It's going to be dynamic later on, but right now [the keyboard is] a cyclic 16 letter buffer at 0x9000. 0=no char yet. Set to 0 after read." http://pastebin.com/raw.php?i=aJSkRMyC
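
Based on that tweet, reading a key would presumably look something like this (untested sketch; the register choice and the wrap logic are my own assumptions):

  ; poll the cyclic 16-word keyboard buffer at 0x9000, read offset in I
  :readkey IFE [0x9000+I], 0    ; 0 means no character yet
           SET PC, readkey      ; busy-wait until one shows up
           SET A, [0x9000+I]    ; fetch the character
           SET [0x9000+I], 0    ; mark the slot as read
           ADD I, 1
           AND I, 0x000f        ; wrap around the 16-entry ring
           SET PC, POP          ; return (assuming we were called via JSR)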



The thing that looked odd to me was:

> IFE, IFN, IFG, IFB take 2 cycles, plus the cost of a and b, plus 1 if the test fails

It is a long time since I worked in assembly, but I don't remember comparison functions having different timings depending on results when I were a lad.

(FYI, most of the assembler I played with was for 6502s (in the old beebs) with a little for the Z80 family and early x86)


Many CPUs with branch prediction carry a penalty of at least one cycle for mispredicted branches, as the fetch stage(s) of the pipeline must be invalidated. From Wikipedia:

The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines so that the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline the higher the need for a good branch predictor.

http://en.wikipedia.org/wiki/Branch_predictor

EDIT: the inclusion of this is somewhat interesting as there's not much of a point in simulating a pipelined processor unless you care about hardware details. My best guess is they're adding this "feature" to make compilation to assembly MORE difficult and increase the advantages of hand-compiled assembly. Branch prediction is a tricky thing to do right in compilers.


Ah, obviously my experience is somewhat out of date! My "hacking cycles off loops in assembler" days were all before I got my hands on any kit advanced enough for pipelining and branch prediction to be a consideration.


AVR behaves similarly. Branch instructions take 1 cycle if the condition is false, and 2 if the condition is true.


low and mid-range PICs too


It seems to me the extra cycle is used to read the first word of the next instruction, since it needs to know how long the next instruction is to skip it.
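
For example (a sketch, addresses arbitrary), the instruction being skipped here is three words long, and the CPU can't know that without fetching at least its first word:

  IFE A, 0x1234          ; 2 cycles plus operand costs, +1 more if A != 0x1234
  SET [0x8000], 0x5678   ; three words total; skipped entirely when the test fails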


> Explicit access to PC as a register, makes "computed goto" trivial and very natural.

Am I right in thinking ARM uses this model? I haven't worked on them since ARM2, but I have a hazy recollection...


Sort of. R15 is the PC, but you also have branch instructions that take very large literal offsets, which in practice you'll always use.

http://www.wss.co.uk/pinknoise/ARMinstrs/ARMinstrs.html#Bran...


Yes it does.

(Thinks of a typical SWI handler...)


> * Word-addressed memory will make string processing interesting to implement. Maybe in space, everyone uses UTF-16.

Everyone in Minecraft, too -- almost. The string encoding in Minecraft's protocol spec is UCS-2, just a sneeze away from UTF-16. It seems Notch has a soft-spot for large encodings. It makes sense from a calculation and lookup perspective, but I wonder if the increased bandwidth and storage of 16-bit blocks has a measurable impact.


I think Java strings are UCS-2, so that explains why Minecraft uses this?


By going UTF-16, he would make the game much more accessible to those not using a Latin alphabet.

If I was designing the game, I would favor internationalization over space efficiency in the virtual computer. That way Russian kids would get to have fun learning to write silly programs too.


UTF-8 still has all the characters UTF-16 does; it just uses more bytes to encode them. UTF-8 makes more sense if most of your characters can be represented in 8 bits.


You can encode all characters in the Basic Multilingual Plane with a single UTF-16 code unit, while in UTF-8, BMP characters have variable lengths from 1 to 3 bytes.

If Notch only allows BMP characters and uses UTF-16, internationalized string manipulation is very easy, as easy as ASCII for kids to mess with.


If the DCPU has only 64KB of RAM, encoding strings using UCS-2 or UTF-16 seems rather wasteful.


You're free to choose or write software for it that encodes three 5-bit characters in a 16-bit byte if you like.
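
For instance, unpacking such a word might look like this (untested sketch, register choices arbitrary):

  ; unpack three 5-bit characters from the word in A
  SET B, A
  AND B, 0x001f   ; character 1 (bits 0-4)
  SET C, A
  SHR C, 5
  AND C, 0x001f   ; character 2 (bits 5-9)
  SET X, A
  SHR X, 10
  AND X, 0x001f   ; character 3 (bits 10-14)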


The DCPU addresses 64K worth of 16-bit words (in other words, 128KB, although you can't directly address 8 bits).


The DEC Alpha didn't have byte addressing either.



With cleverness, I could see the memory constraints being not so terribly constraining, even without a (real) bank switching mechanism.

The reason we use "RAM" as we generally think of it is basically that other forms of storage are obscenely slow, right? In this case, however, our "mass storage" device would actually be... RAM. Just not directly-addressable RAM. Loading code and data from "disk" as-needed could be relatively fast.

Edit:

> No interrupts.

Good catch, that's probably something we'll see added. I don't think Notch will like emulating a thousand polling CPUs...

Edit 2:

Just spotted this in Notch's twitter feed: https://twitter.com/#!/notch/status/187454444571598848

"An emulator is coming Eventually(tm). I want to sort out some more IO design first, not release too early."

So, I/O is still up in the air.


Now here's an interesting bit:

"Question: can we trade the programs we create? How will you stop malicious viruses etc?"

"yes. And I won't stop viruses, the players will have to do that themselves."

~ https://twitter.com/#!/notch/status/187474819980328962


I worry about the griefing potential here.

I can imagine nothing pissing off a noob more than getting a virus within 10 minutes of play and having no idea how to stop his ship from self destructing.

Perhaps this will need some sort of firewall system built in where ships cannot communicate unless ports have been explicitly opened. Perhaps some sort of communications proxy that can serve for safe communications.

It could have a system similar to EVE, where high-security systems have infrastructure to mitigate risk, whereas low-sec systems lack this but contain the best rewards.


I expect that inter-computer communication will be more like it was in the 80s, with swapping floppies and dial-up BBSs being the norm. It's still possible to get viruses that way, but there won't be worms spreading autonomously through the entire system.

I can imagine how an economy of reputation could arise where some people are trusted to distribute malware-free code.


I was thinking they could treat software and components like we do when granting third-parties permission to access our Twitter/Facebook accounts:

Plasma Shield Generator requests the following permissions:

  * Read/write to the ship's log
  * Draw power from the core
  * Use the red alert system
Plasma Shield Generator will not:

  * Access communication protocols


It doesn't even indicate protected memory segments. You should be able to implement traditional buffer overflows and self-modifying assembly on this processor (that is, you can write to your instructions as if they were any other type of memory).

Here's an analogy: what you are talking about is secure walls with a lockable door; what you have in this chip is some wood, a saw, some nails, and a hammer.


It's 128KB of RAM; that's the same as the original Macintosh in 1984. I would be surprised if it ran at more than about 10MHz. And never mind the constrained resources for implementing any of this hypothetical infrastructure: unless Notch adds an MMU, you can't enforce permissions.

You can't think about this in modern terms. It's not a modern computer. It's a 1980s computer, and it's supposed to run a spaceship. Think microcontrollers, not smartphones or set-top boxes.


Couldn't the permission system exist outside of the constraints of the in-game computer (as just some game mechanism)? If each of the ship's discrete components were accessible at some address, couldn't the game enforce the permissions at a level above the simulated CPU?

A rough static analysis of the code might reveal where those components are accessed, and as long as you enforce permissions while the software runs then you should be able to catch anything that tries to slip past.


Someone could write an out-of-game emulator[1] which people could use to test software.

[1] I have no idea what the correct terminology is here.


Go back and look at the first post in this very thread. Notch explicitly intends nothing of the kind, and I certainly can't see why it would be done -- it runs counter to the vision thus far articulated.


I know he said that. I'm following the conversation. I'm just describing an idea for a permission system.


It should be possible to provide programs as "data" that is interpreted by the program "running" on Notch's CPU, thus allowing fine-grained, whitelisted permissions even without an MMU. No?


Depends on how it's structured, doesn't it? It's entirely possible that programs and data will share the same memory with no sandboxing between programs. In that case, without some sort of MMU, a virus could just inject code into an otherwise benign program.

Even with finer program access controls, it would still be hard to protect as long as untrusted code was allowed to alter trusted code.


You don't understand what I'm proposing. If you want to put it in terms of sandboxing, what I am proposing is a sandbox.


You would have to abstract the CPU in some way and then write the permissions management in software as part of the interpreter. Bear in mind that this is already only a 16-bit CPU with a restricted address space.


> You would have to abstract the CPU in some way and then write the permissions management in software as part of the interpreter.

Yes, that's what I said. :)

> Bear in mind that this is already only a 16-bit CPU with a restricted address space.

I suspect the interpreter could be made to fit, with limited space left for the untrusted (to-be-interpreted) code. As has been pointed out elsewhere, though, if there is a backing store (i.e., simulated hard drive), it's probably possible to have effectively unlimited "RAM" in the game in practice, unless Notch implements a slowdown for accessing the backing store.


True, but your CPU is limited too, so having to lex and parse source files and walk syntax trees to run programs could be a bottleneck. If you remember old computers, there was a significant performance gap between native and interpreted code.

In practice that might not matter for many applications if they do not have to use intensive algorithms etc.

I would still imagine that compilers would be used more than interpreters though which would mean that some security would need to be present at the "hardware" or "firmware" level.

Another interesting thing to think about regards backing stores:

I assume that the programs will be able to take input from the world around them, i.e. other ships, objects in space, etc.

So what's to stop you from using (perhaps) a tractor beam to re-organise the position of asteroids in space and then "read" that data back using ship instrumentation?

Potentially the entire universe becomes one big DB!


Wow, now that is a very cool idea! I'm really looking forward to the game.


Notch could write some very basic software that everybody can buy or upgrade to. So players will have basic firewalls to start with, and can upgrade to better (but standard) firewalls.

So not everybody needs to actually know how to program, they can just buy stuff from NPCs or from the market off other players. But obviously, the ones who do know the system inside out will have an advantage.


Unix permissions would be a simple layer to add on top. Create a user/group/root structure for read/write/execute (execute?).

A road location would be rwxr--r-- while a road texture could be rwxrwxrwx.

Your player's character look could be rwxr--rwx.

A signpost could be rwxrwxr--.


> It could have a system similar to EVE

I hope this is the opposite of EVE in practically every way... except the high-level concept. Let's not give Notch any ideas.


There were a lot of things that were good about EVE (it's been about 4 years since I played).

Examples would be: The Economy. Ship Configuration System. The Artstyle. The security model. The unforgivingness (made you think carefully before you acted).

The problem was just that the combat always felt a bit bland and favored players with the biggest ISK supply and XP. So in reality you needed to be part of a huge corp and wait for months for skill training to be competitive.


Firewall? Ports? Communication proxy?

It has 64K 16-bit words. What exactly are you expecting the default software to do that opens it up to viruses?


Do not underestimate those old computers. When you remove audio, visual, and textual data from your programs, remove all the code for handling those and slinging them about and processing them, when you remove the textual file formats and replace them with hand-optimized binary, when you give up on any form of structured modern programming and just hand-compile custom assembler, when your opcodes aren't 64-bits wide, and so on and so on, you can get a lot done in 128KB of code.

Obligatory (but illustrative of my point): http://www.pbm.com/~lindahl/mel.html


Exactly. Remember the 64kb video demos from 2004?

http://www.theproduct.de/


The problem was that most of the demos used many, many megs of RAM, as they procedurally generated objects and stored them in memory. The 64k limit was on disk size.

Assuming memory mapping is used for the display, the 128k of RAM (64k of words) is also needed to hold your display buffer; a mere 320 x 200 x 8 screen will use half of your entire address space. Look at what people do in C64 demos for a better idea. Yes, you can run a Unix, yes, you can talk TCP... but it's going to chew up a lot of the capacity for that infrastructure. I expect very lightweight protocols; TCP/IP is likely too heavyweight.


Who knows. Assuming there is some way for players to transfer stuff between their computers, and therefore address spaces, all you need is one rogue jump instruction.

Not to mention the usual social engineering stuff. "Hey install this great program, it'll make your guns 200% more powerful"


Someone should port http://en.wikipedia.org/wiki/Contiki for this :)


Wow. If you are programming in assembly, that is a pretty huge space. One example of what can be done in 128k bytes is http://www.ciexinc.com/telemed/index.html.


Nice. With a bit of fiddling one could perhaps invert an enemy's shield polarity. That always seems to work miracles in any sci-fi show.


This opens up a whole stream of electronic warfare. However, the problem is that this will mean writing programs that have no real-world usage. Except... these programs will be a great way to introduce both kids and adults to programming, as learning will give people an edge.

Question: Will we need to program an OS? And does it run Linux?


> These programs will be a great way to introduce both kids and adults to programming

This would be a tremendous way to do that.

Programming on normal computers is always kind of disconnected from the real world. So, your program can add two and two. Great. Now what?

Microcontrollers alleviate that issue to some extent. Now your program can walk and talk, and push real-world objects. But you kind of need to learn a bit of electronics too.

Notch's Universe solves this issue neatly. It's a mock real world, with programmable computers. Now your software can do some interesting mock-real-world stuff, like fly spaceships. That's pretty neat.


Learning about assembly/to-the-metal coding is probably not the best way to learn programming. "OP? LOAD? REGISTER? PC LOAD LETTER? What's a word? There's only 8 registers? Does that mean I can only save 8 things? What do you mean I have to push things into a stack?"

This is why most people advocate learning python or ruby. You don't have to deal with the underlying manipulation of the computer until you've decided you actually like programming.


I don't know about "most people" but I think a machine like this DCPU-16 is better than learning Python or Ruby. Those languages are huge and full of complexity. With a 16-bit machine with a tiny instruction set, the student actually has a chance of getting the feeling, 'I know everything there is to know about this system'. And that connects to a piece of wisdom I love from Design Principles Behind Smalltalk: "If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual".

And there's a certain concreteness to such a machine. The mental model has no vagueness at all. You could imagine building a physical copy of it and playing the role of CPU--adding small numbers, moving data between registers and memory, etc. I've played with this idea myself before of using an invented 16-bit machine as a teaching device.

Maybe it's just a case of personal differences, but as a starting programmer, I would have been extremely excited about a simple programmable thing that I can fully understand and use to do things within a game world. Python is cool too but it's overwhelming. There are learn-Python books that are thicker than most of my university textbooks.


Consider that a while back, that was the only way to learn programming.

Personally, I think Python is a great way to teach programming, because it allows us to get to concepts quickly. Accidental complexity is at a minimum, and the inherent complexities of programming become the focus. But, even though that's my personal preference (based on some experience; I've used Java and C++ to teach programmers and had to explain away accidental complexities), I hesitate to say it's best because I have no evidence backing up that claim.

What's important is learning certain concepts, perhaps the most important of which is algorithmic thinking. That is, knowing what result you want, knowing how you start, and being able to reason through how to get from the start condition to your desired result, step-by-step. What helps is when the result you get is something you care about. I can easily see that for some people, getting something to behave a certain way in a videogame will be a more compelling result than others.


Knuth would disagree with you. And PC LOAD LETTER?


That depends. The Knuth MIX architecture is much like the architecture Notch has in mind. But the more recent MMIX architecture is far more like a modern 64 bit RISC system. In fact, it does look odd that Notch would prefer such an old behemoth nowadays.

An MMIX-like ISA would open up the world for compilers much more. With this, it looks like people should be confined to writing simple stuff, which is kinda the point, but still rather sad.


I'm sure part of the goofiness of this ISA is just that Notch wants to keep it nerdy and somewhat retro, but I wonder if he also wants to keep compilers from getting out of hand; like, he wants people writing (at least sometimes) in assembly.


I love me some Donald Knuth references


Works for a lot of people. I started with FORTRAN and quickly went to assembly. I recommend doing some sort of assembly for anyone serious about programming.


What do you mean by "serious about programming"? Can a web developer be serious about programming? Assembly is a waste of time for them.


I disagree. I recently designed and implemented virtualized image servers (real ones coming once the requisition request comes through) for faster image serving. This was basically unheard of by anyone I work with; however, by having read the HTTP spec I was able to give a report on why we wanted to serve images in parallel. It's easy to dismiss web developers as (X)HTML/CSS, JavaScript & PHP. In reality there are FAR, FAR, FAR more uses for programming-competent web developers. Shoot, even having a good grasp of fundamental computer science helps with understanding PHP's arrays (or lack thereof ;) ).

Now for practical usage? Sure, assembly is probably a waste of time; I still haven't found a real place to use it that wouldn't be. But the fundamental understanding of how a computer works, of how decisions are made so close to the machine, enhanced my understanding of programming, which in turn enhanced my understanding of how I wanted to develop and engineer, even for the web.

My $0.02, anyway.


I am not so sure. To be a good web developer, you need to have a familiarity beyond css, js, html, and the framework you are using. You should be able to understand the underlying protocol, for example, what do the HTTP headers look like? How are they separated? How can SQL injection be dependent on header contents? You can unknowingly build a vulnerable web app if you don't understand the mechanics.

And in my mind, "serious about programming" extends beyond building web apps.


Having a low-level understanding of program flow can help in many situations. If you are an extremely lucky programmer, you might be able to live your life only in high-level languages, but most that I have met had to get down in the plumbing eventually. Understanding asm will help with that.


I expect that there will be some kind of default OS released with the game, and that many alternatives will pop up very quickly.

It will not run Linux, because Linux requires a 32-bit CPU with megabytes of RAM. Someone might write a Linux-like system for it though.


Most likely it won't need anything other than some very basic firmware which would be part of the emulator.

It is unlikely to need any mechanism for managing drivers, for example (unless you can build custom hardware). You might need some form of scheduler if you are running different programs to do different stuff (weapons, engines, navigation), but this could be something very simple and again baked into the emulator.

Some form of shell might be nice, but again not necessarily required.


You could run your spaceship computer like that, but you could also install a multitasking system that allows you to download files from your favorite space BBS while playing a game of solitaire and plotting the course to your next mining asteroid.


Notch suggested on Twitter that there would be more than one CPU for each ship, so I imagine multitasking would work that way.


Not to mention an MMU. Of course, that didn't stop someone from implementing an ARM emulator on an AVR and running Linux on it... http://hackaday.com/2012/03/28/building-the-worst-linux-pc-e...



Actually, uClinux doesn't require an MMU, and it's been merged into the main line of kernel development.


Reverse the polarity of the neutron flow. Works every time.


You would clearly have to reroute power from the primary core to the auxiliary shield systems which have been modified based on the tachyon attraction theory and amplified by shutting down unnecessary support systems on storage decks 3, 6, 11 and 42.


Somebody can write anti-virus software for the game and sell it!


This is covered fairly extensively in Neal Stephenson's Snow Crash. It even uses the same term for the persistent world: "metaverse"!


Notch is playing God.

Well, actually that sounds like a recurring theme, now only taken to a more complex extent.


He won't stop viruses, but will those viruses be made illegal in UE?


Context:

For those who are puzzled (as I was) as to what this CPU is for, I found this:

> Notch's next game, 0x10c, "will allow players to create their own spaceship in the far, far future of an alternate universe ... More exciting - especially for those versed in programming languages - is the ship's computer. It's a fully-functional 16-Bit CPU that controls the entire vessel..." http://www.dealspwn.com/notchs-space-trading-game-real-calle...

Also: http://www.theverge.com/2012/4/4/2924594/minecraft-creator-n...

http://0x10c.com/


It's good to see Notch working on the Little Coder problem, even if that's not his direct intention. :)


Ugh. Googling Little Coder problem matches "small code problems" and "little codec problems" before something remotely related to what I wanted. There's a single relevant result in the first page.

The verbatim search returns a very good match in #1 and about 80% of the results seem appropriate. It seems the recent synonyms fixes are not nearly enough.

And, for the interested: http://blog.steveklabnik.com/posts/2009-12-28-the-little-cod...


Well, you actually wanted a synonym (Little Coder's Predicament, not problem), just not the ones they appear to give.

For what it's worth, _why's article is the third result for me, and a couple of other articles on it are in the top 10.


Little Coder problem = how to get people interested in coding by writing little programs. Hard to do on consoles and iPhones.

I started thinking that if I was going to get people interested in coding, I'd start with something more approachable - maybe something like a subset of Python, Java, Ruby or even JavaScript (it's where so many people write little programs anyway now). I heard Lua mentioned as a good scripting language too.

Then I wondered - is this CPU a reasonable target for those kinds of little languages? Is this a way to be language independent and have them all? If so, why not something more like the JVM to make it easy?

It seems to me that coding for this CPU is more like a mini-game in the main game; the fact that it's a challenge is part of the game.


It is possible that someone will implement versions of those languages, but this CPU is so memory and speed constrained that you'd be hard pressed to get much useful done. And if the speed of your programs executing turns out to be relevant to success in the game, then people using an interpreted language will be at a possibly decisive disadvantage.

That said, I can see a nice friendly language becoming popular for non-time-critical tasks where ease of experimentation is most important. I just hope it's not BASIC. :)


This project begs for a BASIC interpreter.


Thanks for mentioning this. I was getting really confused about why someone would create a CPU/ASM for their game. I thought he'd then have to build the new game on top of that CPU, and Notch is crazy, but that seemed a bit much...


It's not completely unheard of. State machines are pretty popular in FPS's, and the Infocom Z-machine code is still used (although upgraded quite a bit) for modern "interactive fiction"/text adventure games.


Just for fun, I wrote a disassembler for his instruction set. https://gist.github.com/2300590

I previously wrote some assembler routines for x86, which is very complex; working with Notch's design is a breeze and actually enjoyable.

Does he mention somewhere whether the code is loaded into RAM? This would make self-modifying code possible.


It doesn’t say in the spec, unless you interpret the memory dump for his assembled example as residing at address 0x0000 rather than offset 0x0000. But I don’t see any reason why PC shouldn’t refer to a RAM address. It would be great for code economy in such a constrained system.


It's definitely loaded into ram. Some of the operands are defined in terms of "next word of ram", which clearly refers to reading (and incrementing) the PC.
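
Which means something like this ought to work (untested sketch; it assumes the assembler accepts label+offset expressions and encodes the large literal as a trailing word):

  SET [patch+1], 0x5678   ; overwrite the literal word of the instruction below
  :patch SET A, 0x1234    ; after the store above, this loads 0x5678 instead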


Can't wait to see FORTH running on this CPU!

FORTH ?KNOW IF HONK ELSE FORTH LEARN THEN


That's my plan actually. It should be a lot easier to get going than C or an LLVM backend as others are suggesting.


Same here. The main design decision is how to represent the secondary stack. I was thinking we could reserve a pair of registers to keep the parameter stack pointer and return stack pointer, and swap them out with SP as needed.

  1 2 + >r
becomes something like

  SET PUSH, 0x1
  SET PUSH, 0x2
  SET A, POP
  ADD A, POP
  SET X, SP // back up data stack pointer
  SET SP, Y // switch to return stack
  SET PUSH, A
Thoughts?


My first try would probably be to use SP for one stack and some other fixed register for the other stack. Also, traditional Forth interpreters don't use code like you showed. I think the main benefit of using Forth (unless you just happen to love RPN) is the compactness of the interpreted code. If you're not going to use an interpreter, then I'm not sure if Forth is really a win.

Edit: Just to clarify about the interpreter point, I'd expect something like "1 2 + >r" to be represented at run time as four words in memory and there'd be an interpreter, written in machine code, reading code word sequences and jumping to small bits of machine code to either do a built-in function or push the call stack.


What you've described is a threaded Forth, which is indeed the most common type of implementation. I was thinking about writing a subroutine-threaded (no inner interpreter) Forth that inlines common primitives and does some peephole optimization. Not necessarily as compact, but much faster. Forth still provides a flexible, structured programming environment compared to raw assembly, to say nothing of Forth's metaprogramming facilities. I'd also say that a fair amount of Forth's compactness comes from the programming style- lots of tiny subroutines allowing extensive code reuse.


That's interesting. Sorry for not getting it right away. It's making more sense on second reading. :)

Now I'm curious how you'd encode subroutine calls. Your example was completely inlined but the call-sequence will be critical in determining how compact the code ends up.

Also, were you thinking of using a cross-compiler and keeping the dictionary out of the DCPU-16s memory? Let me know if you create a public repo. I'd be interested to see what you come up with!


No worries, I should've been more detailed in my original post- what I'm describing is definitely an atypical (if not unprecedented) Forth. I agree, the call-sequence is critical. My initial thought was to ensure that we always leave the stack pointers in "return stack mode" before calling, and return them to that state before exiting a routine. Thus, procedures:

  : A   dup * ;
  : B   2 A A 1 A ;
would look like:

  A:
    SET Y, SP
    SET SP, X // switch to data
    SET PUSH, PEEK
    SET A, POP
    SET B, POP
    MUL A, B
    SET PUSH, A
    SET X, SP
    SET SP, Y // switch to return
    SET PC, POP

  B:
    SET Y, SP
    SET SP, X // switch to data
    SET PUSH, 2
    SET X, SP
    SET SP, Y // switch to return
    JSR A
    JSR A
    SET Y, SP
    SET SP, X // switch to data
    SET PUSH, 1
    SET X, SP
    SET SP, Y // switch to return
    JSR A
As you can see, a little bulky, but doable. Optimizing stack operations to make better use of the registers can get rather nasty- still thinking about the best way to approach it. I'm definitely thinking in terms of a cross-compiler, and ignoring things like defining words for the sake of simplicity, at least at first. I'll drop you a comment if I get a prototype working.


Oh yeah, you could whip a sweet little Forth system up for this processor.

The first thing to do is to write a DCPU-16 assembler in Forth, and use that to write the primitives. That's pretty simple -- just look at the 6502 assembler: http://www.forth.org/fd/FD-V03N5.pdf

Some Forth systems have metacompilers so they can retarget themselves to different architectures. http://www.ultratechnology.com/meta.html

Using a metacompiler with a Forth DCPU-16 assembler would be the best way to go. Then you could easily experiment with different threading schemes, stack architectures, etc.


I will admit, I used to think Notch was just a regular-joe programmer who had gotten extremely lucky with his strain of adventure game.

Now I know I was dead wrong.


Emulating a very simple computer like this isn't hard at all; it's basically just a big array for memory, a few variables for registers, and a switch statement for instruction handling.


I think OP's point was not that Notch was a stellar programmer, but that he is a good game designer. (And that the popularity of Minecraft was a result of his skill and not of luck.)


But designing a good, minimalist one is reasonably hard.


I am just in awe. This guy has the guts to imagine shit like this and pull it off.


I guess we'll be able to write our own malware to take over other peoples' ships, then? Pretty awesome, just call me Locutus :)


The Borg vs. the Bynars.


This might be a stupid question, but where does the output go? Is it just a memory location? And does that memory location map to a "ship function"?

So, er, on a PC if I write data to address 0x378 it'll appear on LPT1. What's the equivalent on DCPU-16? If I write data to a certain address it'll appear on the missile bay ports?

Or is there a level of abstraction?


If I had to guess, he's going to implement memory-mapped I/O. So, there's going to be another specification that lists various memory locations' functions. Reading from some will return information about the ship's systems; writing to other locations will control the ship.
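
So something in this spirit (entirely hypothetical; these addresses and their meanings are invented, there's nothing like them in the spec yet):

  SET A, [0x7000]        ; read, say, a hull-temperature sensor
  SET [0x7010], 0x0001   ; poke, say, a "fire missile" control register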



For those interested, there is a C-based implementation of Notch's DCPU-16 on GitHub: https://github.com/swetland/dcpu16

I haven't tested it much, but it seems to work pretty well.


Here's some work (not mine) on a third-party implementation in Go :) https://github.com/kballard/dcpu.go/tree/master/dcpu


I wonder if it's just me, but this game has me more excited about programming in ASM than I have ever been...

Also, in this game I feel like I'll be playing the reverse of the role I play in real life. IRL I generally advocate using things like Python and open-sourcing everything. In this game I'm definitely going to pull a Steve Jobs: hire a few friends of mine to program my ASM apps and sell them in some kind of Market for insanely marked-up prices, telling people that my Orange(TM) software is so much better than Macrosoft's(TM)! (And it will be!)


When you sell it, however, you'll be selling the source. No pre-compiled binaries, here. Might be a bit difficult to charge for something people can trade for free.


No, the binaries may be easier to disassemble, but it's still machine code. I don't have to include the assembly source if I don't want to.


I created a subreddit for this interesting aspect of 0x10c if anyone is interested:

http://www.reddit.com/r/dcpu_16_programming/


I'm not a low-level programmer, but could someone tell me if you could write an LLVM backend for this CPU? If one does that, could it not then work with many programming languages that support LLVM?

http://llvm.org/docs/WritingAnLLVMBackend.html


I've never written an LLVM backend, but one difficulty that immediately jumps out at me is the 16-bit architecture.

LLVM specifies types with bitwidths and 32 is most commonly used, meaning a backend would have to either emulate 32 bits, forbid the frontends from generating 32 bits, or silently cast them all down to 16 bits.


32-bit types are only common in LLVM because typically the target architecture is 32/64-bit. LLVM will handle 16-bit (and even 8-bit or 13-bit) targets just fine.

That said, an LLVM backend is overkill for this, and I'm sure the simple interpreter Notch has rigged up will be just fine for its purpose.



I've been tempted to learn Verilog; it looks like I might have a fun project to start here, trying to pipeline this. If I were writing the instruction set, I'd be tempted to put all the value codes for next word or PC next to each other so it would be easier to figure out at the decode stage whether the PC would be doing anything wonky, but that isn't a huge impediment or anything. The instruction boundary problem with regard to going superscalar looks as bad as it is in x86, but that's straying pretty far from the design goals.


I'd be interested to see details on memory protection. Self-modifying code means evolutionary programming, which I am very much for.

Conditional branching is strange.


I very much hope there will be some kind of memory protection mechanism. Virtual memory would enable wonderful things.


On the other hand, if there is no protected memory.... code wars abound!


I'm guessing someone is already working on a real-world version of it in an FPGA. Just for fun.


I'm not sure if it's well-specified. What does this do?

    JSR POP
Does the argument POP (or [SP++]) get evaluated before, or after, the "[--SP] <- PC" implicit in JSR?


Interesting question.

Spec says 'a is always handled by the processor before b, and is the lower six bits.'

For Non-basic opcodes, 'a' is actually the opcode, and b is the argument. This would imply JSR is evaluated before POP.

What we want JSR POP to mean, of course, is 'jump to the last item on the stack, then push PC+1 to stack'. So I would guess that's how it actually works, and the spec needs a revision.


I think that line is specific to the basic instructions only (if it weren't, "...and is the lower six bits" would be false).

"For Non-basic opcodes, 'a' is actually the opcode"

Actually it's the single argument -- 'o' is the opcode.


That compels me to take the compilers class next semester and write a C compiler for this precious fictional CPU.

And what about actually creating this CPU?


Already done in a few languages:

http://www.reddit.com/r/dcpu_16_programming/


I think so far there are only assemblers, not C compilers (or any high level languages)?


Ahh, I was only answering the last question.

Now that we have some emulators, people are working on compilers.


Very good subreddit, but I meant physically assembling a DCPU :) If not physically, then with Verilog or VHDL at least.


I follow most of what's occurring within this document, but there's a bit that's throwing me and I hope someone can shed some light. When you're processing the step

SET A,B

Does A have to represent a register, or is it a memory location, or is it both and the register is just a location in RAM?


a and b are just decoded value bits. That means they can be any of those things: 0x00-0x07 is the registers themselves, 0x08-0x0f is addressing via a register, and so on.
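
Concretely, all of these are legal SETs, and the value bits in the instruction word say which form each operand takes (a sketch based on the spec's value table):

  SET A, B           ; 0x00-0x07: register
  SET A, [B]         ; 0x08-0x0f: the address held in a register
  SET A, [0x1000+B]  ; 0x10-0x17: literal address plus register
  SET [0x2000], A    ; 0x1e: a literal memory address as the destination
  SET A, 0x03        ; 0x20-0x3f: small literal, encoded inline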


Hey guys. If you're interested in discussing this, head on over to #0x10c-dev on Freenode!


Is there an official assembler for this yet, or shall I roll my own?


No, but there's an emulator!

https://github.com/swetland/dcpu16

(Found on Reddit. I haven't tried it.)


I'm making a compiler/executor for it: https://github.com/mappum/DCPU-16.


I think he should just use the PDP-8 ISA: just 6 instructions; even stupid people can manage that much.


Notch's original intention was to implement a real CPU like the 6502, but he decided to create a CPU optimized for software emulation instead.

The PDP-8 was designed to be inexpensive to implement with the hardware available at the time. It would not be an ideal choice for this purpose.


Ready - steady - port tcc!


Ok. Start coding now please.




