
True, but I suspect it'll be a lot easier to virtualise all those APIs through WASM than it is for a regular native binary. I mean, half the point of docker is that all syscalls are routed into an LXD container with its own filesystem and network. It should be pretty easy to do the same thing in userland with a wasm runtime.

And the nice thing about that is you can pick which environment a wasm bundle runs in. Want to run it on the browser? Sure! Want to give it r/w access to some particular path? Fine! Or you want to run it "natively", with full access to the host operating system? That can work too!

We just ("just") need wasi, and a good set of implementations which support all the different kinds of sandboxing that people want, and allow wasm containers to talk to each other in all the desirable ways. Make no mistake - this is a serious amount of work. But it looks like a very solvable problem.

I think the bigger adoption problem will be all the performance you leave on the table by using wasm instead of native code. For all its flaws, docker on linux runs native binaries at native speed. I suspect big companies running big cloud deployments will stick with docker because it runs code faster.




As somebody who's in the process of building a sandbox for RISC-V 64 Linux ELF executables, even I'm still on the fence.

The problem is that in WASM-land we're heading towards WASI and WAT components, which is similar to the .NET, COM & IDL ecosystems. While this is actually really cool in terms of component and interface discovery, the downside is that it means you have to re-invent the world to work with this flavor of runtime.

Meaning... no, I can't really just output WASM from Go or Rust and it'll work, there's more to it, much more to it.

With a RISC-V userland emulator I could compile that to WASM to run normal binaries in the browser, and provide a sandboxed syscall interface (or even just pass-through the syscalls to the host, like qemu-user does when running natively). Meaning I have high compatibility with most of the Linux userland within a few weeks of development effort.

But yes, threads, forking, sockets, lots of edge cases - you'd expect it to be hard to provide a minimal spoof of a Linux userland that's convincing enough to do interesting things, but surprisingly it's not too difficult - and with that you get Go, Rust, Zig, C++, C, D etc. and all the native tooling that you'd expect (e.g. it's quite easy to write a gdbserver-compatible interface, but ... you usually don't need it, as you can just run & debug locally then cross-compile).
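For what it's worth, the emulate-vs-pass-through split can be sketched in a few lines. This is a toy illustration in Python (the syscall numbers are the real riscv64 Linux ones, but `TinyUserland` and its behaviour are invented for this sketch, not any real emulator's API):

```python
# Toy sketch of the "minimal spoof of a Linux userland" idea: the emulated
# guest issues riscv64 Linux syscalls by number, and the host-side dispatcher
# either emulates them in a sandboxed way or refuses with -ENOSYS.
SYS_WRITE  = 64    # riscv64 Linux syscall numbers
SYS_GETPID = 172
ENOSYS, EBADF = 38, 9

class TinyUserland:
    """Just enough syscall surface to run a trivial guest."""
    def __init__(self):
        self.stdout = bytearray()   # fd 1 is captured, not passed through

    def syscall(self, nr, *args):
        if nr == SYS_GETPID:
            return 4242             # spoofed: a fixed fake pid is convincing enough
        if nr == SYS_WRITE:
            fd, data = args
            if fd == 1:             # sandboxed stdout
                self.stdout += data
                return len(data)
            return -EBADF           # fd not in our (one-entry) table
        return -ENOSYS              # anything unimplemented

host = TinyUserland()
host.syscall(SYS_WRITE, 1, b"hello\n")
print(host.stdout.decode(), end="")
```

A real emulator would of course forward most of these to the host kernel (as qemu-user does) rather than spoof them, but the dispatch shape is the same.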


> The problem is that in WASM-land we're heading towards WASI and WAT components, which is similar to the .NET, COM & IDL ecosystems. While this is actually really cool in terms of component and interface discovery, the downside is that it means you have to re-invent the world to work with this flavor of runtime.

At the application level, you're generally going to write to the standards plus your embedding. Companies that write embeddings are encouraged/incentivized to write good abstractions that work with the standards to reduce user friction.

For example, for making HTTP requests and responding to HTTP requests, there is WASI HTTP:

https://github.com/WebAssembly/wasi-http

It's written in a way that is robust enough to handle most use cases without much loss of efficiency. There are a few inefficiencies in the WIT contracts (which will go away soon, as async lands in p3), but it's a near-ideal representation of an HTTP request and is easy for many vendors to build on/against.
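For a sense of what those contracts look like, here's a simplified fragment in the style of wasi-http's WIT (condensed and hand-edited for illustration; see the repo above for the real, complete definitions):

```wit
// Illustrative, simplified WIT in the style of wasi-http.
// A component exports this interface; any conforming host can call it.
interface incoming-handler {
  use types.{incoming-request, response-outparam};

  // Invoked once per inbound HTTP request.
  handle: func(request: incoming-request, response-out: response-outparam);
}
```

The point is that the contract lives with the component, so tooling in any language can discover and bind against it.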

As far as rewriting the world goes, that luckily turns out not to be quite true, thanks to projects like wasi-libc:

https://github.com/webassembly/wasi-libc

Networking is actually much more solved in WASI now than it was roughly a year ago -- threads is taking a little longer to cook (for good reasons), but async (without function coloring) is coming this year (likely in the next 3-4 months).

The sandboxing abilities of WASM are near unmatched, along with its startup time and execution speed compared to native.


I'm really eager to see what happens in the near future with WAT & WASI, but I'm also very wary of seeing a repeat of DLL hell.

There are a few niches where standardization of interfaces and discoverability will be extremely valuable for interoperability, and for reducing the development effort needed to bring up products that deeply integrate with many things. Currently each team has to re-invent the wheel for every end-user product they integrate with; the more ideal alternative is that each product ships its own implementations of the standard interfaces, which then simply plug in.

But, the reason I'm still on the fence is that I think there's more value in the UNIX-style 'discrete commands' model. Whether it's WASM or RISC-V I don't think anybody cares; it's much more about self-describing interfaces with discoverability that can be glued together using whatever tools you have at your disposal.


> I'm really eager to see what happens in the near future with WAT & WASI, but I'm also very wary of seeing a repeat of DLL hell.

I think we can at least say WebAssembly + WASI is distinct from DLL hell because at the very least your DLLs will run everywhere, and be intrinsically tied to a version and strict interface.

These are things we've simply never had before, which is what makes it "different this time". Having cross-language runnable/introspectable binaries/object files with implicit descriptions of their interfaces that are this approachable is new. You can't ensure the semantics are the same, but it's a better place than we've been in before.

> But, the reason I'm still on the fence is that I think there's more value in the UNIX-style 'discrete commands' model. Whether it's WASM or RISC-V I don't think anybody cares; it's much more about self-describing interfaces with discoverability that can be glued together using whatever tools you have at your disposal.

It's a bit hard to understand the difference you intended between discrete commands and a self-describing interface - could you explain?

I'd also argue that WASM + Component Model/WASI as a (virtual) instruction set versus RISC-V are very different!


DLLs have already run everywhere since the CLR became cross-platform.

Really, this is walking an already well-trodden path, multiple times over; you can even spot the stretches where the grass no longer grows from how often it has been walked.

https://en.m.wikipedia.org/wiki/UNCOL


The "universal compile target" facet of wasm is much less focal than the "universally embeddable" one.

The sandboxing is the keystone holding up the entire wasm ecosystem; without it, no one would be interested, just as nobody would run JavaScript in browsers without a sandbox (we used to run unsandboxed code there, it was called Flash, and we no longer do).

I am curious why you focus so much on "universal runtimes/compile targets always fail" rather than on its actual strength, when at least in the case of Java applets they failed because their sandbox sucked (and their startup times did too).


Because the WASM sandbox only works to the extent that hackers haven't bothered attacking the existing implementations as hard as they attacked Java applets; and WASM is anyway just one implementation among many of the UNCOL idea, which dates back to 1958.

Additionally, it is a somewhat worthless sandbox, given that the way it is designed doesn't protect against memory corruption within a module; it is still possible to devise attacks that trigger execution flows leading to internal memory corruption, possibly changing the behaviour of a WASM module.

> Nevertheless, other classes of bugs are not obviated by the semantics of WebAssembly. Although attackers cannot perform direct code injection attacks, it is possible to hijack the control flow of a module using code reuse attacks against indirect calls. However, conventional return-oriented programming (ROP) attacks using short sequences of instructions (“gadgets”) are not possible in WebAssembly, because control-flow integrity ensures that call targets are valid functions declared at load time. Likewise, race conditions, such as time of check to time of use (TOCTOU) vulnerabilities, are possible in WebAssembly, since no execution or scheduling guarantees are provided beyond in-order execution and post-MVP atomic memory primitives. Similarly, side channel attacks can occur, such as timing attacks against modules. In the future, additional protections may be provided by runtimes or the toolchain, such as code diversification or memory randomization (similar to address space layout randomization (ASLR)), or bounded pointers (“fat” pointers).

--> https://webassembly.org/docs/security/

Finally, WASM is only as secure as its implementations, whatever the bytecode promises only matters if the runtimes aren't exploitable themselves.


> The sandboxing abilities of WASM are near unmatched, along with its startup time and execution speed compared to native.

Could you expand on this? I think everyone would agree with the first two of these - sandboxing is the whole point of WASM, so it would be excellent at that. And startup latency matters a great deal to WASM programs, again not surprised that runtimes have optimised that.

But execution speed compared to native? Are you saying WASM programs execute faster than native? Or even at the same speed?


Ah this could have been clearer -- the context is userland emulation (and I expand that to broadly mean emulation/VMs and even containers -- i.e. the current group of options). It's not that Wasm is likely to run faster than native, it's that it runs reasonably close to native speed when compared to the other options.

Separately, it also matters what you consider "native" -- it is possible to write programs in a more efficient language (ex. one without a runtime), apply reasonable optimizations, and with AOT/JIT be faster than what could be reasonably written idiomatically in the host language (e.g. some library that already exists to do X but just does it inefficiently).


> the downside is that it means you have to re-invent the world to work with this flavor of runtime.

This is at least one of the reasons we've been building thin kernel interfaces for Wasm. We've built two now, one for the Linux syscall interface (https://github.com/arjunr2/WALI) and one for Zephyr. A preliminary paper we wrote a year or so back is here (https://arxiv.org/abs/2312.03858), and we have a new one coming up in Eurosys 25.

One of the advantages of a thin kernel interface to something like Linux is really low overhead and low implementation burden for Wasm engines. This makes it easier to then build things like WASI just one level up, compiled against the kernel interface and delivered as a Wasm module. Thus a single WASI implementation can be reused across engines.


> One of the advantages of a thin kernel interface to something like Linux is really low overhead and low implementation burden for Wasm engines.

Such a low burden that both Google (gVisor) and Microsoft (WSL1) failed at it!


A thin kernel interface isn't a reimplementation of a kernel. The WALI implementation in WAMR is ~2000 lines of C, most of which is just pass-through system calls.


Okay, so you mean forwarding the syscalls, not implementing them, and thus throwing away the wasm sandbox.


It does not throw away the Wasm sandbox. Sandboxing means two things: memory sandboxing and system sandboxing. It retains the former. For the latter you can apply the same kinds of sandboxing policies as native processes and achieve the same effect, or even do it more efficiently in-process by the engine, and do interposition and whitelist/blacklisting more robustly than, e.g. seccomp.


Alright, selectively forwarding the syscalls, now you're approaching the problem again where you need to reimplement parts of Linux to understand the state machine of what fd 432 means at any given point in time etc; basically you're implementing the ideas of gVisor in a slightly different shape, without being able to run preexisting binaries. Doesn't seem like a useful combination of features, to me.


> you need to reimplement parts of Linux

Again, no. The security policies we have in mind can be implemented above the WALI call layer and supplied as an interposition library as a Wasm module. So you can have custom policies that run on any engine, such as implementing the WASI security model as a library. As it is now, all of WASI has to be implemented within the Wasm engine because the engine is the only entity with authority to do so. That's problematic in that engines have N different incompatible, incomplete and buggy implementations of WASI, and those bugs can be memory safety violations that own the entire process.

Thin kernel interfaces separate the engine evolution problem from the system interface evolution problem and make the entire software stack more robust by providing isolation for higher-level interfaces.


To filter out syscalls for complex policies, you need to understand the semantics of prior syscalls. For example, you need to keep track of what the dirfd in an unlinkat call refers to. And to keep track of FDs you need to reimplement fcntl. And so on.

This is why gVisor contains a reimplementation of parts of Linux.
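To make the fd-tracking point concrete, here's a hypothetical sketch in Python (the class, policy, and paths are all invented for illustration) of the state a filter has to carry just to judge a single unlinkat:

```python
# Why "selective syscall forwarding" pulls in kernel semantics: to judge
# unlinkat(dirfd, name), the filter must know which directory dirfd currently
# refers to, which means mirroring every openat/dup/fcntl(F_DUPFD) result.
import os.path

class FdTracker:
    def __init__(self):
        self.fds = {}      # fd -> directory path it currently refers to
        self.next_fd = 3   # 0/1/2 are stdio

    def openat_dir(self, path):
        # models a successful openat(AT_FDCWD, path, O_DIRECTORY)
        fd, self.next_fd = self.next_fd, self.next_fd + 1
        self.fds[fd] = path
        return fd

    def dup(self, fd):
        # dup()/fcntl(F_DUPFD) must be mirrored or the policy loses track
        new_fd, self.next_fd = self.next_fd, self.next_fd + 1
        self.fds[new_fd] = self.fds[fd]
        return new_fd

    def allow_unlinkat(self, dirfd, name, writable_prefix="/tmp/"):
        base = self.fds.get(dirfd)
        if base is None:
            return False   # an fd we never saw created: deny
        full = os.path.normpath(base + "/" + name)   # catch ../ escapes
        return full.startswith(writable_prefix)

t = FdTracker()
fd = t.openat_dir("/tmp/scratch")
fd2 = t.dup(fd)                                   # policy survives duplication
print(t.allow_unlinkat(fd2, "junk.txt"))          # inside /tmp/: allowed
print(t.allow_unlinkat(fd2, "../../etc/passwd"))  # normpath escapes /tmp/: denied
```

And this toy still ignores chdir, fchdir, /proc/self/fd, fd passing over unix sockets, O_PATH, and rename races, which is exactly the "reimplement parts of Linux" problem.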


Yes, but the engine doesn't need to do this, you can do this on your own time as a library. As there are literally dozens of Wasm engines now, thin kernel interfaces are a stable interface that they can all implement in exactly the same way[1] (simple safety checks + pass through) and then higher-level, more safe, and in some way better policies and APIs can be implemented as Wasm modules on top.

[1] This makes the interface per-kernel, not per-kernel x per-engine. It's also not per-kernel x per-kernel; engines would not be required to emulate one kernel on another kernel.


Oh yes, let's delegate the hardest part back to the caller! Surely nothing will go wrong.

Try writing a seccomp policy for filesystem access (that isn't just 100% yes/no). That's how hard this thing will also be to use.


> let's delegate the hardest part back to the caller!

Obviously, an expert would write the security policies and make them reusable as libraries. Incidentally, that is what WASI is--it's not only a new security model, but a new API that requires rewrites of applications to fit with the new capability design.

> Try writing a seccomp policy for filesystem access

Try implementing an entire new system API (like WASI) in every engine! You have that problem and a whole lot more.

For comparison, implementing WASI preview1 is 6000 lines of C code in libuvwasi--and that's not even complete. Other engines have their own less complete, buggy, broken versions of WASI p1. And WASI p2 completely upends all of that and needs to be redone all over again in every engine.

Obviously, WASI p1 and p2 should be implemented in an engine-independent way and linked in. Which is exactly the game plan of thin kernel interfaces. In that sense, at the very least thin kernel interfaces is a layering tool for the engine/system API split that enhances security and evolvability of both. Nothing requires the engine to expose the kernel interface, so if you want a WASI only engine then only expose WALI to WASI and call it a day.


WASM approach to injecting the host-interaction API seems to me to be similar to what EFI does. You are provided with a table full of magical functions on startup, and that's how you can interact with the host. Some functions weren't provided there? Tough luck.


As someone who has written a RISC-V sandbox for that purpose, I say stay the course. We need more competition to WASM. In the end you'll find that register machines make for faster interpreters than stack machines like WASM. You can have a look at libriscv or message me if you need any help.

Source: https://libriscv.no/docs/performance/


This assumes that everyone implements the same set of APIs that work in the same way.

More likely, the browser will implement some that make sense there, some browsers will implement more than others, Cloudflare workers will implement a different set, AWS Lambda will implement a different set or have some that don't work the same way... and now you need to write your WASM code to deal with these differing implementations.

Unless the API layer is, essentially, a Linux OS or maybe POSIX(?) as it is for Docker (which I doubt, as that's a completely different level of abstraction from WASM), I don't have a lot of faith in this becoming the utopian ideal of a common API, given that as an industry we've so far squandered almost every opportunity to make such common APIs.


Good point! This is the hard work that people are undertaking right now.

Things are going to change a little bit with the introduction of Preview3 (the flagship feature there is async without function coloring), but you can look at the core interfaces:

https://github.com/WebAssembly/WASI/tree/main/wasip2

This is what people are building on, in the upstream and in the Bytecode Alliance.

You're absolutely right about embeddings being varied, but the existence of the standard enforces expectations around a core set of these, and the carving out of embeddings to support different use cases is a welcome and intended consequence.

WASI started out closer to POSIX, but there is a spectacular opportunity to not repeat some mistakes of the past, so some of those opportunities are taken where they make sense / won't cause too much disruption to people building in support.


CORBA, DCOM, RMI, .NET Remoting, Tcl Agents,... but this time it will be better.


It isn't the fault of the group that suggests and standardizes protocols; the problem is everyone thinking they are smarter and can do it better.


Especially when ignoring prior art, and why it eventually went away.


How is Wasm ignoring prior art? Like what mistakes have been made that were already known about before? Genuinely curious.


> the flagship feature there is async without function coloring

Correct me if I’m wrong, but that’s only possible if you separate runtime threads from OS threads, which sounds straightforward but introduces problems relating to stack-lifetimes in continuations so it introduces demands on the compiler and/or significant runtime memory overhead - which kinda defeats the point of trying to avoid blocking OS threads in the first place.

I’m not belittling the achievement there - I’m just saying (again, correct me if I’m wrong) there’s a use-case for function-colouring in high-thread, high-memory applications.

…but if WASI is simply adding more options without taking anything away then my point above is moot :)


Further to this, my (very basic) understanding is that the actual threading implementation will be left up to the integrator, so some implementations may not actually implement any concurrency (a little like the Python GIL in a way), while others may implement real concurrency, therefore meaning that subtle threading bugs could be introduced that wouldn't be seen until you run in other environments.


> Correct me if I’m wrong, but that’s only possible if you separate runtime threads from OS threads, which sounds straightforward but introduces problems relating to stack-lifetimes in continuations so it introduces demands on the compiler and/or significant runtime memory overhead - which kinda defeats the point of trying to avoid blocking OS threads in the first place.

Correct -- note that the async implementation does not address parallelism (i.e. threading) -- it's a language +/- runtime level distinction.

The overhead is already in the languages that choose to support -- tokio in rust, asyncio in python, etc etc. For those that don't want to opt in, they can keep to synchronous functions + threads (once WASI threads are reimagined, working and stable!)

You can actually solve this problem with both multiple stacks and a continuation based approach, with different tradeoffs.

> I’m not belittling the achievement there - I’m just saying (again, correct me if I’m wrong) there’s a use-case for function-colouring in high-thread, high-memory applications.
>
> …but if WASI is simply adding more options without taking anything away then my point above is moot :)

Didn't take it as such! The ability to avoid function coloring does not block the implementations of high-threads/high-memory applications, once an approach to threading is fully reconsidered. And adding more options while keeping existing workflows in place is definitely the goal (and probably the only reasonable path to non-trivial adoption...).

How to do it is quite involved, but there are really smart people thinking very hard about it and trying to find a cross-language optimal approach. For example, see the explainer for Async:

https://github.com/WebAssembly/component-model/blob/main/des...

There are many corners (and much follow up discussion), but it's shaping up to be a pretty good interface, and widely implementable for many languages (Rust and JS efforts are underway, more will come with time and effort!).


True. It's probably worth creating a validation suite for WASI which can check that any given implementation implements all the functions correctly and consistently. Like, I'm imagining a wasm bundle which calls all the APIs it expects in every different configuration and outputs a scorecard showing what works properly and what doesn't.
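As a sketch of the scorecard idea (the interface names and host shape below are invented for illustration, not real WASI APIs):

```python
# Hypothetical conformance scorecard: run a battery of probes against whatever
# host interface the runtime exposes and report which behave as the spec says.
def probe_clock(host):
    # a monotonic clock should hand back an integer timestamp
    return isinstance(host.get("monotonic_now"), int)

def probe_random(host):
    # a 16-byte random request should hand back exactly 16 bytes
    buf = host.get("random_bytes")
    return isinstance(buf, bytes) and len(buf) == 16

PROBES = {"wasi:clocks/monotonic": probe_clock,
          "wasi:random/get-bytes": probe_random}

def scorecard(host):
    results = {name: bool(probe(host)) for name, probe in PROBES.items()}
    for name, ok in sorted(results.items()):
        print(f"{'PASS' if ok else 'FAIL'}  {name}")
    return results

# A host that implements the clock but returns a short random buffer:
partial_host = {"monotonic_now": 123456789, "random_bytes": b"short"}
results = scorecard(partial_host)
```

A real suite would of course be a wasm component probing actual WASI imports, but the shape (probe everything, emit a scorecard) is the same.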

I suspect you're right - unless people are careful, it'll be a jungle out there. Just like javascript is at the moment.


I understand your point but sadly I think it's too idealistic (not that we shouldn't strive for these goals). We already have those sorts of tests for browsers, and browser compatibility is still a problem. We have acceptance tests for other areas, like Android's CTS tests, but there are still incompatibilities.

That also assumes that everyone involved wants compatibility, and that's unlikely. Imagine a world where every WASM implementation is identical. If one implementation decides to change something to implement an improvement to differentiate themselves in the market, they'll likely win marketshare from the others.

Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.


> I understand your point but sadly I think it's too idealistic (not that we shouldn't strive for these goals). We already have those sorts of tests for browsers, and browser compatibility is still a problem. We have acceptance tests for other areas, like Android's CTS tests, but there are still incompatibilities.

I think the browser problem is a marketshare/market power problem, and Wasm doesn't have that problem.

Also, I'd argue that compat tests for JS engines and browsers are an overall positive thing -- at least compared to the world where there is no attempt to standardize at all.

> That also assumes that everyone involved wants compatibility, and that's unlikely. Imagine a world where every WASM implementation is identical. If one implementation decides to change something to implement an improvement to differentiate themselves in the market, they'll likely win marketshare from the others.

This is a good thing though -- as long as it happens without breaking compatibility. Users are very sensitive to changes that introduce lock-in/break standards, and the value would have to be outsized for someone to forgo having other options.

> Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.

I think you can see this playing out right now in the Wasm ecosystem, and it isn't working out like you might expect. There are great benefits in building standards because of friction reduction for users -- as long as there is a "standards first" approach, people overwhelmingly pick it if functionality is close enough.

Places that make sense to differentiate are differentiated, but those that do not start to get eaten by standards.

I think organizations that are aware of this problem and attempt to address it directly, like the Bytecode Alliance, are also one of the only bulwarks against it.


> I think the browser problem is a marketshare/market power problem, and Wasm doesn't have that problem.

No, it really isn’t.

For more than the last two decades every browser bar IE looked towards compatibility and only included differences as browser-specific extensions.

And even when Microsoft eventually caved and started the Edge project to create a compatible browser, they ended up admitting defeat and pivoted to Chromium themselves.


Maybe I'm just not understanding, but I'm not sure how this precludes it being a marketshare problem -- the thing is that the marketshare leader doesn't have to worry about compatibility/being interoperable.

> And even when Microsoft eventually caved and started the Edge project to create a compatible browser, they ended up admitting defeat and pivoted to Chromium themselves.

This can be interpreted as a problem of marketshare not staying balanced. It may have shifted hands, but the imbalance is the problem -- if Chrome had to deal with making changes that would be incompatible with half the users that visit sites on Chrome, they'd be forced to think a lot more about it.

This doesn't mean they can't add value in the form of non-standardized extensions; forbidding that isn't a desirable goal, because it would stifle innovation. The point is that if users on browser Y get a "this site only runs on browser X", they're just not going to visit that site, and developers are going to shy away from using that feature. In a world with lopsided marketshare, there's not much incentive for the company with the most marketshare to be interoperable.


IE hasn’t been the market share leader in a long time and couldn’t even retain compatibility with itself, let alone any ACID tests nor wider formalised standards.

And these days the problem is simply that the specifications are so complex and fail mode so forgiving that it’s almost impossible for two different implementations to output entirely the same results across every test suite.

Neither of these are market leader problems. The former is just Microsoft being their typical shitty selves. While the latter is a natural result of complex systems designed for broad use even by non-technical people.


fair point -- I meant the IE -> Chrome shift skipped over a world where more browsers held more equal share.

Agree on the other points though, market share is clearly not the only problem!


> We already have those sorts of tests for browsers, and browser compatibility is still a problem. We have acceptance tests for other areas, like Android's CTS tests, but there are still incompatibilities.

Yeah - but it's barely a problem today compared to a few decades ago. I do a lot of work on the web, and it's pretty rare these days to find my websites breaking when I test them in a different web browser. That used to be the norm.

I think essentially any time you have multiple implementations of the same API you want a validation test suite. Otherwise, implementation inconsistencies will creep in. It's not a wasm thing; it's just a normal compatibility thing.

CommonMark is a good example of what doing this right looks like. The spec is accompanied by a test suite - which in their case is a giant JSON list containing input markdown text and the expected output HTML. It's really easy to check whether any given implementation is CommonMark compliant by just rendering everything in the list to HTML and checking that the output matches:

https://spec.commonmark.org/
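In miniature, that check looks like this (toy renderer and invented cases; the real suite is the JSON list shipped with the spec, and real renderers handle far more than this):

```python
# CommonMark-style conformance in miniature: each case pairs an input with the
# exact expected output, so "compliant" becomes a mechanical check.
import json

SUITE = json.loads("""[
  {"markdown": "# hi\\n",  "html": "<h1>hi</h1>\\n"},
  {"markdown": "plain\\n", "html": "<p>plain</p>\\n"}
]""")

def toy_render(md):
    # Deliberately minimal renderer covering only the two cases above.
    if md.startswith("# "):
        return "<h1>" + md[2:].strip() + "</h1>\n"
    return "<p>" + md.strip() + "</p>\n"

failures = [c for c in SUITE if toy_render(c["markdown"]) != c["html"]]
print("compliant" if not failures else f"{len(failures)} failures")
```

Because the suite is just data, any implementation in any language can run the same check.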

> Most companies implementing WASM will tend to want to a) control the spec in their own favour, and b) gain advantages over other implementations. And this is why we can't have nice things.

Your cynicism seems miscalibrated. We have hundreds of examples of exactly this kind of successful cross-company collaboration in computing. For example, at the IETF you'll find working groups for specs like TCP, HTTP, BGP, email, TLS and so on. The HTTP working group alone has hundreds of members, from hundreds of companies. The WHATWG and the W3C do the same for browser APIs. Then there are the hardware groups, who manage specs like USB, PCI / PCIe, Bluetooth, WiFi and so on. Or programming language standards groups.

Compatibility can always be better, but generally it's great. We can have nice things. We do have nice things. WASM itself is an example of that. I don't see any reason to expect these sorts of collaborations to stop any time soon.


I have a funny feeling that some day we'll see Docker ported to wasm to "abstract it away cleanly". History is a circle and all that.


> half the point of docker is that all syscalls are routed into an LXD container with its own filesystem and network. It should be pretty easy to do the same thing in userland with a wasm runtime.

This is a serious misunderstanding of how containers work.

Containers make syscalls. The Linux kernel serves them. The kernel has features that let one put userspace processes in namespaces where they don't see everything. There is no "routing". There is no "its own filesystem and network", just a namespace where only some of the host's filesystems and networks are visible. There is no second implementation of the syscalls in that scenario.

For WASM, someone has to implement the server side of "file I/O over WASI", "network I/O over WASI", and so on. And those APIs are likely going to look somewhat different from Linux syscalls, because the whole point of WASM is sandboxing.

Quite far from "pretty easy".


> True, but I suspect it'll be a lot easier to virtualise all those APIs through WASM than it is for a regular native binary. I mean, half the point of docker is that all syscalls are routed into an LXD container with its own filesystem and network. It should be pretty easy to do the same thing in userland with a wasm runtime.

All of this sounds too good to be true. The JVM tried to use one abstraction to abstract over different processor ISAs, different operating systems, and a security boundary. The security boundary failed completely; as far as I understand, WASM is choosing a different approach here - good. The abstraction over operating systems was a partial failure: it succeeded well enough for many types of server applications, but it was never good enough for desktop applications and system software. The abstraction over the CPU was and is a big success, I'd say.

What exactly makes you think it is easier with WASM as a CPU abstraction to do all the rest again? Even when thinking about so diverse use-cases like in-browser apps and long running servers.

A big downside of all these super-powerful abstraction layers is reacting to upstream changes. What happens when Linux introduces a next-generation network API that has no counterpart in Windows or in the browser? What happens if the next language runtime wants to implement a low-latency GC? Azul first designed a custom CPU, and later changed the Linux API for memory management, to make that possible for their JVM.

All in all, the track record of attempts to build the one true solution for all our problems is quite bad. Some of these attempts discovered niches in which they are a very good fit, like the JVM; others are a curiosity of history.


Docker and LXD are competing projects. Docker does not use LXD to launch containers. LXD was written by the lead dev (at Canonical) of LXC, which was not as polished as Docker but sort of kind of did the same thing (ran better chroots).

They both use Linux kernel features such as control groups and namespaces. When put together, this is referred to as a container, but the kernel has zero concept of “a container”.


Docker started using an LXC driver to run workloads, but it was deprecated 10 years ago. No LXC remaining there :P


Cool but lxd is still a competitor to docker. I’ve read the move code for some work reasons. No lxc, as you said :)


Late reply, but sorry, I was trying to reply to the parent comment, which was mixing up Docker and LXD. Anyway, I don't see them as competitors. Although they do (at the core) a similar thing, I use them both for different tasks: LXD for something more similar to classic virtualization, and Docker for more tightly packaged things that are separated from their configuration and/or storage.


> It should be pretty easy to do the same thing in userland with a wasm runtime.

Not easy and certainly not fast.


When did Docker start using LXD? I never knew.


Never; the GP is wrong.



