Simple web-focused Docker-based mini-PaaS server (octohost.io)
71 points by akoenig on July 25, 2014 | 31 comments



Hi.

Let's pretend that I'm an intermediate developer with no knowledge of sysadmin or even deployments. How would you break this down and ELI5 it [0]?

How does Packer fit into all of this (assuming I'd use Digital Ocean)? What's Vagrant for? What does the virtual machine do here? Do I need these three tools on the target VPS or only on my local development machine? To add my DO keys, do I need to SSH into Vagrant once it's up and running?

You provide the octovagrant box; is it secure? Is Vagrant production-ready, or is it not part of the production mix? You've got 6 cookbooks listed in the cookbook repo. Do I need all of them? How do I use/install each of them?

What does Docker do? Once I've done all of this setup work, how can I push all of my code up to the desired VPS? Do any of the defaults have security provisions: set up ufw rules to only allow port 80 etc. [1], disallow root access, only allow SSH access, and all of that goodness? If I use this instead of manually provisioning and securing servers, do I get sane and secure defaults?

That's a lot of questions, but I may not be the only one asking them, so if I may so bravely ask, ELI5?

[0] Explain like I'm 5 - like http://www.reddit.com/r/explainlikeimfive

[1] Just suspend disbelief about my zero knowledge of setting up servers


octohost is like a mini personal Heroku.

Packer is a tool that helps to build the image that octohost runs on. It installs all of the software needed and prepares the VM it runs on.

Vagrant is a tool that allows you to run different virtual machines on your local development box. It has nothing to do with production at the moment - it's just for running it locally.
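
If it helps, the local loop looks roughly like this (a sketch only - the box name and URL below are placeholders; grab the real octovagrant box from the octohost docs):

    # rough sketch of the local Vagrant workflow - box name/URL are
    # placeholders, not the real octovagrant box
    vagrant box add octohost https://example.com/octohost.box
    vagrant up     # boot the VM locally
    vagrant ssh    # log into it and poke around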

The octovagrant box is pretty open - but that's because it's for running things locally. When it's installed on AWS/DO/Rackspace/Linode/etc. it's firewalled from remote people - but still is pretty open internally. I wouldn't let untrusted people push to it at the moment.
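
If you want the locked-down defaults the parent asked about, the usual ufw recipe on the host looks something like this (a generic sketch of standard practice, not something octohost sets up for you):

    # generic ufw lockdown - not an octohost default
    sudo ufw default deny incoming
    sudo ufw allow 22/tcp    # ssh
    sudo ufw allow 80/tcp    # http
    sudo ufw enable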

Yes - you need all of the cookbooks, but Packer will take care of that for you - you don't have to really worry about that.

If you're just using the AWS AMI that we've already built - then you can really ignore Packer and Chef - just launch the AMI and you're done.
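
The AMI route is roughly one CLI call (a hedged sketch - the AMI ID, key pair, and security group below are placeholders; use the real AMI ID from the octohost site):

    # ami-xxxxxxxx, my-keypair and octohost-sg are placeholders
    aws ec2 run-instances \
      --image-id ami-xxxxxxxx \
      --instance-type m3.medium \
      --key-name my-keypair \
      --security-groups octohost-sg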

Docker allows you to run processes inside of a container. So you can launch a set of processes from your source code and have them run on their own.
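
For example (the image name and port mapping here are made up for illustration):

    # run an image as a detached container, mapping host port 80 to the app
    docker run -d -p 80:8080 mysite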

Once it's set up, you merely add a remote git target and push your code to the server; it builds and launches the code you've pushed. It works like Heroku for simple websites.
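
In practice the push flow is something like this (the server hostname and repo name are illustrative, not the exact octohost remote format):

    # add the octohost server as a git remote, then push to deploy
    git remote add octohost git@my-octohost-server:mysite
    git push octohost master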

That help at all?


Yeah - currently deploying it from scratch is a little too hard for most people. I'd like to fix that, but just haven't had the time.

I'd honestly love a system that deploys the image for you to the cloud of your choice - with setup for ssh keys and things to make it simple.

Just not in the cards timewise at the moment.


FYI not everyone wants to run this in "the cloud". I have a LOT of bare metal and want to deploy something like this on bare metal. Please don't forget us users who buy our own hardware!


Same here. I'm waiting for a clean and nice solution to utilize all the spare hardware I have in my company. It would be great to have a simple PaaS running on one machine, and it would be awesome to utilize them all. If someone could give me step-by-step instructions or a solution for how to do this, I would really pay for that (or support open source, if that's the case).


The knife solo page should help with this:

http://www.octohost.io/knife-solo.html


I don't think so. Why would I need Vagrant on the server?

    ~/octohost-cookbook$ sudo rake knife_solo user=root ip=172.16.90.151
    rake aborted!
    Kitchen::UserError: Vagrant 1.1.0 or higher is not installed. Please download a package from http://downloads.vagrantup.com/.


Check out CoreOS + Deis.


You don't need a cloud. You can deploy it on whatever hardware you have as long as you have an SSH connection:

http://www.octohost.io/knife-solo.html


Is this only for websites? I'm assuming it just executes the Dockerfile and uses the run command. I have set up a hacky bash script myself to do this, but I also use it for daemons which don't need to expose any ports.

The theory of operation page hints that if no ports are exposed, then the container isn't launched. It would be great if the following were possible:

1) No ports exposed. Just run some software and that's it.

2) Expose one or multiple http ports with different domains for each.

3) Expose one or multiple tcp/udp ports, which get mapped directly to a host port.

I also can't see if there's any support for volumes, but if not, that also seems fairly important.

For what it's worth, here[1] is how I handled it, but the project is very sloppy and I do not recommend its use to anyone since I'm looking to switch ;)

[1] https://github.com/r04r/dockah/blob/master/dockah.sh#L35 (reads a config file like https://gist.github.com/r04r/d5d0ea6506824e2cf6d9)


This is mainly focused on websites, but you can launch other items that don't require any ports exposed:

http://www.octohost.io/data-stores.html

I added some "magic" Dockerfile comments - this is the one to add in order to not expose anything via HTTP:

NO_HTTP_PROXY
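
In a Dockerfile that would look something like this (a minimal sketch - the base image and command are made up, and I'm assuming the magic comment is a plain # comment line):

    # write a minimal worker Dockerfile; NO_HTTP_PROXY tells octohost not
    # to wire this container into the HTTP proxy (placement assumed)
    cat > Dockerfile <<'EOF'
    FROM ubuntu:14.04
    # NO_HTTP_PROXY
    CMD ["/usr/local/bin/run-worker"]
    EOF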

I am not a huge fan of volumes where data is stored on the box - but you can do it the same way I've described on the page.
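
If you do want a host volume, it's just the standard Docker flag (paths and image name here are illustrative):

    # bind-mount a host directory into the container for persistent data
    docker run -d -v /srv/mysite-data:/var/lib/data mysite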

I think of octohost boxes as app servers - if you have data on it, you should likely have it stored elsewhere. I've used Heroku to store additional data sometimes:

http://blog.froese.org/2014/03/17/using-octohost-with-heroku...

We've also used remote MySQL servers:

https://github.com/octohost/mysql-plugin

There's lots of ways to do it.


Could you elaborate on what you're using Chef for and why you didn't use something like Fig?

I'm getting a bit overwhelmed by the number of meta tools around docker deployment.

Also, and most importantly, how are you handling logging? Is it being persisted on a host volume or is it rsyslog-streamed?


Chef is being used to build the VM and put the entire thing together. But after it's built - you never need to use Chef again.

Fig didn't exist when I started this and I wanted to mirror the git push that Heroku used.

Logging is handled by Docker - and can be sent to services like Papertrail if you use something like logspout:

http://blog.froese.org/2014/05/15/docker-logspout-and-nginx/
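
Roughly, you run logspout alongside your containers and point it at a syslog endpoint (the Papertrail host/port below are placeholders; check the logspout README for the exact image name and socket path):

    # forward all container logs to a remote syslog endpoint (e.g. Papertrail)
    docker run -d \
      -v /var/run/docker.sock:/tmp/docker.sock \
      progrium/logspout \
      syslog://logs.papertrailapp.com:12345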


Finally allowed back in - HackerNews locked me out because I was "Submitting too fast." Needed more karma.


How does that compare to dokku?


Dokku was the inspiration for it. I wanted a simple way to deploy small websites and I really wanted to understand how it all worked together - so I built my own.

One major difference (like already mentioned) is that I wanted to use Dockerfiles rather than Procfiles. I wanted to have full control over how it built and ran - and I didn't want to go through slugbuilder / buildstep at the time.

It may actually be easier on end users to use Heroku buildpacks and abstract some of the magic - but for our uses Dockerfiles were the best fit.

Now, the Tutum team came out with something that simplifies it even more - and it might be worth looking at:

http://blog.tutum.co/2014/04/10/creating-a-docker-image-from...

I have had that on my backlog for several months and just haven't had time.


Thanks for the mention! I'm part of the Tutum team, and we actually made an open source project called Boatyard.io

It's basically a web UI that builds images from tarballs, Dockerfiles, and GitHub repos. We use it internally all the time, and we thought we'd contribute back.

You can find our post about it here:

http://blog.tutum.co/2014/05/26/introducing-boatyard/

Great job on octohost.io!


As far as I know it is completely Dockerfile-based, whereas dokku utilizes Procfiles. Therefore you could build and start any Docker container you like.


You should add comments to your example to explain what each step is doing.


If you're talking about this:

http://www.octohost.io/screencast.html

Then I have detailed what happens more thoroughly here:

http://www.octohost.io/theory-of-operation.html

That help?


I meant your "Advanced quickstart with AWS" on the front page. I can follow what it does, but I expect it's not obvious to many people.


There's a fully explained version here:

http://www.octohost.io/aws-install.html

I should link to it from the homepage - that will be added shortly.


Does it offer similar features to Dokku or is it more like Deis?


It is more like Deis. A miniature version :)


That's the goal - but I was working on this mostly by myself on weekends, and I think they have a team working on it full time.

Deis is a much more mature choice and is based on CoreOS which is awesome.


I have to ask. Why isn't it installed via Docker?


At the time we started, I wasn't happy with running some of the base items inside of Docker.

I'd likely change that today if I was starting now - but that was in November 2013 when Docker was much more beta.


Why the move from Ansible to Chef?


I'm curious about this as well - for the most part you hear about people having moved the other way, from Chef to Ansible (personally, I'm still using both).

Ansible also (currently) seems a bit more suited for the Docker workflow, so I'd be interested to hear about the developer's experiences with this.


I have been using Chef for a number of years.

I started this project with Ansible because I wanted to learn how to use it.

As I went along, I wanted to learn more about TDDing Chef cookbooks with Test Kitchen so I switched back to Chef.


Because Chef is awesome? (no real answer here, just a fan)





