Considering the current evolution of automation and the global forecasts of high unemployment, which would be a problem for the whole economy (with Bill Gates suggesting a tax on robots), what are your thoughts about education? We currently educate kids with a system that assumes today's jobs will still exist in 15-20 years and for the rest of their lives. Can startups and technology help improve education for that future world, given the current pace of automation in most fields?
I love this kind of article. For some people it might be a waste of time, but to me it is really good to see such amazing willingness to dig into the problem. I am tempted to get a pair of AirPods, not totally sure yet, but if I do get a pair in the future, I will definitely remember this post in case I have trouble with them.
I am a big fan of Swift and Scala... and, to be honest, Swift has a long way to go before a big system can be deployed with it: the tooling and libraries are not there yet, not to mention that the lack of ABI stability makes the source code necessary for any external dependency.
It was 22 fields, and the problem was not serializing/deserializing; it was the Scala function and tuple system not supporting case classes with more than 21 fields. This limitation has been removed since Scala 2.11.
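To make that concrete, here is a minimal sketch (a hypothetical 23-field case class, names made up purely for illustration): the case-class arity restriction tied to FunctionN/TupleN was lifted in 2.11, so this compiles on 2.11+ but is rejected by earlier compilers.

```scala
// Hypothetical wide record, purely to illustrate the lifted limit:
// rejected before Scala 2.11, compiles on 2.11+.
case class Wide(
  f1: Int,  f2: Int,  f3: Int,  f4: Int,  f5: Int,  f6: Int,
  f7: Int,  f8: Int,  f9: Int,  f10: Int, f11: Int, f12: Int,
  f13: Int, f14: Int, f15: Int, f16: Int, f17: Int, f18: Int,
  f19: Int, f20: Int, f21: Int, f22: Int, f23: Int
)

object WideDemo extends App {
  val w = Wide(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
               13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23)
  println(w.f23) // field access, equals/hashCode and copy all work
  // Caveat: FunctionN/TupleN themselves still stop at 22, so a case
  // class this wide gets no .tupled and no tuple-based unapply.
}
```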
I guess their conclusion is that this is the maximum size for the goal they want to achieve. Can it be bigger without having an impact on the quality of the event?
The big issue right now is funding replication research: who wants to fund a study that proves someone else was right? Most of these grants are awarded based on the potential outcome of the new discovery: potential business, patents, licenses, etc... Not being the first would probably wipe out most of those benefits, cutting the probability of getting funded down to a small one...
Now, straight to the point: who's going to pay for the repeated research to confirm the first one?
At a lower level, I think it should be mandatory for master's students to do a pre-thesis project that replicates the findings of a published paper.
It would pick off the low-hanging fruit in terms of testing reproducibility, and since there is a published paper, the student has access to guidelines for setting up and reporting on a large project, which will help them learn how to do their own, original thesis.
I had my master's students do this as part of my wireless networking class this year. It was very instructive for me, and the students seemed to enjoy it, so I'll definitely keep it in the syllabus.
> Now, straight to the point: who's going to pay for the repeated research to confirm the first one?
Who is and who should are two different questions. The body that funded the original research should be best placed to fund verification of that research. If the research isn't compelling enough to fund verification, then why was it funded in the first place? And if the principal research group is requesting additional funding for more research that builds on the initial unverified research, then that sounds like poor governance.
I realise that this simplistic view gets messy and confused when the research is really academic-led innovation and incubation.
This works for some "controversial" studies, which can be dismantled. What about successful (or apparently successful) ones, which may lead to tons of $$$ in return? Is there any value in paying for a study that is going to prove someone else was right, ending up making their wallets fatter?
Sorry for the informal language, but it makes things a little bit more salty.
If someone is making money from it, someone else is (most likely) losing money from it as well.
When the lightbulb was invented, Edison made lots of money, but I am sure candlemakers had plenty of incentive to fund research that hypothesized that lightbulbs emitted toxins.
Yes, with Postgres (which I am most confident talking about), more precisely using PostGIS, you can do that in a matter of hours, using it for geo-queries and indexing to surface the important stuff (new, trending, etc...). Plus, Postgres is supported basically everywhere, in any tech stack. I still don't get one point: why do people totally ignore SQL databases by default for new products? I know MongoDB, RethinkDB, CouchDB, etc... are really fascinating solutions, but why eliminate SQL by default without even considering it? I am just curious.
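As a minimal sketch of the kind of geo-query meant here: a radius search backed by a GiST index. Everything below is hypothetical (the posts table, its geom geography column, and the connection details are made up); it assumes the PostGIS extension is installed and the PostgreSQL JDBC driver is on the classpath.

```scala
import java.sql.DriverManager

object NearbyPosts extends App {
  // One-time setup (hypothetical schema):
  //   CREATE TABLE posts (id bigserial PRIMARY KEY, title text,
  //                       created_at timestamptz DEFAULT now(),
  //                       geom geography(Point, 4326));
  //   CREATE INDEX posts_geom_idx ON posts USING GIST (geom);
  val conn = DriverManager.getConnection(
    "jdbc:postgresql://localhost:5432/app", "app", "secret")
  try {
    // ST_DWithin on geography takes a distance in meters and can use
    // the GiST index, so "what's near me" stays an index scan.
    val stmt = conn.prepareStatement(
      """SELECT id, title
        |FROM posts
        |WHERE ST_DWithin(geom, ST_MakePoint(?, ?)::geography, ?)
        |ORDER BY created_at DESC
        |LIMIT 20""".stripMargin)
    stmt.setDouble(1, -122.42) // longitude
    stmt.setDouble(2, 37.77)   // latitude
    stmt.setDouble(3, 5000)    // radius in meters
    val rs = stmt.executeQuery()
    while (rs.next()) println(s"${rs.getLong("id")}: ${rs.getString("title")}")
  } finally conn.close()
}
```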
That's only needed for the coordination... that said, it would be easy enough to segment channels based on a certain precision of geohash... messages sent target 9 channels, your current cell and the neighboring cells (see the sketch below)... you subscribe to the channel you are in, and this updates every N seconds.
Channel position/calculation can happen client-side, and so can subscribe/unsubscribe. Though that may leave room for unscrupulous behavior, it could be locked down a bit more by moving sub/unsub server-side.
The issue will be growth/routing/rerouting of channel data... even then, you can get pretty far with RabbitMQ-backed socket.io... you might need to build something custom before hitting 10M simultaneous users, which at the current growth rate would be an issue anyway.
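Here is a minimal, self-contained sketch of that channel scheme, independent of any particular pub/sub stack (the geo: channel prefix and precision 6, giving cells of roughly 1.2 km x 0.6 km, are assumptions): encode the position as a geohash for the cell you subscribe to, and derive the 9 publish targets by re-encoding points offset by one cell width/height. Poles and the antimeridian would need special casing that this sketch ignores.

```scala
object GeoChannels {
  private val Base32 = "0123456789bcdefghjkmnpqrstuvwxyz"

  /** Standard geohash encoding: alternate longitude/latitude bisection,
    * emitting one base-32 character per 5 bits. */
  def encode(lat: Double, lon: Double, precision: Int): String = {
    var latLo = -90.0;  var latHi = 90.0
    var lonLo = -180.0; var lonHi = 180.0
    val sb = new StringBuilder
    var evenBit = true // even bits refine longitude, odd bits latitude
    var bit = 0
    var ch = 0
    while (sb.length < precision) {
      if (evenBit) {
        val mid = (lonLo + lonHi) / 2
        if (lon >= mid) { ch = (ch << 1) | 1; lonLo = mid }
        else            { ch = ch << 1;       lonHi = mid }
      } else {
        val mid = (latLo + latHi) / 2
        if (lat >= mid) { ch = (ch << 1) | 1; latLo = mid }
        else            { ch = ch << 1;       latHi = mid }
      }
      evenBit = !evenBit
      bit += 1
      if (bit == 5) { sb.append(Base32(ch)); bit = 0; ch = 0 }
    }
    sb.toString
  }

  /** The 9 channels a message targets: own cell plus its 8 neighbors,
    * found by shifting the point by exactly one cell in each direction. */
  def targetChannels(lat: Double, lon: Double, precision: Int): Set[String] = {
    val lonBits = (5 * precision + 1) / 2 // longitude gets the extra bit
    val latBits = 5 * precision - lonBits
    val dLon = 360.0 / (1L << lonBits)
    val dLat = 180.0 / (1L << latBits)
    (for {
      dy <- Seq(-dLat, 0.0, dLat)
      dx <- Seq(-dLon, 0.0, dLon)
    } yield "geo:" + encode(lat + dy, lon + dx, precision)).toSet
  }
}

object ChannelDemo extends App {
  // Hypothetical client position, precision 6:
  println("subscribe: geo:" + GeoChannels.encode(37.7749, -122.4194, 6))
  println("publish:   " + GeoChannels.targetChannels(37.7749, -122.4194, 6))
}
```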
I think it's because the only really viable scaling option for Postgres is vertical scaling. Even just setting up any sort of replication with automatic failover is still a pain (multi-master is not yet built in, and master-slave also needs a 3rd-party failover program...).
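For context on why this is painful: built-in streaming replication gets you a read-only hot standby with just a few settings, but promoting the standby on failure is a manual step (pg_ctl promote, or SELECT pg_promote() on 12+), which is exactly what third-party tools such as repmgr or Patroni automate. A minimal sketch, assuming Postgres 12+; the hostname and replication role below are made up:

```
# primary: postgresql.conf
wal_level = replica        # WAL detailed enough to feed a standby
max_wal_senders = 5        # allow replication connections

# standby: cloned from the primary with pg_basebackup, with an empty
# standby.signal file and, in postgresql.auto.conf:
primary_conninfo = 'host=db-primary port=5432 user=replicator'
hot_standby = on           # serve read-only queries while replaying WAL
```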
Replication with automatic failover? I'd go for it immediately, unless you are okay with long downtime and some data loss in case the server goes down.
But if you can live with that, then yes, you're unlikely to have actual scaling problems - at least not for projects like the OP's.