The thing to remember is that those directions are used for an awful lot of things that have nothing to do with which way the ship is moving -- like naming the places where you store things or various pieces of equipment. Changing those names would create a lot of mayhem.
When I captain a boat it is more important for me to communicate things unequivocally. I need to be able to specify what I want to happen, precisely. The crew does not really need to concern themselves with the direction the boat is moving or what the bigger plan is. They just get orders to do specific tasks. Only I (or whoever is responsible for the boat at the moment) really need to understand the context of those tasks, although ideally the crew would also understand why they are doing things, so that they can anticipate further orders or signal when I make a mistake.
This is important because the plan may sometimes change, and then my job is to issue a new set of orders; I can't really start by explaining what the new plan is or asking for consent.
I have never been in a situation like that, but if I were captaining a boat that can change direction, I would still keep stable naming that does not change when the boat stops or changes direction.
Though, as it turns out, double-ended craft are defined by the direction they are operating in.
I'm curious about other items that are defined in this way. It does make sense that broad rules are lifted for obvious reasons. That's why I can name a few things where the screw threading is reversed from what we typically use.
Yep. For example, I use control theory to keep my services just a few percent below the maximum throughput achievable on the server. Then I use other tricks (like batch processing) to make the application MORE efficient as the traffic increases.
The end effect is that I can back off 1-5% from the maximum throughput and keep the service running there happily.
I would like to take this occasion to point out that all the discussion about unused CPU is, at this point, completely pointless.
Most services I have seen waste ORDERS of magnitude by being inefficient. Rather than focusing on trying to saturate the CPU and other resources it is almost always better to just make your application more efficient. That last 30% should be a cherry on top.
Batching (and sorting and merging) are things our predecessors in the 1950s and 1960s (and before that, in the card era) had to do to run anything at all. These days they are things that we may do to make sluggish systems snappy.
You know how many systems have "performance" configuration? I use a controller that monitors the state of the system and changes those parameters in real time, to regulate the system to stay within the desired state as its environment changes.
As a very simplified example, imagine a backend service that is being called by external customers and does not control how those customers call it. I can add a delay to each response, and I can have even something as simple as a PID controller regulate CPU usage by changing that delay. A larger delay will usually cause the clients to slow down their requests (a request usually being the result of a previous request completing). This is a simple and naive example, but it is more or less what I do.
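To make that concrete, here is a minimal sketch of the idea (assuming psutil for CPU readings; the setpoint and gains are made-up illustrative values, not tuned ones):

```python
# A sketch only: a PID loop holding CPU near a setpoint by tuning a delay.
import psutil  # assumed available for CPU readings

SETPOINT = 80.0                    # target CPU utilization, percent
KP, KI, KD = 0.002, 0.0005, 0.001  # made-up gains; would need tuning

integral = 0.0
prev_error = 0.0
delay_s = 0.0                      # artificial delay added to each response

while True:
    cpu = psutil.cpu_percent(interval=1.0)  # blocks 1 s, returns percent
    error = cpu - SETPOINT                  # positive when running too hot
    integral += error
    derivative = error - prev_error
    prev_error = error

    # Position-form PID: higher CPU -> longer delay -> clients slow down.
    delay_s = max(0.0, KP * error + KI * integral + KD * derivative)
    # ...the request handler would sleep for delay_s before responding...
```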
(Of course, in reality, it is much better to just have a backpressure mechanism, and whenever possible you should use one rather than try to work around HTTP's inadequacies. But you can't always do that, especially if you have a public API.)
I also typically have lots of other controllers: for example, something that regulates memory usage by limiting transactions in flight, something that regulates latency as seen by priority clients, or database replication rate/delay, or error rates, or a bunch of other parameters.
I also routinely take care of babysitting downstream systems like databases or other APIs. I may have a regulator that will automatically start backing off certain types of traffic in response to increasing error rates or latencies in a downstream system. All this because those downstream systems are usually shit and not designed to deal with overload, and it is easier for me to deal with this proactively than to do what everybody else does -- keep bugging those people to fix their issues when they evidently don't know how.
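A hypothetical sketch of what one of those downstream regulators can look like -- an AIMD-style guard that sheds low-priority traffic when a dependency's error rate climbs (every threshold, step size, and name here is illustrative):

```python
import random

class DownstreamGuard:
    def __init__(self, error_threshold=0.05):
        self.error_threshold = error_threshold
        self.admit_fraction = 1.0  # share of low-priority traffic admitted

    def update(self, recent_error_rate: float) -> None:
        """Call periodically with the downstream's observed error rate."""
        if recent_error_rate > self.error_threshold:
            self.admit_fraction = max(0.0, self.admit_fraction * 0.5)   # back off hard
        else:
            self.admit_fraction = min(1.0, self.admit_fraction + 0.05)  # recover slowly

    def admit(self, priority: str) -> bool:
        if priority == "high":
            return True  # never shed priority traffic
        return random.random() < self.admit_fraction
```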
I have been trying to move away from dumb rate limiting to a more holistic approach that allows us to make smarter decisions about traffic. Your overview intrigued me.
Do you have any references you like to use? I am looking at the Wikipedia page, but it's so removed from practical aspects.
I don't. There simply isn't any tooling or literature to speak of. I have some experience using control engineering in my electronics projects and that's how I came up with the idea to use it for backend systems. I have researched and developed everything myself. I have used "Modern Control Engineering" by Katsuhiko Ogata, but really, mostly I just learned from the Internet.
My initial motivation was to remove configuration. I have found, historically, that giving people options to configure very complex software more often than not results in problems, especially after the original developers leave. More often than not the new people will not understand the implications of, or interactions between, various settings, and this will just cause problems. So my aim became to remove all options from the software and make sure it can perform autonomously and recover from a wide range of possibly unknown situations. Which is exactly what control engineering is about, if you think about it!
If one day you write a blog post / article about what you're doing, it'd be interesting to read :- )
(What if you start collecting email addresses from people who want to read such an article? And if one day you write one, then you can email them? — My email is in my profile, if you'd like to add it to such a list.)
Dunno... most people I meet seem to be put off by my software development ideas. I stick to them because they seem to work very well even if it initially creates a lot of friction between me, the team and the management.
Where to start... I think test-driven development and unit testing do not deliver their promised value; instead they waste time and make software more difficult to refactor. I find functional end-to-end testing much more effective and cost-effective.

I think code reviews are bad because they don't deliver on their promised value; individual craftsmanship (people's ability to deliver on their own) and pair programming are better. I think microservices are the wrong approach for 99.9% of projects, and I have fixed a bunch of projects by rolling the software back into monoliths.

I believe bugs can only be truly reduced by taking responsibility for writing correct code in the first place; anything afterwards is expensive and ineffective (you can only remove bugs that manifest themselves, everything else stays). I don't compile/run my code multiple times a day -- I write it all in one go, sometimes for weeks, then run it. If it works, it means I know what I am doing; if it doesn't, it is a failure of my process. Where most devs just fix the bug and restart the app, I will start an investigation into why my process failed and how I need to fix it -- NTSB-style.

I believe that nobody understands what Agile is, and the way it is applied is damaging to the software industry. I don't believe in linear development progress -- I design my apps top-down while programming them bottom-up, until top-down and bottom-up meet. I structure my development process around rewriting the software -- I write the first version and then refactor/rewrite to remove any unnecessary complexity until I am happy with it. There is no working software for a long time, and then suddenly it is complete. And when it is complete there are no more testing stages, no bugs to fix -- it is truly complete.
So you see, I am probably too alien a developer to give advice to the general population of developers.
And when I do talk about my ideas, it usually ends in flame wars or drowns, downvoted to hell, because people tend to downvote anything and everything that does not confirm their existing worldviews.
> At first thought a moon of a moon didn’t seem to be possible as the gravity of the moon’s planet would certainly make an orbit around a moon unstable.
No, that's not true. Moons can have moons with stable orbits. Not every moon can, though.
If you think about it, the Moon already orbits Earth, which orbits the Sun.
Our Moon could have satellites, but the issue is that it is lumpy. It is not uniform in density, and this causes huge gravitational anomalies which prevent long-term stable orbits. If the Moon were uniform, it could have stable orbits.
As much as I detest MongoDB's immaturity in many respects, I have found a lot of features that actually make life easier when you design pretty large-scale applications (mine was typically doing 2GB/s of data out of the database, which I like to think is pretty large).
One feature I like is the change event stream, which you can subscribe to. It is pretty fast and reliable, and for good reason -- the same mechanism is used to replicate MongoDB nodes.
I found you can use it as a handy notification/queueing mechanism (more like Kafka topics than RabbitMQ). I would not recommend it as any kind of interface between components, but within an application, for its internal workings, I think it is a pretty viable option.
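For illustration, a minimal sketch of subscribing to a change stream with PyMongo. The database/collection names are made up, and a replica set is required, since the stream rides on the same oplog used for replication:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
notifications = client.myapp.notifications  # hypothetical collection

# The argument is a normal aggregation pipeline; here we watch inserts only.
with notifications.watch([{"$match": {"operationType": "insert"}}]) as stream:
    for change in stream:
        print("new notification:", change["fullDocument"])
```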
Funny enough, we designed one subsystem to use RabbitMQ to enforce linearly committed records into MongoDB, to avoid indices. I.e. the routes in RabbitMQ would ensure a GUID-tagged record was spatially localized with other user data on the same host (so that inter-host shovel traffic is minimized).
Depends on the use-case, but the original article smells like FUD. The C client lib allows you to select how the envelopes are bound/ack'ed on the queue/dead-letter route in the AMQP client-consumer (you don't usually camp on the connection). Also, the expected runtime constraint should always be included when designing a job queue, regardless of the underlying method (again, expiry-based default routing is built into RabbitMQ)...
MongoDB's change stream is accidentally very simple to use. You just call the database and get a continuous stream of the documents you are interested in. If you need to restart, you can restart processing from a chosen point. It is not a global WAL or anything like that; it is just a stream of documents with some metadata.
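A sketch of what that restart looks like in PyMongo. load_token() and save_token() are hypothetical stand-ins for whatever durable storage you keep the token in:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
coll = client.myapp.notifications

token = load_token()  # hypothetical: last saved resume token, or None
with coll.watch(resume_after=token) as stream:
    for change in stream:
        process(change)            # hypothetical handler
        save_token(change["_id"])  # each event's "_id" is its resume token
```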
> If you need to restart, you can restart processing from a chosen point
One caveat to this is that you can only start from wherever the beginning of your oplog window is. So for large deployments, and/or situations where your oplog's on-disk size simply isn't tuned properly, you're SOL unless you build a separate mechanism for catching up.
Yep, absolutely. But the side effect I am referring to (and probably wasn't clear enough about) is that the oplog is globally shared across the replica set. So even if your queue collection tops out at like 10k documents max, if you have another collection in the same deployment that's getting 10mm docs/min, your queue's window is also gonna be artificially limited.
Putting the queue in its own deployment is a good insulation against this (assuming you don't need to use aggregate() with the queue across collections obviously).
I do agree, but listen... this is supposed to be a handy solution. You know, my app already uses MongoDB; why do I need another component if I can run my notifications with a collection?
Also, I am a firm believer that you should not put actual data through notifications. Notifications are meant to wake other systems up, not to carry gigabytes of data. You can park your data in another store and notify: "Hey, here is the data for 10k new clients that needs to be processed. Cheers!"
The message is meant to ensure correct processing flow (message has been received, processed, if it fails somebody else will process it, etc.), but it does not have to carry all the data.
I have fixed at least one platform that had "reached the limits of Kafka" (their words, not mine) and "was looking for expert help" to manage the problem.
My solution? I changed the publishing component to upload the data as compressed JSON to S3 and post a notification with some metadata and a link to the JSON, and changed the client to fetch and parse that JSON. Bam, suddenly everything works fine, no bottlenecks anymore. For the cost of maybe three pages of code.
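For illustration, a sketch of that fix -- the classic "claim check" pattern. The bucket and topic names are made up, and `producer` stands in for a Kafka producer configured with a JSON value serializer:

```python
import gzip
import json
import uuid

import boto3

s3 = boto3.client("s3")

def publish_batch(records, producer, bucket="my-payload-bucket"):
    # Park the heavy payload in S3 as compressed JSON...
    key = f"batches/{uuid.uuid4()}.json.gz"
    s3.put_object(Bucket=bucket, Key=key,
                  Body=gzip.compress(json.dumps(records).encode("utf-8")))

    # ...and publish only a tiny pointer message with some metadata.
    producer.send("client-batches", {
        "count": len(records),
        "location": f"s3://{bucket}/{key}",
    })
```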
There are few situations where you absolutely need to track so many individual objects that you have to start caring whether hard drives are large enough. And I have managed some pretty large systems.
> I do agree, but listen... this is supposed to be a handy solution. You know, my app already uses MongoDB; why do I need another component if I can run my notifications with a collection?
We're in agreement; I think we may just be talking past each other. I use Mongo for the exact use case you're describing (messages as signals, not payloads of data).
I'm just sharing, for others who may be reading, a footgun that bit me fairly recently in a 13TB replica set dealing with 40mm docs/min ingress.
(It's a high-resolution RF telemetry service, but the queue mechanism is only a minor portion of it and never gets larger than maybe 50-100 MB. Its oplog window got starved because of the unrelated ingress.)
You have a single Mongo cluster that's writing 40M docs a minute? Can you explain how? I don't think I've ever seen a benchmark for any DB that's gotten above ~30k writes/sec.
Sorry for the late reply here, just noticed this. You're correct, that figure was wrong; the metric was supposed to be per day, not per minute. It's actually closer to 47mm per day now, so roughly 33k docs/min.
> I don't think I've ever seen a benchmark for any DB that's gotten above ~30k writes/sec
Mongo's own published benchmarks note that a balanced YCSB workload of 50/50 read/write can hit 160k ops/sec on dual 12-core Xeon-Westmere w/ 96GB RAM [1].
Notably, that figure was optimized for throughput, and the journal was not flushed to disk regularly (all data since the last WiredTiger checkpoint would be lost in the event of a failure). Even in the durability-optimized scenario, though, Mongo still hit 31k ops/sec.
Moving beyond just MongoDB, Cockroach has seen 118k inserts/sec on an OLTP workload [2].
It's a strawman. If a company decided they were going to hire someone, they wouldn't suddenly decide to spend twice as much hiring that person just in case they have a layoff in the future. It just doesn't follow.
There is only one way the sea can currently rise due to weather, and that is glaciers melting.
Another reason (but not due to weather) is that when land somewhere goes up, it displaces water everywhere else. So, for example, if the land is still rebounding from the ice age, we should see ocean levels going up everywhere except on the pieces of land that are recovering from the weight of the glaciers that are no longer there.
Mind that the Arctic is not causing sea level rise. Any ice that is floating on water will not change the water level when it melts. (I know this is somewhat unintuitive, but it comes directly from Archimedes' principle.)
So we are talking basically about Antarctic ice and Greenland, because these are by far the largest bodies of frozen water that are supported by land rather than floating on the ocean.
I think it should be pretty easy to observe how much of that water melted or slipped into the sea.
I also think that currently, coastal erosion is mostly caused by changing weather patterns. Basically this comes down to wind blowing from different directions, at different speeds, and with different variability, and these changing patterns mean coasts are eroding in different places than before.
First of all, most of the temperature rise is only happening close to the surface, with the average surface temperature rise being only about 1.5F (or about 1C) since 1901.
Furthermore, at around 4C, which is what deep ocean water is close to (everything below 200m is essentially 4C), thermal expansion is almost nil. For colder water, thermal expansion is actually negative.
4C is when water is at its densest. It is not an accident that all oceans are 4C at depth: 4C water sinks to the bottom, and anything colder or warmer than 4C floats up. This remarkable property of water is what makes even fairly shallow water fantastically stable in temperature -- a lake more than a couple tens of meters deep is likely to be 4C at the bottom throughout the year, whether it is frosty winter or hot summer above it, unless some kind of powerful event is able to mix the water in the lake.
Now, the small temperature differences will definitely have outsize effects on water circulation, ocean currents, life and weather. But I doubt they will cause meaningful sea rise unless somebody can calculate otherwise?
That's mostly accurate, but the nuances are significant and lead to different conclusions. For example, the hypolimnion may be much warmer than 4C in lakes in warmer areas. More importantly, tropical ocean water is above 4C down to roughly 2km, and not only is that depth expected to increase, but so is the depth of warm water as you go north. https://en.wikipedia.org/wiki/Thermocline
The important thing to remember is that even a 1-part-in-1,000 decrease in density across 2km of depth = 2m of expansion. Ballpark estimates aren't enough; you really need fairly detailed simulations to get any significant accuracy. Actually doing such simulations shows meaningful sea level rise from thermal expansion, at ~0.07 inches per year, or roughly half the current rate of increase. This might not sound like much, but consider the volume of sand you need to replace to maintain beaches, etc.
No, that is mostly inaccurate. Thermal expansion is small, but there is an awful lot of water. As you point out yourself, thermal expansion contributes about half the sea level rise. Oceans absorb energy just like the atmosphere does and this effect has been known for quite a while (e.g. https://www.nature.com/articles/330127a0 ).
The average ocean surface temperature is about 20°C, and the thermal expansion coefficient at that temperature is 0.000207/°C (https://www.engineeringtoolbox.com/water-density-specific-we...). If I have my google-fu and math right, that's about 1cm/°C for a 50m-deep water column.
Thermal expansion of surface water is not negligible.
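A quick sanity check of that arithmetic:

```python
alpha = 0.000207   # volumetric expansion coefficient near 20 °C, 1/°C
depth_m = 50.0     # depth of the warm surface column considered

rise_m = alpha * depth_m                # rise per degree of warming
print(f"{rise_m * 100:.2f} cm per °C")  # ~1.04 cm/°C, as stated above
```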
Except water in the oceans is at 4C (at least almost everything deeper than ca. 200m). And in the vicinity of 4C, thermal expansion is negligible. This graph should explain why: https://images.app.goo.gl/FXzvTkPvE9dYxoUA7
Umm. I'm pretty sure that if water sitting above the water line is melted into the water below it, the overall water line will rise. You can directly observe this: a glass full of ice that starts with no liquid water will melt into a glass of liquid water.
I could be wrong about this (it's been a while since I took chemistry), but I think the ice has to be floating for Archimedes' principle to apply.
You can fill a glass of ice water right up to the brim, and it won't spill over as the ice melts. But only if the ice is floating in the water. That's because the ice's mass pushes down on the liquid water, displacing an amount of water whose weight equals the ice's own.
Ice is about 91% the density of water. Say you have water at height 0, and you add water equivalent to +1 unit of height. If it's in liquid form, the height obviously goes up by +1. If it's in ice form, there is +1.09 height worth of ice (because water expands when it freezes), but it only displaces water up to +1 unit of height in order to support its weight through buoyancy. The overall change in height is +1 unit regardless of whether it's liquid or ice.
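A quick check of that buoyancy arithmetic (densities in g/cm³):

```python
rho_water = 1.000
rho_ice = 0.917               # ice is roughly 91% the density of water

mass = 1.0                    # add one unit of mass of H2O either way
as_ice = mass / rho_ice       # ~1.09 volume units while frozen
displaced = mass / rho_water  # floating ice displaces its own weight of water
as_liquid = mass / rho_water  # volume once melted

print(round(as_ice, 2))       # 1.09 -> the excess sticks up above the line
print(displaced, as_liquid)   # both 1.0 -> water line unchanged when it melts
```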
No, it's not just due to weather (you probably mean climate anyway). For example, ice melting in one place, say Antarctica, will affect sea levels elsewhere on the planet because of weaker gravitational forces where the ice used to be. Geoscience is complex and measuring changes on such a global scale is not "pretty easy", even if it's only sea level changes. It's nothing like a bathtub or elementary school physics.
Not to rant but this is one of those threads again. The majority of comments contain misinformation.
Products like this are great -- but only if you already know the technology, know what is happening, and are using it to basically eliminate a bunch of coding or setup tasks that you would otherwise have to do.
The problem is that this is only great for a certain type of knowledgeable developer who is also setting up the system.
The next person that comes along, or a junior developer who does not have a grasp of the underlying tech, will just see a bunch of magic. But they will be required to be productive with it.
So if the original devs leave and you don't get people with a grasp of the underlying tech, the new people will start accumulating technical debt at an ever-faster rate without ever being able to pay it back. Because with so little knowledge, and the expectation that they still perform, whenever something small happens that requires them to learn the stack, they will skip the learning and just move stuff around hoping some random change will fix it.
So I will amend my original statement:
"Products like this are great." But only if you are working on the project alone and don't expect anybody else to join it in the future.
Dubious uses of carbon nanotubes only surpassed by Eliezer Yudkowsky. Though, I wonder if the real message of the series is "don't let women get into positions of power, because they lack the resolve in acausal bargaining related to existential second strikes", and it just went over everyone's head.
It is usually harder to get rid of any weeds while your crop is growing in the field. After harvest, you just get rid of everything, turn the soil (but I don't know anything about rice farming), and you kinda get a fresh start.
ON THE OTHER HAND, we already have robots that can easily identify weeds with a camera and zap them with lasers or other shit. That just seems so much more reasonable in the near future than trying to harvest individual rice grains off a perennial plant that cannot be damaged, so that the plant can provide sun cover to prevent weeds... just not seeing it.
Also, Chinese farmers harvest rice up to three times a year. And the harvest itself is much easier if you can grab entire plants. So I am not convinced anybody is going to get interested in this.
The assertion that a perennial plant "cannot be damaged" is also incorrect; many perennial crops benefit agronomically from a "trim", and even non-perennials have economic circumstances where it makes sense to "damage" the crop, e.g. grazing winter crops in the late fall.
With the exception of trees, most plants that establish after their taller peers do not try to dominate the canopy. They have already lost that fight. Instead they either stay low and work with the limited light available, or they aggressively grow in the spring to complete their entire lifecycle before the deciduous canopy closes at the beginning of summer. They might steal a little fertilizer, and some water, but the jury is still out on whether some plants conserve as much water as they use. It's a fuzzy enough area that you can find people who claim that some plants increase total available water.
We are just beginning to fully appreciate all of the ways trees have to ladder up from the forest floor to the canopy, and some of the things we perceive as competition may be a misunderstanding.
Most problem weeds on the prairies where I farm are taller than the planted crops when mature, particularly as yield benefits have been gained by shortening more and more varieties over time. A pigweed or a smartweed will outrun a wheat crop with a week of head-start like nothing. Corn can't take any competition at all; I had a planter row of corn in my barley one year, and I figured the corn would simply outgrow the barley and leave it in its proverbial dust... quite the opposite: I sprayed the barley out at about the 4-leaf stage, and the corn was still a full 2 feet shorter than the row beside it at maturity!
That all said, of course, a natural prairie is different; the tall-grass prairie - as far as I understand it - competes for water under the ground more than for light above it - shade can actually help! (Lord knows I mow under my trampoline twice as often as the rest of the lawn.)
This is also true in nature. Weeds (or think invasives, in an ecological sense) tend to be plants that do best in disturbed contexts like fires, landslides, after logging, or, in the agricultural sense, tilled soil. Invasives tend to struggle to gain a foothold in established, healthy ecosystems.
With typical rice farming in Asia, you see manual transplanting of each seedling into small fields by women and children. This is back-breaking work and extremely slow. I can't think of another crop both this widely cultivated and this painstaking. Therefore, not having to do this is a huge win.
China has a multitude of climatic areas. Some harvest rice once per year, others two or three times per year. Even within Yunnan, the province in which tests are being made, all such areas exist.
Many rice farms are small-scale and at the bottom of river valleys, terraced up hillsides, or in otherwise inaccessible locations, which would pose substantial scale-related and physical challenges to automation. Try running an automated robot over this topography: https://upload.wikimedia.org/wikipedia/commons/7/70/Terrace_...
Rice planting machines exist; they don't look much different from home lawnmowers. IIRC, in spring farmers grow a batch of rice sprouts in disposable cups, load them into the machine, and it lays them down as the old man drives back and forth across his field.
I believe "flat" rather than "rich" is more of the limitation. Machines cost almost nothing in China, but the access or field topography very often makes them untenable.
Are you talking about the machines that can mechanize the planting of rice seedlings? I find it hard to believe that poor farmers in China can afford these machines, unless the state is providing them for free, or with zero-interest loans and very long repayment periods.
Also, most rice farming in China isn't done on difficult topography. I call that stereotype the "National Geographic" effect. If you look where most rice farming is done, it isn't very steep or mountainous. See more here: https://www.statista.com/statistics/242360/production-of-ric...
China makes almost all of the world's EVs, the world's motors and the world's wheels. Putting together any kind of agricultural vehicle is cheaper in China than elsewhere. That's just how it is. How do I know this? I just returned from seven years making robots in China. I will stop short of looking them up and pricing them, but feel free to do so.
AFAIK the Agricultural Bank of China is the largest bank in the world by some measures. It almost certainly has the most branches, and these are overwhelmingly located in agricultural towns and villages. Recall also, in communism sharing equipment is normal. People don't all need to buy their own. Chinese villagers like villagers elsewhere help one another.
Yes, flat areas exist. But they are not the test area, Yunnan. And if you were to look at the historical spread of rice farming (a subject of considerable academic debate), you would notice that all academic suggestions of the earliest invention of rice agriculture itself, with very few exceptions, appear to disperse through or via the test area, because it is the natural headwaters of the rivers feeding the majority of mainland East, Southeast and (very close by) South Asia, including the Yangtse, the Pearl River, the Red River, the Mekong, the Salween, the Irrawaddy, and the Brahmaputra. Given that we know river valleys formed natural communication paths in ancient times, this gives us a fair case for the general dispersion of rice farming technology specifically through steep terrain areas, and specifically through the test area.
It might be dated information, but I'd heard of 3 crops per year going back at least as far as 1900 (Farmers of Forty Centuries) -- but not that it was three rice crops per year. Instead, a rotation of different crops in the same field.
One of the tricks with rice is you can germinate it in one field, then transplant it to something like 4-6 times the same space to grow to maturity.
What about the people practicing permaculture and regenerative agriculture? There's been a big movement there to perennialize crops, and there are well-known patterns and practices from there to work with perennials.
You should see how they harvest lavender. If I were an anthropomorphic plant, I'd probably lose my lunch after seeing one of those videos.
This is my impression of why we can get away with this: annuals tend to gamble with weather conditions. There's enough seed bank stored up from previous years that if a scouring windstorm breaks all of the stalks in an area, the seed bank can help recovery next year; and if that's not enough, some seeds will blow in from the edges eventually, and ten years from now you can't tell.
Perennial plants have to be sturdier. Only some, such as alpine species, are adapted to drop damaged limbs. They are used to being jostled by hail, storms, and herd animals, so the insult is less permanent.
I don't know how that translates to perennial grasses, except that most such grasses can and sometimes do burn to the ground, and regrow each year from rhizomes. Half the plant survives each growing season, and the other half is sacrificial.
Industrial farming of the sort you describe is destroying arable land, and relying on petrochemicals and strip mines to keep marginal land on life support. Properly managing the land using a mix of perennial crops and occasional rotation to pasture restores the soil, builds fertility, and requires very little in the way of inputs. Healthy plants in a polyculture setup where niches have been pre-filled also resist weed invasion better than monocultures.
We do a lot of stupid shit in farming for the sole reason that farm machinery is specialized, and proper land management costs more (in the short run, in the long run the land is much more productive).