Mathematicians Find Wrinkle in Famed Fluid Equations (quantamagazine.org)
240 points by digital55 on Dec 22, 2017 | 120 comments



I'm struggling to understand the significance of this, at least as the N-S equations are used in the real world. Many years ago, I interned with the Navy writing Fortran code for fluid dynamics simulations on submarine hulls, and IIRC there were flow dynamics we observed consistently in the real world (e.g. oscillating vortices) that were fundamentally inconsistent with the results coming from our N-S calculations (which would say there could not be an oscillation because it was a steady state flow). There was always a hand-waving of "N-S is actually right; our computer models are just not fine-grained enough." But at the same time, given the computational limits of our grids (particularly at the time, 25 years ago), it was understood and accepted that N-S would yield only an approximation. That's only one data point, but it certainly seemed to me that no one was relying on N-S as an accurate predictor of motion (as you would a Newtonian model of a ball rolling or something like that), but rather just as a first order approximation. If that impression is accurate, a result that says N-S isn't always accurate is kind of a statement of the obvious. What am I missing?


The beginning of the article does a horrible job of explaining the big question. The Millennium Prize problem[1] is a math problem: whether the Navier-Stokes equations always have a solution with certain properties. Some possible answers to the problem would mean there are some physical situations where they don't produce any prediction at all about what might happen next, or they produce multiple predictions, or they produce physically implausible predictions. (If you have a strong math background, the official problem description might be interesting to you; it's a bit beyond me.[2])

The article does get around to explaining it better if you keep going.

> it certainly seemed to me that no one was relying on N-S as an accurate predictor of motion (as you would a Newtonian model of a ball rolling or something like that), but rather just as a first order approximation

Numerical methods for solving the Navier-Stokes equations are approximate and therefore diverge from the correct solution. The same is true for a ball rolling down an incline, but the inaccuracies are smaller than you would ever care about in the real world. What your colleagues were saying about the Navier-Stokes equations is that the numerical error was often large enough that the calculated solutions were known to diverge from mathematical reality in significant ways, and therefore seeing them diverge from physical reality was consistent with physical reality and mathematical reality being the same.

[1] http://www.claymath.org/millennium-problems

[2] http://www.claymath.org/sites/default/files/navierstokes.pdf


As ekelsen mentioned, you probably were not doing a direct numerical simulation (DNS; using NS specifically) and instead were using an approximation to NS which has much lower computational cost/complexity but also reduced accuracy. Good LES would converge to the DNS result if the grid were fine enough. The term for this sort of error is "model inadequacy error", that is, error from the model being wrong.

My impression is that so far DNS matches experimental results well, given that the experiment actually represents the situation of interest. For example, I am aware that at least some "Kelvin-Helmholtz" experiments don't match DNS well at all, and the DNS is considered more credible than the experiments because in the DNS case you know all of the inputs, whereas in the experiments the initial conditions might be close to the desired case, but apparently not close enough. (The Kelvin-Helmholtz instability is of fundamental importance but is not easy to obtain in isolation experimentally.) "Sensitive dependence on initial conditions"/chaos means that close may not be enough.

There also is the issue of numerical error from the fact that you are using discrete equations, but usually simulators take steps to check this is negligible. (Which may not be enough.)


I think it's likely you were solving the Reynolds-averaged NS - https://en.wikipedia.org/wiki/Reynolds-averaged_Navier%E2%80... and dropping the turbulence term to get what you call "steady state".


> That's only one data point, but it certainly seemed to me that no one was relying on N-S as an accurate predictor of motion

You're confusing how models are used by (some) engineers (in some applications) with how models are used by physicists and mathematicians. Engineers have to deal with all kinds of uncertainties, from material parameters to use cases to limit states to wear and fatigue and geometrical deviations etc etc etc. Therefore, engineers develop robust designs to comply with all design requirements under any plausible and probable scenario given a design life. To accomplish this, engineers use models to provide approximate but accurate results that are on the safe side of any limit state. Yet, even though designs need to be robust, simulations do need to be accurate.

These findings suggest that low-resolution Navier-Stokes simulations that were believed to be on the safe side may actually not be on the safe side. These findings are important, as they will elicit significant changes in how Navier-Stokes simulations are used in cases where accuracy matters.


N-S aren't "right" though. The derivations make a lot of good assumptions that break down in certain materials/situations.

What is right is its starting point on the conservation of momentum and energy. Then it makes certain assumptions about the stress-tensor which are not necessarily true. Meaning, you can derive the N-S from consv. of mass and E and a certain stress tensor (ST), but it's not derived from a universal ST.
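
For concreteness, the usual assumption is that the stress is linear in the rate of strain (the "Newtonian fluid" closure), something like, in LaTeX notation:

    \sigma_{ij} = -p\,\delta_{ij}
                  + \mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)
                  + \lambda\,\delta_{ij}\,\frac{\partial u_k}{\partial x_k}

Non-Newtonian fluids (polymer melts, blood, etc.) break that linearity, which is one way the stress-tensor assumption fails.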


NS are a pretty good model for the underwater scenarios a Navy would be interested in, so I think they can be called "right" here. The fluids are regarded as "Newtonian" so the stress tensor model is good, and the density of the fluid is high enough that the continuum approximation is good. The largest source of error is likely the approximations made to model the turbulence, or in other words, reduce the computational complexity while also reducing accuracy.


What I got from the article is that there's a possibility of a chaotic result: it may not be possible to get computed results which are arbitrarily close to the real-world results. I was reminded a bit of the famous Lorenz system (originally for weather prediction, IIRC), which turns out to be chaotic under some conditions.
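
For anyone who hasn't played with it, here's a minimal sketch of that sensitivity (plain NumPy, crude forward Euler, the classic Lorenz parameter values; the step size and perturbation are just illustrative):

    # Two Lorenz trajectories started 1e-9 apart end up completely decorrelated.
    import numpy as np

    def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0/3.0):
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])      # tiny perturbation
    dt, steps = 0.001, 40_000               # integrate to t = 40
    for _ in range(steps):
        a, b = lorenz_step(a, dt), lorenz_step(b, dt)
    print(np.linalg.norm(a - b))            # O(1)-O(10): the tiny difference has blown up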


I think it's not a chaotic result as such (as the Lorenz weather prediction models you're referencing are) - which would mean large differences in outcome from very small differences in input, but a case where multiple different outcomes can arise from identical inputs.


> but a case where multiple different outcomes can arise from identical inputs.

If they are nearly identical but not exactly equal, and these small differences in the input lead to large deviations in the output, then that's pretty much the definition of a chaotic system.

Edit: I've just browsed through the paper in question and it actually demonstrates that an approximate (weak) solution is not unique, which means that the exact same inputs may have multiple weak-form solutions.


"Weak" here does not mean numerical approximations. These are mathematicians so any quantitative approximation would be bounded o(1) otherwise the work would be meaningless. One should think of "weak" as in constraints, for example constraints at lower spatial resolutions (but precise).


Right. It’s already known that NS solutions can be chaotic. It would be a big deal if they turned out to be non-unique.


"When I meet God, I am going to ask him two questions: Why relativity ? And why turbulence ? I really believe he will have an answer for the first." - Werner Heisenberg.


"I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic." - Horace Lamb


Based off the abstract, there's one class of weak solutions, the Leray solutions, which are known to exist. And now they've shown for a different class of weak solutions to NS that the solutions are not unique. Is that right?


Yeah, they relaxed one of the criteria for Leray solutions (the energy inequality) and proved that these solutions are energetically unstable, and that doesn’t strike me as very profound (I need to go read the paper though, I haven’t had time yet).

Just mentioning at the bottom of the article “yeah now we are going to see if the same thing applies to proper Leray solutions, we think it does” means close to nothing, honestly.

And even if it does, the article’s author is right when he remarks that this can be seen entirely as a warning against using approximations that are too broad or coarse.

I’ll add that I find it funny that nowhere in the article (that I can see, but I am reading on mobile Safari, so maybe...) is the Navier-Stokes partial differential equation even displayed, and the relationships it defines are not explained (other than some waffle about ‘derivatives’).
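
For reference, since the article never shows it: the incompressible form these discussions are about is, in LaTeX notation,

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
        = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
    \qquad \nabla\cdot\mathbf{u} = 0

where u is the velocity field, p the pressure, rho the density, nu the kinematic viscosity and f any body force. The first equation is momentum conservation, the second is incompressibility.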


That's correct, while taking away the entropy inequality as well. The thing is, that very inequality is one of these laws of thermodynamics we so love, so I kind of see this as hot air.


Aye either they have upturned a theory as august as general relativity or they’ve thrown the baby out with the bath water somewhere.


What's funny is the article says this about the Navier-Stokes equations: "The equations work. They describe fluid flows as reliably as Newton’s equations predict the future positions of the planets"

Newton's equations do not in fact reliably predict Mercury's orbit, and it took GR to do it. Lazy journalist!


The Navier-Stokes equations assume that the medium is continuous even at infinitely small scales, which is obviously not the case for natural fluids, which are made of discrete atoms. Thus the equations are only correct at sufficiently large scales. They work fine for describing the airflow around an aeroplane, but not the airflow around the head of a hard drive, which is small enough that the finite size of atoms must be taken into account. On a more visible scale, you have Brownian motion of small particles, which can be seen even in a low magnification microscope. The Navier-Stokes equations predict that this effect does not exist. The equations are still a useful approximation in a lot of cases.
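
A rough back-of-the-envelope version of that scale argument (Python; the mean free path and the two lengths are approximate, order-of-magnitude numbers):

    # Knudsen number Kn = mean free path / characteristic length.
    # The continuum (Navier-Stokes) assumption is usually taken to hold for Kn << 0.01.
    mean_free_path = 68e-9                       # m, air at standard conditions (approx.)

    for name, length in [("aeroplane wing chord", 3.0),
                         ("hard drive head fly height", 10e-9)]:
        kn = mean_free_path / length
        print(f"{name:28s} Kn = {kn:.1e}")
    # Wing: Kn ~ 2e-8, continuum is fine.  Head: Kn ~ 7, continuum breaks down.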


If the equations are only useful approximations anyway, why are edge case breakdowns so important or surprising like the article seems to indicate? If they are as imprecise as Newton's laws, why are mathematicians looking for "unfailing" precision from them?

Or is the article simply wrong in the initial few paragraphs?


The Clay problems are pure math problems. Any approximate application they have to the real world is just an accident.


I think part of it is just mathematical interest, and maybe part of it is hope for more efficient or otherwise better approximations for fluid behavior.


How do the Navier-Stokes equations fare the other way, at the macro scale? E.g. in the context of meteorology. Asking because I had a discussion with somebody recently where they claimed the fundamental flaw in the science underlying climate change science is over-reliance on NS at macro scale as a way of predicting climate behaviour, or something. I took it to be baloney, but I'm wondering if there are some strands of truth to it ...


Fluid equations assume that there's a single characteristic velocity for the atoms at each point in space. So, for physical systems where local velocity distributions are wide or even multi-modal, fluid equations won't capture the physics.

In plasma physics, Laser Wakefield dynamics is an example of a system that can't be modeled as a fluid.

I sort of doubt these considerations apply to the atmosphere, but this is one of the main heuristics for when you can't use a fluid equation.


As you intuit, there is no reason to presume that Navier-Stokes would be unreliable at macro scales relevant to meteorology, simply because it is so thoroughly tested in experimental settings and to such sensitivities that it is known that all relevant factors are accounted for.

(Of course, why would one presume that if it is inaccurate at planetary scales, it biases observations towards the climate change narrative? It's just the typical “God of the gaps” kind of argument.)


“God of the gaps” yeah that’s pretty much what I thought!


Whoever you were talking to doesn't seem to know what they were talking about.

alephnil mentioned a real problem, but the solution in that case is to not use NS. From a practical standpoint NS is a good model of fluids in many instances because there is a certain minimum scale of motion due to viscosity (the Kolmogorov scale) and this usually is much larger than the size of the atoms or molecules. If this is true then a continuous approximation is fine. No present climate simulation can afford to compute everything down to that scale, so a low pass filter is applied to filter out the small scales and turn their effect on the large scales into a single term that can be modelled. This turbulence modeling approach is called large eddy simulation (LES), and it relies on the fact that outside of certain special cases (e.g., major chemical reactions) the small scales have a universal behavior. (Kolmogorov was the first to propose that the small scales are universal back in 1941.) This approach works pretty well usually. If the person you were talking to said the small scale model was wrong, I'd give them more credit, but this approach is generally the most accurate moderate cost turbulence modeling approach.
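
For a feel for the numbers, the Kolmogorov length is eta = (nu^3 / epsilon)^(1/4); a quick sketch with an illustrative (made-up but plausible) dissipation rate:

    # Kolmogorov scale: the smallest scale of turbulent motion.
    nu  = 1.5e-5      # m^2/s, kinematic viscosity of air (approx.)
    eps = 1e-3        # m^2/s^3, turbulent dissipation rate (illustrative value)
    eta = (nu**3 / eps) ** 0.25
    print(f"eta ~ {eta*1e3:.1f} mm")   # ~1 mm: far above molecular scales,
                                       # far below any climate-model grid cell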


The Navier-Stokes equations do not in fact reliably predict the flow of all real-world fluids. The comparison to Newton's equations is perfectly apt. Good enough for most applications, but may be imprecise in some cases.


Sounds like a perfectly accurate statement to me, in both directions.


Based on my fairly layman understanding it could be as simple as the "weak solutions" not being as useful as they were thought to be.


That seems to be one of the explanations provided in the article.


The laws of thermodynamics are quite a bit more august than GR


I’ll leave such distinctions to more seasoned minds.


Well general relativity breaks down at very small scales, and imparts the need for quantum mechanics ...


If you are really interested, somebody has bothered to make a special-relativity & quantum-indeterminacy compatible formulation of Navier-Stokes suitable for calculating shockwaves in extremely dense media such as the neutronium neutron stars are thought to be made of (where the speed of sound is comparable to the speed of light in a vacuum, hence the relativity, and the matter is degenerate and in states of superposition, hence the quantum indeterminacy). I don’t have the reference right with me at the moment (mainly because I just looked at it and thought “oh horror!” and averted my eyes) but I know where and how to dig it up for you if you want.

Of course... for obvious reasons it hasn't been experimentally verified...

For the record: I also saw it coded in FORTRAN. Yeah. It's like catching grandma in starkers.


Sir, would you please post a link to this code?


It wasn’t my code to link to and I probably last saw it in 1996, back in the era of dot-matrix printouts on green-and-white continuous paper (on a Windows NT 3.5 machine running FORTRAN PowerStation, if you care to commiserate). I think I can dig up the paper where the equation was published, though.

EDIT: This isn't the paper I had in mind, and the equation presented is ‘merely’ relativistic, but it gives you a feel for the beast: https://arxiv.org/pdf/astro-ph/0402502.pdf (see section C).


Thank you! If you have more time to look for the original, I would find it exceptionally interesting; however I understand if the keywords are lost to you.

Can you explain roughly what the quantum corrections were?


The quantum corrections basically dealt with the fact that the waves occurred on the scale at which quantum indeterminacy obtains and as such you had to account for the fact that the neutrons had both been moved and had not moved, leading to a superposition of states.


Was this for starquake or GRB modeling?


Isn't that possible though if the fluid isn't a closed system? (which it probably isn't)


All of the above arguments obtain if the fluid is a closed system and is only acting as a result of the forces it itself is exerting upon itself, subject to the various boundary conditions around it.

Of course if you open the system various outcomes are possible depending on what the external influences do to it (for example, the two solutions mentioned for the still water become entirely plausible if there's somebody roaming around who might put a lighter under the glass and cause the water in it to boil, but that isn't the point of the exercise).


Who would have guessed that this story about the behavior of fluids seems to kind of leak.


Btw this is work relating to one of the famed Millennium Prize Problems - http://www.claymath.org/millennium-problems/navier%E2%80%93s...


As the article pointed out.


> Using this approach, Buckmaster and Vicol prove that these very weak solutions to the Navier-Stokes equations are nonunique. They demonstrate, for example, that if you start with a completely calm fluid, like a glass of water sitting still by your bedside, two scenarios are possible. The first scenario is the obvious one: The water starts still and remains still forever. The second is fantastical but mathematically permissible: The water starts still, erupts in the middle of the night, then returns to stillness.

Also permissible, and also vanishingly unlikely, under quantum theory.

It's a stretch, but I wonder if it's reasonable to think of data points in the vector field describing fluid motion as probabilities rather than definite measurements. That might allow their behavior to be modeled with different mathematical tools.


You are on the right track. You have reinvented the basic idea behind the "Reynolds-averaged Navier–Stokes" (RANS) equations, dating back to 1895. The goal is to predict an average velocity field, and this requires modeling unclosed terms (the "turbulence problem" in one respect). RANS and statistical methods in general are the most popular turbulence modeling approaches and much of turbulence theory is based around these ideas. My opinion is that RANS answers are typically what you want, but the models don't work that well. LES (large eddy simulation; applying a low pass filter to NS) is a newer approach that is gaining popularity, and makes physical sense, but is a lot more computationally expensive. And direct simulation (DNS) is an option too, but the computational costs are prohibitive outside of some relatively simple academic problems.
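
A toy illustration of the averaging idea, using a synthetic velocity signal (the decomposition u = U + u' and the correlation <u'v'> are the standard definitions; the signal itself is made up for the example):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    u_fluct = rng.standard_normal(n)
    v_fluct = 0.5 * u_fluct + rng.standard_normal(n)   # correlated with u
    u = 10.0 + u_fluct      # streamwise velocity: mean flow + fluctuation
    v = 0.0 + v_fluct

    U, V = u.mean(), v.mean()                        # Reynolds (here: time) average
    reynolds_corr = np.mean((u - U) * (v - V))       # <u'v'>, the correlation behind the
    print(U, V, reynolds_corr)                       # unclosed Reynolds-stress term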


If you are going to describe the fluid motion as probabilities, are you going to assume those probabilities to be independent or not?

An independence assumption really helps with computation. However, considering the `eddies' in turbulence, I'd suppose there is large and complex coupling between the probabilities of flow even for positions that are separated.

You might deal with the coupling, but at a first guess that feels like it requires tracking many branching paths, which would have exponential memory requirements.


Also FTA :

>>> Nonunique Leray solutions would mean that, according to the rules of Navier-Stokes, the exact same fluid from the exact same starting conditions could end up in two distinct physical states, which makes no physical sense and implies that the equations aren’t really describing what they’re supposed to describe.

This basically says that from one given set of starting conditions one should expect one (and only one) outcome. Doesn't this conflate a model with the actual physical reality?


I'm not sure what you're getting at. Classically we expect that given the same initial conditions, the outcome of a fluids experiment should always be the same. If an equation meant to describe fluid flows doesn't have this property, it is probably a bad model.


Not really. We know that the idea of a "fluid" has to break down eventually (at least due to finite particle sizes, possibly due to discretisations of space or whatever). An equation which describes "fluids" at sufficiently low resolution and breaks down once a resolution is reached where the idea of a "fluid" breaks down as well is then perfectly sufficient and, combined with the idea of a fluid, gives us a good model of reality.

Contrary to what the article claims here:

> the exact same fluid from the exact same starting conditions could end up in two distinct physical states, which makes no physical sense

it makes quite a lot of "physical sense" to get two possible outcomes out of one initial state, though we would not expect the Navier-Stokes equations to describe that situation.

The article is simply wrong in arguing that because we expect classical fluids to behave classically and because Navier-Stokes may break down in certain limits, NS may be a bad model. I actually struggle to put together a coherent sentence which comes close to what the article tries to say regarding the relation between "physical sense" and our expectations for the results of Navier-Stokes.


Yeah, I wondered about this too, since in the typical NS BVP the velocity vector is assumed to be confined to the ___domain of real numbers[0].

[0] https://warwick.ac.uk/fac/sci/statistics/staff/academic-rese...


> The second is fantastical but mathematically permissible: The water starts still, erupts in the middle of the night, then returns to stillness.

Big bang in the middle of the night.

Some swirling around until all matter and energy is equally distributed. What a journey!

Then returns to stillness.


> Also permissible, and also vanishingly unlikely, under quantum theory.

Navier-Stokes has nothing to do with Quantum Mechanics.


You missed the point of their comment.

The responder's point was that Quantum Mechanics is a different framework for modeling physical phenomena, one which takes a probabilistic approach, and so if fluids were to be modeled in a similar framework, you could work more naturally with these "vanishingly unlikely" events.


Quantum Mechanics is entirely deterministic and linear.

(It only becomes non-deterministic when you muck around with collapse of the wave function.)


Yes, well, the Copenhagen interpretation is the most widely accepted interpretation of quantum mechanics, so it's not crazy incorrect to equate the two.


Navier-Stokes is only incidentally related to real life. It's pure math.


Right, smooth vector fields aren’t actually physically realizable.


Navier Stokes is probably Turing Complete.


Probably, yes. But the gymnastics required to make it Turing complete are unlikely to be in the regime that describes real fluids.


In the sixties some people experimented with fluidic logic gates.


Those probably relied on friction?


No one said it does. But given that statistics based on complex probabilities are effective in one ___domain, maybe they're worth investigating in another.


No need for quantum mechanics, it's permissible under statistical mechanics. It is incredibly unlikely to happen though (and any way of making it happen necessarily takes a lot of effort).


Totally unfamiliar with the math and physics, but I recall the name from one of the earlier parts of Cryptonomicon.


Navier and Stokes are the scientists who independently formulated the equations that bear their names and describe the motion of most fluids. In Cryptonomicon the main character is given a simple question about a boat on a river during his admission to the army, deploys heavy-duty fluid dynamics to give a non-obvious answer, and consequently gets classified as a moron and relegated to menial duties (which suit him just fine, as I recall).


Yeah it was a pretty funny scene - it's supposed to be this simple math question, but the ... "Asperger"... he might be called these days - character, Lawrence Waterhouse, really digs into the problem. He ends up discovering something that he submits to a math journal for publication, but the army folks think he's an idiot.


The author explaining the paper: https://www.youtube.com/watch?v=F71SRP3MZcw


This makes sense to me. If you run a simulation at too low a resolution you run into ambiguity. By refining the grid you can resolve that ambiguity. Turbulent flow is chaotic, so this makes perfect sense to me. What this may lead to is a method of determining criteria for adaptive grid refinement. But I thought that already existed, so maybe just an improvement over what's out there.


This doesn’t have to do with numerical error introduced by discretization. This has to do with uniqueness of solutions.


That is fascinating. Is there any sort of immediate real-world impact (like to weather forecasting)?


Immediate? Certainly not. Weather models do have incomplete information, but the equations are approximated by Taylor series (1st to 3rd order, depending on the model, last time I checked).

It's possible that this can explain some of the differences between models or ensemble runs... but you have to realize that most of the error comes from incomplete data in the initial and boundary conditions. Looking for weather model effects is like looking for relativistic effects in automobiles.


Almost. Most models use some variation of Runge-Kutta approximation, usually to third order.

Edit: typos


Convergence, or at least consistency and order, of Runge-Kutta methods is shown via Taylor series, so I don't see the problem or why the GP is only "Almost" correct.
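
A quick numerical check of that on the scalar test problem y' = -y with classical RK4 (just an illustrative sketch):

    import math

    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h,   y + h   * k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    f = lambda t, y: -y                 # exact solution: y(t) = exp(-t)
    for n in (10, 20, 40, 80):
        h, y = 1.0 / n, 1.0
        for i in range(n):
            y = rk4_step(f, i*h, y, h)
        print(n, abs(y - math.exp(-1.0)))   # error falls ~16x per halving of h,
                                            # i.e. 4th order, as the Taylor analysis predicts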


The Runge-Kutta methods are not a Taylor series though. So "almost" is apt.


Geez guys, I chose a term that I thought most people would be more likely to understand. I realize that some of us have worked a lot more on numerical methods, but other hackers never made it past Calculus 2.


I wouldn't have corrected you. I think your comment was fine.

> I realize that some of us have worked a lot more on numerical methods, but other hackers never made it past Calculus 2.

While that didn't occur to me, I do appreciate trying to keep things accessible.


Did you take offense from my comment? All I wanted is to set people on the right track if they want to find more information about how NWP is implemented.


In addition to the other fine replies, weather forecasting's inaccuracies are dominated by the lack of information, then by lack of processing power. Lack of closed-form solutions to NS or better solutions rates quite a ways down the list of issues it has, or put another way, even if we had a magic box that completely accurately solved NS for weather forecasting, it would not get that much more accurate. (My suspicion is that it would literally be measured in "minutes" more accurate, rather than the "days" you'd like, but I concede I can't prove that... but bear in mind that it may well be the case that it would be milliseconds more accurate or something, not just that I could be wrong about it being "days", as the errors in the initial data compound over the course of the simulation no matter what math you throw at the problem.)


It seems popular to believe that for fluids in general forecast accuracy is dominated by errors in the initial conditions (ICs), but my own look at the problem suggests that's not so clear. I recall skimming a book on forecasting by a weather forecaster and he addressed this misconception. It appears that there are multiple sources of error, from errors in the ICs to numerical integration to the fact that the models they use are approximate (i.e., they don't solve NS; they solve a filtered version of NS with a turbulence model and additional models for other physics like chemistry), etc. My impression is that the dominant two are model inadequacy (the models are approximate) and compounded errors due to IC errors and non-linearity, but which is larger likely depends on the problem, and I am not particularly confident about this in general as I don't have hard data. (Certain types of turbulence models get more accurate as the resolution/computational cost increases, but I can't speak for other models. This fits with what you said about lack of computational power.)

The right way to do this is through uncertainty quantification techniques, and I don't know a lot about those at the moment. Until then, all I can say is that there are multiple sources of error.


Fluid dynamicist here. At the risk of looking like a heretic, I am willing to say that I don't think the NS Millennium Prize problem and related things will end up being very useful practically.

I can't see how a definitive answer to the question will result in better turbulence modeling, which is what matters from a practical point of view. If it turns out that the solutions are not unique then we could probably find an additional condition to add (e.g., the entropy condition) to make the solutions unique. If the solutions are unique, bounded, etc. then that's great and it would have no impact practically speaking aside from perhaps helping the reputation NS has for accuracy. Some people seem to think that solving the NS Millennium Prize problem would likely lead to a solution for the turbulence problem, but as I said, I can't see how. I'd be interested if anyone could explain this belief better.

There may be other benefits. I've found papers that find bounds on different fluid dynamics quantities to be interesting, and the motivation for these studies is the NS problem from what I understand. Unfortunately the results from these papers tend to be less useful than bounds I can derive specifically for applications myself.

(In a nutshell, the turbulence problem is that NS has far too high a computational cost/complexity to be used in practical simulations. So cheaper approximations to NS are used, which you can classify as "turbulence models". How steep the drop-off in accuracy is as you reduce complexity is an open question. My opinion is that fluids probably require high computational cost for accuracy a priori. Things like correlations from experiments can get around this as you are using pre-computed results, and that may be what we should go for in my philosophy.)
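
To put a number on "far too high a computational cost": the standard estimate is that a DNS needs on the order of Re^(9/4) grid points in 3D (and roughly Re^3 total work once time stepping is included). A sketch with illustrative Reynolds numbers:

    # DNS grid-point estimate N ~ Re^(9/4); the Reynolds numbers are illustrative only.
    for name, re in [("lab water channel", 1e4),
                     ("submarine hull",    1e8),
                     ("atmospheric flow",  1e9)]:
        print(f"{name:18s} Re = {re:.0e}  ->  ~{re**2.25:.1e} grid points")
    # 1e8 already implies ~1e18 points, hopeless for routine engineering use.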


Financial forecasting and risk-elimination in a system like a blockchain would be my guess — and in that respect it’s probably the most important problem out there IMO.


What??


I think we have just been exposed to somebody's Markov Chain Natural Language Generation experiment.


Ha... I guess I can be a bit terse. Just think of the “turbulence” in Navier-Stokes as valuation fluctuations. Without the real-world dampening effects of regulation, slow transactions, and managed markets, volatility along the lines of the infinite incongruities posed in the article is possible. (Note: I'm not referring to a pedantic ‘infinite’ wrt a blockchain.)


Ah OK I get it now.

I’m an applied mathematician and a macroeconomist that studied turbulence in financial markets and crashes thereof. I can assure you that the dynamics are pretty distinct. In economics wealth is not a conserved quantity whereas in physics energy and momentum are.


I think bitcoin’s fixed limit and deterministic transactions make it a uniquely closed & conserved system (regardless of deflation and lost wallets). The discrepancies in valuation among exchanges & localities seem to show a relativistic (to borrow a physics term) quality. [I’m however not really into bitcoin or an economist though]


> bitcoin's fixed limit

That's what makes it unsuitable for being a currency.

> a uniquely closed & conserved system

Nope. Bitcoin may or may not be (lost wallets, as you point out, is one way in which it is not). But the ‘system’ is the economy, because money is moving in and out of bitcoin because it can be exchanged for other assets (goods, services, or other currencies when doing conversions).

So no. I'm not trying to be condescending, I'm just trying to nip this apparently valid but flawed analogy in the bud. The only commonality is the word ‘turbulence’ which is being used as a label for two entirely different phenomena that have some similitude and points of contact but are largely distinct and unrelatable.


I wonder what role the material's EOS plays in constraining these solutions. Note I only had time to skim the article.


Is there any online course I can take that would teach me Navier-Stokes?


I would recommend this textbook: Transport Phenomena, by Bird, Stewart and Lightfoot. The first part is related to the motion of fluids/transfer of momentum (the one you are interested in). The second part is the transfer of energy and the third part is related to the transfer of mass.


Does anyone have a link to this particular paper?


The link was given in the article: https://arxiv.org/abs/1709.10033


Thanks, I missed that somehow.


Luce Irigaray knew it already!


So will they get the million?


No. They've shown non-uniqueness of solutions of a strictly weaker problem. They have yet to say anything about the actual Navier-Stokes problem.



It's disappointing to see that their reaction to non-unique results is "must be broken", rather than "we've rediscovered the quantum physics uncertainty principle in fluid mechanics".


The Uncertainty Principle is a very well established result which is quite generalized mathematically beyond Heisenberg's result. This has nothing to do with that principle.


If you weaken the vector density until a point that you can generate multiple possible outputs from a given weakened input, then what level of weakening guarantees no unique outputs from the equations?

These equations were designed for a system in which every 'atom' (vector) has a perfectly knowable 'spin', and they begin to produce unexpected results as the uncertainty is dialed up through weakening.

It's just a shame that the considerations stop at "therefore we broke the equations" as opposed to "gee, that looks familiar". What's the Navier-Stokes equivalent of the diffraction experiment? What does the interference pattern of two vector fields even look like? Why aren't they trying to study the interference patterns of the one input, two outputs scenario?

I get that this is all "obviously pointless" to others, but no one I've asked can actually explain why these comparisons are unacceptable. You completely dismiss it without any explanation other than "science is well-established", as if somehow that's meaningful.

So, yeah, disappointment.


You seem to think that there is some kind of analogy between a weakened version of the Navier-Stokes equations having multiple solutions and the double-slit experiment generating an interference pattern. I don't think there is such an analogy.

The equivalent of a spreading wave in quantum physics is the vector field describing the flow according to the Navier-Stokes equations. The equivalent of an interference pattern is the pattern of vortices in the fluid. When you do a double-slit experiment, you'll always see the same interference pattern, and it can be predicted exactly. There is only one solution to the equations. Having two different vector fields satisfy the Navier-Stokes equations with the same boundary conditions would be like seeing different interference patterns for no reason at all.

There is no place for the Uncertainty Principle in this result, because that is a statement about the standard deviations of two complementary quantities, and there are no such quantities involved here.


Not the GP, but you have to take into account that these are studies done by mathematicians, not physicists, so there are two points to consider.

First, weakening is not related to uncertainty, at least not in the normal sense that I think you refer to. It is not related to the physical solution itself but to our own rules of what we consider a solution. If instead of vector fields we were working with animals, the strong solution would be "we need this animal to be a duck" and the weakened one is "we need this animal to quack when we poke it with a stick". So it is not similar to the quantum uncertainty principle nor anything like it.

Second, mathematicians are interested only in these specific equations. Breaking the equations means that they do not model correctly the real world, and finding those new equations is the job of physicists. Maybe there are other conditions on the solutions, or maybe the relaxation they did allows for non-physical solutions. In fact, they do not prove that those solutions satisfy the energy inequality, so it might be possible that all but one of those non-unique solutions are only possible if you allow fluids to magically gain energy out of nowhere (which obviously conflicts with thermodynamic laws).


> Breaking the equations means that they do not model correctly the real world

This is wrong. We already know that these equations do not model correctly the real world. No model correctly models the real world and every model breaks down eventually at one point or another. Showing that a certain set of equations leads to non-unique results under certain conditions means nothing, unless you also show that those ‘certain conditions’ are true in cases where we previously assumed the model to hold. If you only show that in cases where we previously also assumed the equations not to hold, they actually output nonsense, this may be a mathematically curious and nice result but of no physical relevance.

So far, this result looks more like realising that Newton’s gravitational field diverges for a point particle of nonzero mass, which is not really any indication whatsoever that it doesn’t correctly model the real world.


Reminds me of the Banach-Tarski paradox, where non-smoothness breaks conservation.


Can you elaborate on what you mean here?

Isn't B-T just a consequence of accepting the axiom of choice and performing some pathological decompositions of a ball? What do you mean by:

>non-smoothness breaks conservation


Normally translations and rotations conserve volume. But not when you have these extremely non-smooth pieces.


Could you possibly dive into this? The electron slit experiment shows that superposition exists in a fundamental way in our universe. You can even measure quantum effects with objects as large as buckyballs. Isn't it sort of obvious that a precise mathematical description of a macroscale, emergent system based on objects with quantum and probabilistic effects is always going to be an inaccurate abstraction?


The Navier-Stokes equations are for a smooth fluid and don't capture quantum behaviour or even classical/macroscopic Brownian motion. This is known. It's also entirely beside the point.


I'm not going to pretend I understand anything about fluid dynamics.

It brings to mind how people say "Bumblebees defy the laws of physics to fly".


Isn’t Navier-Stokes smoothness just a reframing of the three-body problem? Karl Sundman solved that one in 1909, and n-body was generalized in 1990 by Qiudong Wang.

Edit- to answer my own question: http://www.scholarpedia.org/article/N-body_simulations_(grav...


I'm sorry if it seems I am persecuting you and putting all your ideas down, but I really don't see how this can be true either.

The 3- or N-body problem is about point-particles interacting gravitationally at nonzero distance according to an inverse square law. Navier-Stokes is, at root and in the limit, about elastic collisions between infinitesimal corpuscles that transfer momentum between each other.

Again, I can see the analogy “lots of things interacting”, but they have quite little in common beyond that.


Just the inverse n-vectors applied (regardless of space and field strength or possibly just instantaneous field strength). Always good to be called out tho.


I don't understand what the first sentence of your reply is supposed to mean, but the second sentence is a very mature response and belies your wisdom: yes, science and rational thinking is all about putting ideas out there and rejoicing when somebody helps you etch away at those that are not compatible with reality, so that only plausible ones remain.


Well what I meant was shrinking the n body system to a point then extending that to a field. But it’s beyond me how the math works to invert those same force vectors to an impulse.


Fluid dynamics and gravitation are not about the things that are interacting (of which there can be many in both cases) but in the forces that arise between these things, and the forces are profoundly different in both cases.


Apologies if I’m cargo-culting math... just positing.






