I have felt similar, wanting to learn physics sims. I have had to learn over time to separate domains of knowledge like physics from the programming tools used to realize them. Especially starting from gamedev, it initially felt like there should be a natural, idiomatic programming way to set up each of the major simulations (rigid bodies, cloth, springs, fluids), so my first thought was that these sims would be naturally encoded in the language, e.g. a fluid sim being somehow a matter of setting up a grid and choosing some update rules per timestep. But really it is modelling the problem in actual mathematics and physics, then mapping that to a language/toolset which perhaps can't express it idiomatically.
There are a few algorithms, like some cloth sims based on particle positions and springs, which can be coded easily, but that was misleading to me when trying to improve: I had to dig a lot more into physics and numerical analysis before mapping the problem to code, which can end up clunky and full of magic numbers.
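To make that concrete, below is roughly the kind of mass-spring sim I mean. This is just a minimal sketch (a grid of unit-mass particles with structural springs, semi-implicit Euler, hand-tuned constants, i.e. exactly those magic numbers), not any particular engine's method:

    import numpy as np

    N = 10                    # N x N grid of unit-mass particles
    dt = 0.01                 # timestep (magic number)
    k = 500.0                 # spring stiffness (magic number)
    damping = 0.98            # velocity damping per step (magic number)
    rest = 0.1                # spring rest length
    gravity = np.array([0.0, -9.81, 0.0])

    pos = np.array([[j * rest, 0.0, i * rest]
                    for i in range(N) for j in range(N)], dtype=float)
    vel = np.zeros_like(pos)

    # Structural springs between horizontal and vertical neighbours.
    springs = [(i * N + j, i * N + j + 1) for i in range(N) for j in range(N - 1)]
    springs += [(i * N + j, (i + 1) * N + j) for i in range(N - 1) for j in range(N)]

    def step():
        force = np.tile(gravity, (N * N, 1))
        for a, b in springs:
            d = pos[b] - pos[a]
            length = np.linalg.norm(d)
            f = k * (length - rest) * (d / length)  # Hooke's law
            force[a] += f
            force[b] -= f
        vel[:] = (vel + dt * force) * damping  # semi-implicit Euler
        vel[0] = vel[N - 1] = 0.0              # pin two corner particles
        pos[:] += dt * vel

    for _ in range(100):
        step()

The physics lives in the update rule, not the language; everything past this point (better integrators, strain limiting, bending springs, stability) is numerical analysis rather than programming.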
As latexr said, you can disable watch history. This means there is no home page (not such a loss: even when it should have known my interests, my home page was awful). You don't get watch-progress memory on videos, which is simple to adjust to. Recommendations are less targeted and I get a lot of the typical ragey YouTube stuff, but it is mostly half relevant. I no longer watch any Shorts, which I had kept clicking just to "see how bad they are" until it became a habit, and the UI keeps pushing them... If YouTube makes the no-watch-history method stop working, I'm just deleting the app and waiting until I'm bothered enough to configure ReVanced.
I love this book so much. The literate programming style is, I think, inspired by Knuth's CWEB; the writing is great, and it's a beautiful, high-quality physical book worth buying, while the knowledge is also freely accessible. The literate-programming style means you are almost immediately applying theory to a practical system. I keep having to take breaks to learn outside math/physics, though I think it is self-contained in principle.
Why does optimal substructure work for code size? Couldn't you take a compiled non-minimal function and break each instruction out into its own function call (with some assumptions about the ISA, maybe), then, assuming each single-instruction function is minimal by itself, infer that the whole is minimal, giving a contradiction?
Ok, I see a nice discussion going on here. I may have disproved myself and saved us some effort. First, just to be sure what we're talking about: if you split the function in two and compile the two parts to their minimum sizes, combining them gives you the global minimum. So we're not starting from assembly programs.
And if you split the loop and compile parts of it, you will most certainly not get this single instruction. So, I'm sorry! It seems I have proved myself wrong. I'll try to find some time to revisit this. I hope this helps!
The definition of optimal substructure in the article isn’t quite right.
A problem can have optimal substructure even if merging two optimal solutions to two sub-problems that compose to form the complete problem does not provide an optimal solution to the complete problem. This happens when there is more than one way of splitting the complete problem into two sub-problems.
For example, the shortest path problem on a DAG has optimal substructure, but if you pick two subproblems by splitting the path at an intermediate node that does not lie on the shortest path, then you will not find an optimal solution by combining these.
When this happens, one needs to optimise over all the possible ways of splitting into sub-problems. This is the basis of dynamic programming.
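To make this concrete, here's a toy sketch for the DAG case (the graph and names are mine, purely for illustration). Each node's optimal value is a minimum over all incoming edges, i.e. over all ways of splitting off the last step:

    from functools import lru_cache

    # Toy DAG: edges[u] = list of (v, weight).
    edges = {
        "s": [("a", 1), ("b", 5)],
        "a": [("t", 1)],
        "b": [("t", 1)],
        "t": [],
    }

    @lru_cache(maxsize=None)
    def dist(v):
        # Shortest distance from "s" to v. Optimal substructure: the best
        # path to v ends with *some* edge (u, v), and its prefix must be a
        # best path to u. We don't know which u, so we minimise over all of
        # them -- over all ways of splitting -- which is dynamic programming.
        if v == "s":
            return 0
        return min(dist(u) + w
                   for u, nbrs in edges.items()
                   for vv, w in nbrs if vv == v)

    print(dist("t"))  # 2, via s -> a -> t

Here, splitting at the intermediate node b (which is not on the shortest path) and combining the two optimal halves gives 5 + 1 = 6, not the optimum; only minimising over all splits, i.e. over all predecessors of t, recovers 2.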
Thanks, it looks like I had forgotten how dynamic programming works, i.e. constructing the solution from possibly all subproblem solutions, rather than from just one particular way of breaking the problem into a disjoint union. In this case I guess a "subproblem" needs to be defined for code optimization; I'm not sure just breaking the code into chunks works as I was imagining. Maybe optimal substructure applies to some graph-like model of computation, but not under the naive assumption I made about how this would work.
Sure, but the article doesn't define optimal substructure, nor does it give a general statement about it. It just states a specific implication that _would be_ true if code size had optimal substructure in the way I thought (though based on my comment above, it seems I proved myself wrong).
> Now, let's say you split the function in two parts and you find their minimums independently. Optimal substructure tells us that if you just merge these two, you'll get the minimum version of the whole function.
Optimal substructure doesn't tell us this. I don't know whether or not this problem has optimal substructure, but even if it did, it still wouldn't tell us this.
Optimal substructure means it is possible to construct an optimal solution from optimal solutions to its subproblems; it doesn't mean that optimal solutions to an arbitrary pair of subproblems (that compose to form the complete problem) can be used to form an optimal solution, which is what I understood the quoted section of the article to be saying.
Yes, I was surprised at that sentence, because I think (taking the theoretical properties of code size to be the same as those of instruction count) that the main reason compiler optimization is non-trivial in the first place is that these kinds of global optimizations are possible, like your loop example.
Also, I am really enjoying your article, still reading it with Wikipedia open on the side haha.
Nice to hear that! I hope nothing else is questionable. And thanks for pointing it out! I guess the moral of the story is don't do "proofs" in your head after 3am.
With values falling into just the right registers magically?
> each single-instruction function
So the RET is also implicit?
Well, in this case taking a non-minimal function and replacing its body with calls to single-instruction functions would indeed reduce the code size (or leave it the same), because some instructions are probably repeated, and that duplication would now be removed. What's the contradiction?
In the article he says "Now, let's say you split the function in two parts and you find their minimums independently", so I am trying to work out what that means. I was thinking of something like splitting "func() { ins1; ins2; }" into "ins1func() { ins1; } ins2func() { ins2; } func() { ins1func(); ins2func(); }", but I agree this is an unwieldy/unrealistic example: all the register context would need to be passed in, stack memory addressing would change, etc., and this changes instruction counts non-trivially.
So I guess I don't need to think too much about the function-ness of splitting up a code block; I suppose this example contradiction would also apply, and more cleanly, if we just think of the compiler considering blocks of code individually, e.g. in a greedy algorithm.
(Actually, reading the intro again, I think I overcomplicated this and he does just mean splitting into code blocks. My mistake.)
I just realised that in the above comment I was making a mistake in the analogy to shortest-path optimal substructure. In shortest path, starting from an optimal solution, the "subproblems" have optimal solutions. This is not true starting from a non-optimal path, although its length-1 subpaths are trivially optimal. I'm still not sure of a particular way to phrase optimal substructure for code size. Sorry for the confusion!
Nice! I want to do something similar and map out my understanding of Linux. I find some diagrams on Wikipedia fascinating (example: https://en.m.wikipedia.org/wiki/File:Linux_Graphics_Stack_20..., though that one is more about the user library ecosystem than the kernel and program runtime). These diagrams make me want to learn about each part and be able to comprehend, in principle, what is happening on my machine. Eventually...
Any diagram by ScotXW on Wikipedia is somewhere between misleading and completely wrong, and they're a constant pain for the Linux graphics community.
If you're curious about the details in this case, ScotXW confuses EGL and OpenGL, the arrows aren't quite right, and the labels aren't quite right either (DRM is labeled "hardware-specific" but KMS isn't? The label for "hardware specific Userspace interface to hardware specific direct rendering manager" is quite weird), and some important boxes are flat out missing. It's nitpicking for sure, but when the diagram goes out of its way to add extremely weird details, it demands nitpicking.
Nobody in the Linux graphics community would draw the diagram like this.
I remember when Wikipedia first became popular, there were a lot of warnings about how you couldn't trust the information because "anyone could edit it". I feel like, at least at my level of understanding of whatever I'm reading about, it has been sufficient, and I've never identified anything wrong or inaccurate (except perhaps around recent news or actively debated political topics). This is the first time I've actually seen that downside of Wikipedia; I use it for understanding things like this and never would have known that the diagram I was learning from was wrong. Thanks for commenting this, it's good to know.
The specific article is: https://en.wikipedia.org/wiki/Free_and_open-source_graphics_.... I can understand the "multiple issues" banner here; this is super-technical in a way that's unlike what I usually see on Wikipedia (it does get very technical on specific isolated things, e.g. in math, but non-theory tech pages are usually a summary). Still, I found it motivating for digging into Linux. I wouldn't be surprised if it were removed.
I love Wikipedia and procrastinate by reading the articles for everything I am interested in, so I have found quite a lot of factually incorrect information, or statements of fact which are really philosophical opinions. However, these problems usually coexist with a certain change in writing style (loss of formatting, grammatical errors, random capitalizations, change of tone, etc.). I haven't found many problems with content written in the usual "Wikipedia" style, so I assume the hardcore editors who enforce that style also care a lot about factual accuracy. However, I don't read enough outside Wikipedia to know whether everything is correct...
(Actually, now that I think about it, maybe I am just more likely to agree with things written in the Wikipedia style. But I think the style errors and factual errors are at least somewhat correlated.)
At a second-hand bookstore I bought a biography of Simone Weil, and I have since overheard some conversations about her work; it seems she is very popular. I said I was interested in the biography because of her brother André, the famous number theorist, and they didn't know she had a brother. Different worlds combined :)
I would say that in France, Simone Weil really is better known than André Weil! And I suggest anyone read La Condition Ouvrière (I don't know the title in English), which is not only very instructive and moving, but also especially beautiful in my opinion.
"The Weil Conjectures" by Karen Olsson is an interesting read that connects Andre and Simone and the authors own experiences as a math major. I thought it did a good job capture the appeal of math, but it's definitely not about the conjectures or their proof.
As I understand it, the Bourbaki school was basically obsessed with making maths rigorous, to the point where their published work is extremely difficult to understand and you basically already have to know the subject to read it. Other people, Arnold in particular, felt that when learning it is important to gain a more intuitive understanding, even if that means sacrificing some rigour along the way, and that once you get there you can go back and fill in the blanks.
That's why that wiki page says:

> Arnold was an outspoken critic of the trend towards high levels of abstraction in mathematics during the middle of the last century. He had very strong opinions on how this approach—which was most popularly implemented by the Bourbaki school in France—initially had a negative impact on French mathematical education, and then later on that of other countries as well.
So a pastiche of the Bourbaki-style criticism of Arnold's works would be that they are elegant but skip over some of the foundational steps necessary to be fully rigorous; the equivalent critique of Bourbaki is that their material is dry, tedious, and impossible to understand, which puts people off the subject entirely rather than teaching them.
Arnold is very influential, especially in "Russian math" pedagogy/philosophy, which can be contrasted a bit with "French math", where Bourbaki was very influential. I think anyone interested enough to read the Bourbaki article might be interested in reading about him, since his Wikipedia page has a few references pointing to interesting discussions of Bourbaki's influence. I should have mentioned a few of these details in the original comment, sorry!
Also, not to mention that he wrote many great textbooks on top of his research work. I would recommend that anyone in an intro or second, mostly methods-based DE course read his Ordinary Differential Equations. He doesn't entirely avoid useful methods (as I recall), but the approach was extremely different from anything I had seen before, and so natural (pointing at geometry and topology, immediately discussing vector fields, and introducing a nice notation for flows in the opening chapter).
I love tig. I have an ad-hoc fork of it which adds a new window displaying a preview of the file currently selected in the diff prelude section of the diff view, so I don't have to navigate around within a single window. It makes browsing the files in a diff very easy. (https://github.com/LucasPayne/tig) I swear I will clean up the fork soon; I just implemented it in whatever way worked.
I do something similar to what you are doing with Kakoune daemon mode, but I work in the shell within a Vim context. I have bound "e" in the diff/stage views of tig to send a message to the current Vim instance to open a new tab for that file at that line (roughly as in the sketch below).
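For anyone curious, the binding looks roughly like the following. This is a sketch from memory rather than my exact config: it assumes Vim was started as a server (e.g. vim --servername EDIT) and uses tig's %(file)/%(lineno) state variables; the server name and exact details may differ in your setup.

    # ~/.tigrc (sketch): open the selected file/line in a running Vim server.
    bind diff  e !vim --servername EDIT --remote-tab-silent +%(lineno) %(file)
    bind stage e !vim --servername EDIT --remote-tab-silent +%(lineno) %(file)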
I have not tried magit yet, but I am going to now, to at least extract some great ideas from it and somehow fit them into my current workflow.