This is really cool! I'd be interested in seeing what it looks like to fully drag the image 180 degrees (but can't because the camera resets as soon as the mouseup event fires). Would you accept a small PR to the demo page which does this via a control button instead of on mouseup?
Thanks! Actually I just pushed a fix to make the drag work outside the canvas. No idea why it wasn't that way to begin with.
The really cool thing someone suggested back in the day was to have it try to match two different images from two different angles. But I never quite found the time to tackle that ;D
Thanks! The answer is: sort of, but not really. The maker page[1] has an "export data" button that writes out the vertex data (as a big series of RGB/XYZ values), which you could easily convert to whatever format.
But if you rendered the data somewhere else, you wouldn't get the right effect unless you replicated the perspective/FOV that I used (which are completely ad-hoc because I had no idea what I was doing).
So currently there's no super-easy way to take the data somewhere else and do something useful with it.
(search "low poly" on /r/woodworking for a lot more)
DMesh is probably the most popular tool in this space that I've seen, but it's a very manual process (you select vertices by hand and it fills the space with the average color).
This is great, especially the SF example as it reminds me of Brian Lotti's paintings [1]
A common technique for painters is to establish areas of colour in the scene that are similar even though the eye perceives them differently. For example the shade beneath a tree - while complex in reality - can be easily represented with a single stroke of darker colour.
Zooming in to the SF example [2] you begin to see how this scene could be painted, which makes me think this tool could be a useful guide in painting tutorials.
This looks cool. It reminds me a lot of Primitive Pictures[0], coincidentally also written in Go, which supports other shapes besides triangles as well.
Others have already commented on the utility / impressiveness of the tool itself, but on a completely different front: you also put effort into making a really nice UI. I clicked, fully expecting to fire up a terminal instance with a bunch of args or what have you. Instead I got a polished app that's a pleasure to use. I could realistically share this with my friends and family, which is not very common for a tool like this.
This is very, very cool. It would also be amazing as a Photoshop filter.
But separately -- I've often wondered if the pixel grid is really the best way of storing compressed image data, and whether something based on triangles could be viable.
Just like progressive JPEG can render broad areas of color followed by filling in details, what if you used progressive layers of semitransparent colored triangle meshes to do the same? Or at least to form a base layer?
HEVC image encoding is already a big improvement over JPEG in that fixed square blocks are replaced by variable-sized blocks. But what if we got rid of rectangular blocks and replaced them with flexible triangles?
It might be computationally prohibitive for encoding, and also you'd need to find a really clever way of representing triangles and colors in a minimal number of bits. But curious if anyone here knows whether something like that's ever been attempted.
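For scale, here's a completely made-up back-of-envelope sketch of one possible representation (Go, every number invented): quantized points shared between triangles, plus per-triangle indices and a 16-bit color.

    package main

    import "fmt"

    // Back-of-envelope only: quantize points to a 4096x4096 grid
    // (12+12 bits, bit-packed to 3 bytes), share them between
    // triangles, and give each triangle three 16-bit point indices
    // plus an RGB565 color.
    type tri struct {
        a, b, c uint16 // indices into a shared point list
        color   uint16 // RGB565
    }

    func main() {
        points, tris := 10000, 20000
        bytes := points*3 + tris*8 // before any entropy coding
        fmt.Printf("~%d KB\n", bytes/1024)
    }

That lands around 185 KB for 20,000 triangles before entropy coding, so the bit budget at least doesn't seem hopeless.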
I remember seeing an experimental compression algorithm that made everything look like a painting, but I'm not sure how to find it again. It seems likely there have been other attempts at compression through vectorization.
Instead of outputting an image, can you output the points and triangles, e.g. as an SVG?
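Even a quick conversion script would do it; a rough Go sketch, assuming you already have the points, triangle index triples, and per-triangle colors (all names here are illustrative, not Triangula's actual API):

    package main

    import "fmt"

    type pt struct{ X, Y float64 }

    // Emit one <polygon> per triangle. tris holds index triples into
    // pts, colors holds a fill per triangle.
    func writeSVG(pts []pt, tris [][3]int, colors []string) {
        fmt.Println(`<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">`)
        for i, t := range tris {
            a, b, c := pts[t[0]], pts[t[1]], pts[t[2]]
            fmt.Printf("<polygon points=\"%g,%g %g,%g %g,%g\" fill=\"%s\"/>\n",
                a.X, a.Y, b.X, b.Y, c.X, c.Y, colors[i])
        }
        fmt.Println(`</svg>`)
    }

    func main() {
        writeSVG(
            []pt{{0, 0}, {100, 0}, {50, 100}},
            [][3]int{{0, 1, 2}},
            []string{"#c0ffee"},
        )
    }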
Could be fun to experiment with animation... especially with something like the astronaut, if you could mask the subject vertices from the background's vertices and then apply a random jostle animation to the background vertices, might be a fun trippy effect.
That's pretty sweet shading. Love how it captures the sunlight reflecting on the buildings in the SF example without just averaging out those colors and making it look bland.
To what extent does this reduce image file size? Could I use this as an alternative to dithering for lazily loading large images on a website, for example?
axe312ger/SQIP[1] does this effectively for low-quality image loading, relying on the aforementioned fogleman/primitive library.
I think OP's project would be great to add as a new entry to the SQIP demo site [2].
In the thumbnail demo, the LQIP-custom approach (a simple resize to a low-res JPG thumbnail plus JPG optimization) preserves the more salient features better and achieves compression on par or better than SQIP, with lower processing times. So in my opinion the simple extreme resize + jpegoptim is preferable for thumbnails.
Thumbnails are only a small part of the LQIP story though, and I can imagine RH12503/Triangula having much nicer results for larger images than fogleman/primitive.
OP should consider writing an axe312ger/sqip plugin.
That's an interesting idea! I suppose it could be used for compression, although my intentions were for this to be a generative art project.
I triangulated a 1988×1491 JPG using 10,000 points and managed to reduce the file to 20% of its original size, but the triangles could still obviously be seen.
Adding my voice to this. It would be an amazing tool for web work -- the vectorized images look far better than the very small jpeg images used in lazy loading today.
Similar functionality is already implemented in the Boxy SVG editor [1]. There is also another, more powerful "Primitivize" generator which has more options, e.g. you can choose whether the vectorized image should consist of triangles or rectangles.
It sure looks better aesthetically than esimov/triangle or fogleman/primitive! Goal achieved, I would say.
It would still be cool to see this compared to those in Low Quality Image Placeholder implementations, to find out whether the extra work on nicer aesthetics is preserved once the blur is applied.
Thanks! The algorithm works iteratively: each iteration it makes small changes and keeps them only if they improve the result. (There are many more details described on the wiki [1])
The frames of the glasses were preserved because the algorithm "decided" that it was optimal to keep them with the limited number of points it has to work with.
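To give a feel for the iterate-and-keep-improvements loop, here's a heavily simplified toy sketch (Go; the real algorithm evolves populations of point sets scored against an image, see the wiki):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Toy stand-in: optimize a float slice toward a target instead of
    // point sets toward an image. Same shape of loop, nothing more.
    func fitness(xs, target []float64) float64 {
        f := 0.0
        for i := range xs {
            d := xs[i] - target[i]
            f -= d * d // higher is better
        }
        return f
    }

    func mutate(xs []float64) []float64 {
        out := append([]float64(nil), xs...)
        out[rand.Intn(len(out))] += rand.NormFloat64() * 0.1 // nudge one value
        return out
    }

    func main() {
        target := []float64{0.2, 0.8, 0.5}
        best := []float64{rand.Float64(), rand.Float64(), rand.Float64()}
        bestFit := fitness(best, target)
        for i := 0; i < 10000; i++ {
            if cand := mutate(best); fitness(cand, target) > bestFit {
                best, bestFit = cand, fitness(cand, target)
            }
        }
        fmt.Println(best, bestFit)
    }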
This reminds me of a project I found by Antirez (the creator of Redis) recently called shapeme: https://github.com/antirez/shapeme - which uses simulated annealing: https://en.wikipedia.org/wiki/Simulated_annealing to approximate an image using triangles. It took me down a really interesting rabbit hole: "The name of the algorithm comes from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects."
I thought it was really interesting that a useful algorithm like this was created and possibly influenced by a natural process. I wonder if this repo uses the same type of algorithm?
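For reference, the core of simulated annealing is surprisingly small. A toy Go sketch minimizing a 1-D function (not shapeme's actual code; there the state would be a set of triangles and the energy the image error):

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    func main() {
        f := func(x float64) float64 { return (x - 3) * (x - 3) }
        x := rand.Float64() * 10
        for temp := 1.0; temp > 1e-4; temp *= 0.999 {
            cand := x + rand.NormFloat64()*0.5
            delta := f(cand) - f(x)
            // Always accept improvements; while the temperature is
            // high, also accept some regressions with probability
            // exp(-delta/temp) -- that's the "annealing" part.
            if delta < 0 || rand.Float64() < math.Exp(-delta/temp) {
                x = cand
            }
        }
        fmt.Println(x) // should land close to 3
    }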
Triangula uses a modified genetic algorithm which I wrote about on the wiki here [1]. The reason I didn't use simulated annealing is because I ran some tests and found it was generally less effective for my case.
Do you have a specific initialization scheme? The example gif goes very fast but it seems to find very fit individuals in the first generation which seems too good to be random.
Actually, the initialization is completely random. The reason why it might seem too optimal is because the algorithm chooses the colors most similar to the image when rendering the triangles.
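For illustration (a simplified assumption of what "most similar" means here, not the exact implementation): the fill that minimizes average squared error over a region is just the mean color of the original pixels it covers.

    package main

    import "fmt"

    // Simplified: average the original image's pixels that fall
    // inside one triangle to get that triangle's fill color.
    func meanColor(pixels [][3]float64) [3]float64 {
        var sum [3]float64
        for _, p := range pixels {
            for c := 0; c < 3; c++ {
                sum[c] += p[c]
            }
        }
        n := float64(len(pixels))
        return [3]float64{sum[0] / n, sum[1] / n, sum[2] / n}
    }

    func main() {
        px := [][3]float64{{255, 0, 0}, {250, 10, 5}, {245, 5, 0}}
        fmt.Println(meanColor(px)) // ~[250 5 1.67]
    }

So even a random point set immediately renders with sensible colors, which is why the first generation already looks plausible.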
Somewhat of a tangent: this project's GUI was built using Wails [1]. According to their docs, "The traditional method of providing web interfaces to Go programs is via a built-in web server." I'm not a Go developer so I'm hoping someone more familiar can comment on this. Are all Go GUIs basically Electron-lite apps?
Wails v1 (which this project uses) uses Webview for rendering its frontend. You can read more about how Wails works here:
https://wails.app/about/concepts/
Also because I'm pretty sure you're going to be the only one to see this post, do you have any feedback on the app?
I have been in search of a good GUI library for go that works well on both Linux and Windows.
What are your thoughts on Wails?
How is the learning curve for people not very familiar with Web technologies? On that subject, does it require any webdev tools to be installed (nodejs, frameworks, etc)?
Apart from the very cool visual effect, the app got my attention as an interesting example of lightweight GUI use with Go. Another rabbit hole for today. Thanks!
I'm a bit unsure of what you mean, but the genetic algorithm is used to find an optimal set of points, and then a Delaunay triangulation is created from those points.
Ah, I must've misread the wiki. I thought it meant you used a genetic algorithm for triangulation instead of point selection. What are your criteria for calculating the fitness of a candidate set of points?
I was just thinking about adding another wiki page for that, but I'll give a brief explanation here:
Firstly, a triangulation is made from the points and colors are chosen for each triangle.
Then, the variance between the triangles and the original image is calculated using Welford's online algorithm [1]. The variance is computed by iterating over the pixels of each triangle and comparing the pixel color of the original image to the color of that triangle.
Lastly, the fitness is multiplied by a weight to ensure that the triangulation covers the entire canvas.
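A naive version of the variance step might look like this (illustrative Go sketch, not the actual source; per-pixel errors streamed through Welford's update):

    package main

    import "fmt"

    // Welford's online algorithm: single-pass, numerically stable
    // mean/variance over a stream of values (here, per-pixel
    // differences between a triangle's fill and the original image).
    type welford struct {
        n    int
        mean float64
        m2   float64 // sum of squared deviations from the running mean
    }

    func (w *welford) add(x float64) {
        w.n++
        d := x - w.mean
        w.mean += d / float64(w.n)
        w.m2 += d * (x - w.mean)
    }

    func (w *welford) variance() float64 {
        if w.n < 2 {
            return 0
        }
        return w.m2 / float64(w.n)
    }

    func main() {
        var w welford
        // Pretend these are per-channel errors for one triangle's pixels.
        for _, diff := range []float64{3, -1, 0, 2, -2} {
            w.add(diff)
        }
        fmt.Println(w.variance())
    }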
The source code may be a bit confusing because I've applied many optimizations which make it 10-20x faster.
Could this technique be used to index images for search? E.g., intuitively, the triangles and their layout seem to preserve more unique information about the image than a grid would, kind of like Voronoi partitions, or an entropy-preserving sample.
This isn't my area of expertise at all, but intuitively it seems like if you treated the triangles as a graph, you could sample a minimal subgraph from an image and then search for other files that have a similar subgraph. It's conceptually like a "geometric hash."
Is that what this technique is for? It seems to have more applications than just an image filter.
Note that the subgraph isomorphism problem [1] is NP-complete (also for planar graphs). There are very efficient data structures like k-d trees [2] that can be used to search for points in Euclidean spaces, but doing something similar for graphs would be quite difficult.
Graph canonization [3] on the other hand is not known to be either polynomial time solvable (I think it is for planar graphs though) or NP-complete and has implementations that are quite efficient in practice. It can be used to search for the exact same graph, but again extending to searching for similar graphs is probably quite difficult.
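For concreteness, a nearest-neighbor query on a 2-D k-d tree is only a few dozen lines (toy Go sketch):

    package main

    import (
        "fmt"
        "math"
        "sort"
    )

    type point [2]float64

    type node struct {
        p           point
        left, right *node
    }

    // build constructs a k-d tree by median split on alternating axes.
    func build(pts []point, axis int) *node {
        if len(pts) == 0 {
            return nil
        }
        sort.Slice(pts, func(i, j int) bool { return pts[i][axis] < pts[j][axis] })
        m := len(pts) / 2
        return &node{
            p:     pts[m],
            left:  build(pts[:m], 1-axis),
            right: build(pts[m+1:], 1-axis),
        }
    }

    func dist2(a, b point) float64 {
        dx, dy := a[0]-b[0], a[1]-b[1]
        return dx*dx + dy*dy
    }

    // nearest returns the closest stored point to q.
    func nearest(n *node, q point, axis int, best point, bestD float64) (point, float64) {
        if n == nil {
            return best, bestD
        }
        if d := dist2(n.p, q); d < bestD {
            best, bestD = n.p, d
        }
        near, far := n.left, n.right
        if q[axis] > n.p[axis] {
            near, far = far, near
        }
        best, bestD = nearest(near, q, 1-axis, best, bestD)
        // Only descend the far side if the splitting plane is closer
        // than the best match found so far.
        if diff := q[axis] - n.p[axis]; diff*diff < bestD {
            best, bestD = nearest(far, q, 1-axis, best, bestD)
        }
        return best, bestD
    }

    func main() {
        root := build([]point{{1, 2}, {4, 1}, {3, 3}, {0, 5}}, 0)
        p, d := nearest(root, point{3.2, 2.8}, 0, point{}, math.Inf(1))
        fmt.Println(p, math.Sqrt(d)) // {3 3} 0.28...
    }

Nothing with comparable simplicity and guarantees exists for similarity search over graphs.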
I was thinking that the adjacency matrix describing the graph you extract from the image would produce a string you could chunk and do a fast search on, and then it's a standard similarity/distance string search in Redis, or a probabilistic filter.
The compute to index images would be relatively expensive, but a similarity search would be super fast, with the caveat that I don't know how image search is done today, and it probably should do something like this anyway.
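As a toy Go sketch of the serialization idea (with the big caveat that the bit string depends on vertex ordering, which is exactly the canonization problem mentioned above):

    package main

    import (
        "fmt"
        "strings"
    )

    // Flatten a small adjacency matrix into a bit string, then chunk
    // it into fixed-size shingles that could each become an index key.
    func main() {
        adj := [][]int{
            {0, 1, 1, 0},
            {1, 0, 0, 1},
            {1, 0, 0, 1},
            {0, 1, 1, 0},
        }
        var sb strings.Builder
        for _, row := range adj {
            for _, v := range row {
                fmt.Fprintf(&sb, "%d", v)
            }
        }
        s := sb.String()
        const k = 4 // shingle size
        for i := 0; i+k <= len(s); i += k {
            fmt.Println(s[i : i+k]) // index each chunk for similarity search
        }
    }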
Cool. I can see this being used to determine perception thresholds. E.g., by testing priming of progressively more detailed pictures, or by blurring the triangulated pictures (triangulate a picture a bit, but not too much, then blur it in another tool, and you can make out the main shapes better; sometimes even read the text).
I wonder if you can constrain the problem a bit more: say, limit the number of triangles that can be used; limit the color palette to a small set of colors. It would make for interesting results.
https://andyhall.github.io/glsl-projectron/viewer-vermeer.ht...
(open the link and then click/drag the image)