In case anyone wants to get more artsy with their traces and is also using KiCad, here's a more hands-on approach to try:
Lay out your parts in KiCad, but don't route any traces. Now plot the board, but instead of plotting Gerbers, plot out an SVG. Then you can pull that SVG into Inkscape. You'll get just the pads of all your components, all in the right places.
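If you'd rather script that plot step than click through the dialog, here's a rough sketch against KiCad's pcbnew Python API; the option setters move around a bit between KiCad versions, so treat it as a starting point:

    # Rough sketch: plot just the front copper (pads only, since nothing is routed yet) to SVG.
    import pcbnew

    board = pcbnew.LoadBoard("board.kicad_pcb")
    plotter = pcbnew.PLOT_CONTROLLER(board)
    options = plotter.GetPlotOptions()

    options.SetOutputDirectory("svg_out/")
    options.SetPlotFrameRef(False)   # no drawing-sheet border
    options.SetUseAuxOrigin(False)   # keep the page origin so the later 1:1 re-import lines up

    plotter.SetLayer(pcbnew.F_Cu)
    plotter.OpenPlotfile("pads_front", pcbnew.PLOT_FORMAT_SVG, "front copper pads")
    plotter.PlotLayer()
    plotter.ClosePlot()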
There you can draw out traces by hand, connecting the pads shown. You won't have DRC or netlist checking, so this works best if you really know what you're doing, but it can be quite enjoyable. I did this back in the day with a Wacom tablet and lots of smoothing on paths, and you can make layouts reminiscent of hand-taped boards.
When done, remove the pads, leave just your drawn traces, and save back as the same SVG, without changing anything size-wise.
Then in KiCad go to Import > Graphics and pull in the SVG, 1:1, on the copper layer, and it will be right back in the right spot, with your hand-drawn traces leading right into the originally positioned pads. Your drills from the pads will go through the Inkscape SVG just fine.
Note this workflow works with any layer and any vector graphics ideas you want.
Author here- That is a fun workflow! One of the things we've been prototyping is a web frontend powered by circuitpainter, where you can use a mouse or pen to draw freehand traces that can be rendered as 'real' KiCad traces so that DRC can work. However, since we also have the path information, it's possible to do things such as automatically place components along the paths, which would be extremely tedious to do by hand or with existing CAD tools.
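As a rough illustration of the placement idea (this is just the geometry, not circuitpainter's actual API), resampling a drawn polyline at even spacing gives you a position and rotation for each footprint:

    # Sketch: evenly spaced (x, y, angle) placements along a freehand polyline,
    # e.g. for dropping an LED footprint every few millimetres along a drawn trace.
    import math

    def place_along_path(points, spacing):
        """points: list of (x, y) vertices in mm; spacing: distance between placements in mm."""
        placements = []
        carried = 0.0  # distance already travelled since the last placement
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            seg_len = math.hypot(x1 - x0, y1 - y0)
            if seg_len == 0:
                continue
            angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
            d = spacing - carried
            while d <= seg_len:
                t = d / seg_len
                placements.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), angle))
                d += spacing
            carried = seg_len - (d - spacing)
        return placements

    # Example: a gentle zig-zag, one part every 5 mm.
    path = [(0, 0), (10, 2), (20, 0), (30, -2), (40, 0)]
    for x, y, rot in place_along_path(path, 5.0):
        print(f"footprint at ({x:.2f}, {y:.2f}), rotated {rot:.1f} deg")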
CircuitPainter came out of an effort to automate the production of large numbers of very slightly different LED boards for sculptural work, where it was worth the effort to write code to generate the boards. I used KiCad as the backend specifically so that we could use our known footprints/solderpaste masks and get a DRC pass.
All these glasses have so many layers of abstraction I don't want between something I develop and the display.
Let me connect via Bluetooth directly to the glasses with anything and just tx/rx over a serial port and some low-level protocol to get pixels/text on the screen.
This is also the only way I'd be able to buy a pair and feel safe that it won't get bricked in two years when some company shuts its servers down and ends support.
AugmentOS is open source, so feel free to self-host, or even reference our communicator code in our repo to interact with the glasses directly via BLE.
This is an inferior means of development, however.
By going through AugmentOS, you get a much easier development experience, compatibility with multiple pairs of glasses (through both iOS and Android), and the ability to run multiple glasses apps simultaneously.
https://en.wikipedia.org/wiki/STS-32 brought back the Long Duration Exposure Facility experiment, a bigass science probe the size of a small school bus.
There are also still-classified missions that could have done so.
It was something the shuttle was designed to do; the 60-foot cargo bay requirement and the ability to bring back the mass it flew with came specifically from the military.
I've heard that one of the things they wanted the shuttle to do was launch, capture a spacecraft in polar orbit, and land at the launch site within a single orbit. Some say it was so they could secretly grab a Soviet satellite right out of the sky when it was out of range of Russia, but I'm not sure how you would secure something like that in the payload bay.
That's an urban legend. There were never any plans to capture a satellite in a single orbit. It was supposed to be capable of making an emergency landing after one orbit, but not while releasing or capturing a satellite. The 1950s Air Force X-20 Dyna-Soar was intended to launch a recon satellite and land in one orbit, but not recover a foreign satellite in one orbit.
As someone who travels and moves countries a lot, I have had an insane number of problems: banks wanting their app to 2FA before allowing transactions, the app not working, my phone dying just as I'm trying to pay my bill at a restaurant and my card getting declined, my ATM card also not working, and then the website locking me out and requiring me to... use the app to generate a 2FA code... from the phone... that just died... on and on.
I was once trying to update my address so I could get a new bank card shipped to me, as the old one had expired and I had just moved. I could not update my address beforehand because... I didn't know what my new address would be until I arrived in that country and my apartment hunt was successful. Once I tried to log in, I got locked out, and after trying everything I could to remember a special recovery code that I could not find written down anywhere, I called the bank and they said they would send me a new code by mail. But I couldn't get my code by mail because... I no longer lived at the address on file. They said sure, you just need to log in to update your address. Which I couldn't do. But they couldn't just send a code to a new address given over the phone; that wasn't secure.
I forget how I got out of that one.
I've basically checked out of all this at this point. I need to get off my ass and move my money better/differently because I've started to see a day where I can't prove who I am and I live somewhere where I can't walk into a physical ___location to do so. It's more than a bit scary.
To add: the new audio models (at least partially) use diffusion methods that are exactly the same methods used on images; the audio generation can be thought of as image generation of a spectrogram of an audio file.
For early experiments, people literally took Stable Diffusion and fine-tuned it on labelled spectrograms of music snippets, then used the fine-tuned model to generate new spectrogram images guided by text, and then turned those images back into audio by re-synthesizing the spectral image to a .wav.
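The spectrogram-to-audio leg of that round trip is easy to try yourself with librosa (this is only the re-synthesis step, not the diffusion part, and the phase loss through Griffin-Lim is audible, but it shows the idea):

    # Sketch: audio -> mel spectrogram "image" -> audio, via Griffin-Lim phase estimation.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("clip.wav", sr=22050)

    # Forward: the 2D array a diffusion model would treat as an image.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=256)
    mel_db = librosa.power_to_db(mel)  # log scale, roughly what gets rendered as pixels

    # Inverse: back from the "image" to a waveform (Griffin-Lim estimates the discarded phase).
    y_rebuilt = librosa.feature.inverse.mel_to_audio(
        librosa.db_to_power(mel_db), sr=sr, n_fft=2048, hop_length=512)

    sf.write("clip_resynth.wav", y_rebuilt, sr)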
The more advanced music generators out now, I believe, take more of a 'stems' approach and use a larger processing pipeline to increase fidelity and add vocal-tracking capability, but the underlying idea is the same.
Any adversarial attack that hides information in the spectrogram to fool the model into categorizing the track as something it is not is no different from the image adversarial attacks, for which mitigations have been found.
Various forms of filtering out inaudible spectral information, coupled with methods that destroy and re-synthesize/randomize phase information, would likely break this poisoning attack.
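A minimal sketch of that kind of "launder the audio" step: band-limit, keep only the STFT magnitude, and let the phase be re-estimated from scratch. The cutoff and STFT parameters here are arbitrary, and whether this defeats any particular poisoning scheme is untested:

    # Sketch: strip high-frequency content, then re-synthesize from STFT magnitude only,
    # destroying whatever information was hidden in the original phase.
    import librosa
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    y, sr = librosa.load("poisoned.wav", sr=44100)

    # 1) Low-pass to drop near-inaudible high-frequency payloads.
    sos = butter(8, 16000, btype="low", fs=sr, output="sos")
    y_filt = sosfilt(sos, y)

    # 2) Keep only the magnitude and let Griffin-Lim invent a fresh phase.
    magnitude = abs(librosa.stft(y_filt, n_fft=2048, hop_length=512))
    y_clean = librosa.griffinlim(magnitude, n_fft=2048, hop_length=512)

    sf.write("laundered.wav", y_clean, sr)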
I have a very long and maybe ill-formed in-person rant about medical costs in the USA and the cost of higher education over the last 30 years, and how these issues are n sides of the same n-sided coin, but it takes about 8 beers to get through and isn't something I ever have the nerve to put down in an HN comment.
I'd really like to be able to just answer the puzzle without answering any of the intermediate stages. It is a much more challenging feat to just hold it all in your head and then type out the one long answer than to answer the individual stages. It promotes some real mental modelling skills that way.
The game, played as-is, is almost no challenge at all. It just feels like busywork.
As soon as I saw "second rock from the", I knew the top-level answer was going to be "Venus". At first I felt frustrated that it wouldn't let me enter the solution, but if it had accepted that, I would have missed out on the rest of the puzzle. In retrospect, I felt like having to work backwards from "Venus" and "sun" to figure out the lower-level clues was much more interesting than if it had just let me skip those clues.
I'd really like to see this coupled with some SLAM techniques to essentially allow really accurate, long-range outdoor scene mapping with nothing but a cell phone.
A small panning video of a city street, right now, can generate a pretty damn accurate (for some use cases) point cloud, but the position accuracy falls off as you move any large distance away from the start point, due to the dead-reckoning drift that essentially happens here. But if you could pipe real GPS and synthesized heading (from gyros/accelerometers/magnetometers) from the phone the images were captured on into the transformer along with the images, it would instantly and greatly improve the resulting accuracy, since it would now have those camera parameters ground-truthed.
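As a back-of-the-envelope sketch of the fusion part, independent of any particular reconstruction model (the frame conventions and names here are my own assumptions), you can turn GPS fixes plus compass heading into rough per-frame camera pose priors in a local metric frame:

    # Sketch: GPS (lat, lon) + compass heading -> approximate camera-to-world pose
    # in a local east-north-up frame, usable as a per-frame prior for reconstruction.
    import math
    import numpy as np

    EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius; fine over short baselines

    def gps_to_local_en(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
        """Equirectangular approximation: error is far below GPS noise over a few hundred metres."""
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
        east = (lon - ref_lon) * math.cos(ref_lat) * EARTH_RADIUS_M
        north = (lat - ref_lat) * EARTH_RADIUS_M
        return east, north

    def pose_prior(lat, lon, heading_deg, ref_lat, ref_lon, height_m=1.6):
        """4x4 camera-to-world matrix; camera assumed level (x = right, y = forward, z = up)."""
        east, north = gps_to_local_en(lat, lon, ref_lat, ref_lon)
        yaw = math.radians(heading_deg)  # compass heading, clockwise from north
        rot = np.array([[ math.cos(yaw), math.sin(yaw), 0.0],
                        [-math.sin(yaw), math.cos(yaw), 0.0],
                        [ 0.0,           0.0,           1.0]])
        pose = np.eye(4)
        pose[:3, :3] = rot
        pose[:3, 3] = [east, north, height_m]
        return pose

    # Example: one frame a few metres from the reference fix, facing roughly east.
    ref = (37.7749, -122.4194)
    print(pose_prior(37.77492, -122.41935, 95.0, *ref))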
I think this technique could then come close to rivaling what you currently need a $3-10k LIDAR camera for. There are a lot of archival and architecture-study fields where absolute precision isn't as important as getting 'full' scans of an area without missing patches, and speed is a factor. Walking around with a LIDAR camera can really suck compared to just a phone, and this technique would have no problem with multiple people using multiple phones to generate the input.
I guess that is one way to get a premade, high-displacement, powerful enough voice coil in an easy to mount package at a normal speaker impedance. Genius.
I've referenced this paper many times here; it's easily in my top 10 papers I've ever read. It's one of those papers where, if you go into it blind, you have several "oh no f'king way" moments.
The interesting thing to me now is... that research is very much a product of the right time. The specific Xilinx FPGA he was using was incredibly simple by today's standards, and that is actually what allowed it to work so well. It was 5 V, and from what I remember, the binary bitstream to program it was either completely documented, or he was able to easily generate bitstreams by studying the output of the Xilinx router; in that era Xilinx had a manual PnR tool where you could physically draw how the blocks connected by hand if you wanted. All the blocks were the same and laid out physically how you'd expect. And the important part is that you couldn't brick the chip with an invalid bitstream. So if a generation made something wonky, it still configured the chip and ran, no harm done.
Most, if not all, modern FPGAs just cannot be programmed like this anymore. Randomly mutating a bitstream would, at best, produce an invalid binary that the chip simply won't load, or, at worst, brick it.
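For concreteness, "randomly mutating a bitstream" in that kind of evolutionary loop is nothing more exotic than flipping a few bits per child; what the old part provided, and modern parts don't, is that any result was still safe to load:

    # Sketch: the mutation step of a bitstream-level evolutionary run. On the FPGA used in
    # that work, any mutated bitstream still configured safely; on most modern parts an
    # arbitrary bitstream is rejected by the loader, or worse.
    import random

    def mutate(bitstream: bytes, n_flips: int = 4) -> bytes:
        """Return a copy of the bitstream with n_flips random bits toggled."""
        child = bytearray(bitstream)
        for _ in range(n_flips):
            i = random.randrange(len(child) * 8)
            child[i // 8] ^= 1 << (i % 8)
        return bytes(child)

    parent = bytes(random.getrandbits(8) for _ in range(1024))  # stand-in configuration
    child = mutate(parent)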