Ask HN: What is the most difficult tech/dev challenge you ever solved?
209 points by pauletienney on April 27, 2014 | 173 comments
I feel like I just make CRUD apps. That's fine, since they're useful for my customers. But they're not technical challenges. So please tell me about yours.



I got handed a custom MP3 encoder and asked if I could figure out why the output was too quiet. At the time I had essentially no DSP experience.

It seems the gain had been reduced to cover up another problem: a tonal hissing sound. Once I learned how polyphase filter banks work, I tracked the problem down to a premature optimization, namely the replacement of an integer divide by 2^n with a right shift.

Such a shift of a negative 2's complement integer rounds toward negative infinity instead of toward zero. This caused a slight DC bias within each sub-band filter. In all but the lowest band, this DC bias gets shifted up to a non-zero frequency.

I call the optimization above premature because fixing it only added one cycle per operation. Granted, this was a real-time MP3 encoder on an ARM7, but the cycles were there.
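The bug is easy to show in miniature. A toy C illustration (not the actual encoder code; this assumes the usual arithmetic right shift on two's complement hardware, which is technically implementation-defined):

    #include <stdio.h>

    int main(void) {
        /* Integer division truncates toward zero, but an arithmetic right
         * shift of a negative two's complement value floors toward
         * negative infinity -- so they disagree on negative odd inputs. */
        int x = -3;
        printf("%d / 2  = %d\n", x, x / 2);   /* -1 */
        printf("%d >> 1 = %d\n", x, x >> 1);  /* -2 */

        /* Summed over a symmetric signal, the shift version comes out
         * negative: the DC bias that got modulated up into the sub-bands. */
        long sum_div = 0, sum_shr = 0;
        for (int v = -8; v <= 8; v++) {
            sum_div += v / 2;   /* sums to 0  */
            sum_shr += v >> 1;  /* sums to -4 */
        }
        printf("div bias: %ld, shift bias: %ld\n", sum_div, sum_shr);
        return 0;
    }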


Oh, and the code comments were in Dutch, which I don't read.


That probably helped.


I'm guessing it was at Philips?


This was a small consumer electronics company no longer in existence. The encoder was written by a quite clever Dutch hacker, whom I very much enjoyed working with.


At a different gig I built a Linux x264 batch transcode cluster that accepted, among other formats, Apple ProRes. (QuickTime under Wine using Xvfb, through AviSynth, piping a yuv4mpeg stream out of the Wine ___domain.)

At yet another gig I developed a method to differentiate male and female insects at 30m using a beam-steered, high-speed, low-res camera.


Would that be the mosquito laser?

https://www.youtube.com/watch?v=eYXPqrXZ1eU


I was on the first Tiger Woods PGA team for the first PlayStation (the one that had the first South Park episode hidden on the disc). The PGA source was legacy code, having been ported and rewritten for every console to date. It was a serious rat's nest, with the compiled code far too large to fit into memory, so EA had developed their own code segment loader to enable their too-large-to-fit executables to run on the consoles. I was put in charge of the "menu front end", the statistics tracking, and some of the AI logic. It took over a week just to read the logic and figure out WTF was going on.

The designs I had to implement were simply not possible with the existing framework, so I started over. I wrote a series of small finite state machines, fully documented their use in the source code, and then replaced the entirety of all the portions of the source code I was in charge of with my minuscule finite state machines and their paltry data. The segment loader was no longer needed for the front end, because I'd left 800K free (on a 1 MB system!). I spoke with a Tiger Woods PGA developer about a decade later, and my code was still there being used. And a year later, EA had me do the same thing to the AIs for NCAA Football, where I finite-state-machined their AIs, clobbering the memory required down to about a sixth of what it was previously.
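The shape of such a machine is tiny: one state variable plus a transition table. A sketch in C (illustrative names only, not the EA code):

    #include <stdio.h>

    /* A menu front end as a table-driven finite state machine: the whole
     * behavior lives in one state variable and a small table of data,
     * instead of a pile of interacting flags. */
    typedef enum { MENU_MAIN, MENU_STATS, MENU_OPTIONS, MENU_EXIT, N_STATES } State;
    typedef enum { EV_UP, EV_DOWN, EV_SELECT, EV_BACK, N_EVENTS } Event;

    /* next[state][event]: the complete navigation logic, as data */
    static const State next[N_STATES][N_EVENTS] = {
        /*                UP           DOWN          SELECT        BACK      */
        [MENU_MAIN]    = { MENU_MAIN,   MENU_STATS,   MENU_MAIN,    MENU_EXIT },
        [MENU_STATS]   = { MENU_MAIN,   MENU_OPTIONS, MENU_STATS,   MENU_MAIN },
        [MENU_OPTIONS] = { MENU_STATS,  MENU_OPTIONS, MENU_OPTIONS, MENU_MAIN },
        [MENU_EXIT]    = { MENU_EXIT,   MENU_EXIT,    MENU_EXIT,    MENU_EXIT },
    };

    int main(void) {
        static const char *name[] = { "MAIN", "STATS", "OPTIONS", "EXIT" };
        State s = MENU_MAIN;
        Event script[] = { EV_DOWN, EV_DOWN, EV_BACK, EV_BACK };
        for (int i = 0; i < 4; i++) {
            s = next[s][script[i]];   /* one table lookup per input event */
            printf("-> %s\n", name[s]);
        }
        return 0;
    }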


Thanks for the story. I would be very interested to know how game "AI" is built. Is it very scripted? Is it more organic?


Normally very scripted, but getting better recently.


Good job. I'm astonished at how just about every developer I have run across doesn't understand how to program with state machines, opting instead for a horrid rat's nest of millions (it feels like) of state variables, functions that modify different but overlapping sets of those variables, etc.


Interesting...would you be able to give an example of the state machines you built and how you used them?


Well, "difficult" usually goes away when you have lots of time, so I have to add ship pressure; given enough time, hard goes away.

Making the Newton data stores stable, about three months before we shipped. The Newton had a flash-based object store instead of a file system, and the code was such that a power loss or a reset in the middle of an update would toast your data. I spent about six 100+ hour weeks writing a transaction system to make sure that users wouldn't lose data to a crash or a dead battery. I think I had to fix only a couple of bugs in that code, after a massive checkin.

Then, making the audio pipeline for the Kinect stable enough for noise cancellation to work. I'd heard that doing isochronous audio was hard, and "yeah, yeah, sure," but I had no idea it was really hard until I'd shipped a system that used it, with tight constraints around latency variance. I worked with some really good people on this, and there were days when we were looking at Xbox hypervisor code, application code and even DMA traffic on the camera itself. Another three or four 100+ hour weeks, maybe three weeks before we shipped, changing scary stuff everywhere. I still remember a satisfying feeling when I discovered the exact buffer we needed to use as a clock root for audio (and it wasn't the obvious one).


And while you were doing that, I was finishing the indexed object store on top of that transactional layer. I spent at least a week starting the stress test, going to bed, waking up, reading the test log and fixing bugs, starting the test, going to bed... (Shouldn't have started with someone else's half-baked B-tree code!)

Somehow the Newton team managed to do a whole stack of things like that in a ridiculously short time, and then shipped them in read-only memory -- not Flash, kids, but for-keeps permanently masked ROM -- and it worked. I've never seen anything like it since.


That was fun. Let's do it again :-)

[That same B-tree code was powering our source code control system, IIRC. Every time we hit the next power of two on the database size something else would explode. Wheee.]

Patching the mask ROMs was fun. Nope, you can't change those things, but we had a bunch of bugs to fix between the time the ROMs taped out and when we actually shipped. We couldn't afford a full ROM jump table (the newt had 4MB of ROM, 512K of RAM, and we ran the heaps mostly full and had essentially no memory left), so we randomized the jump table and played aliasing games with page mapping to get our patch budget down to 20K or so. It helped to have really good engineers -- people who really got into the swing of twisted, devious code -- write the patches. I wrote a couple of fixes, and had done a pretty good job, I thought, then handed them to Andy S., who always came back with something about five instructions long that worked better and maybe fixed another bug too, with a bonus cackle of maniacal laughter.

When I started, Newton was an expensive, researchy pipe-dream. A year later we'd done a complete reset and shipped a product. I don't know how we did that.


Made an autonomous vehicle out of a VW Golf/Rabbit, in around 2 weeks.

Custom built hardware, including the actuators.

Custom built RTOS on micro nodes.

Driving via OpenCV: Hough transforms for lane detection, stereo vision and optical flow for obstacles, SURF etc. for traffic signs (don't ask, I was learning), on a stack of 2 laptops connected via gigabit ethernet.

Nicest thing: I got the models trained mostly without moving the car, by pumping the framebuffer of two car/racing games, rFactor and GTA3, through glc to OpenCV and controlling the games via uinput to make a virtual city to drive in.

Don't have a lot of pics, here's some HW:

http://imgur.com/9cfzbMv (yes, that's an old cordless drill and an angle grinder head with a bespoke bike chain :) )

http://imgur.com/5x9T9gi (piston is weirdly offset so I could still put my foot on the brake in emergencies, like that one time I smashed it into a fence...)

http://imgur.com/Ig7MaLT (notice how I kept the costs down to almost nothing, in this case the air distributor for the brakes made with Meccano and old air valves and a geared motor, since I didn't have the funds to buy anything)


This is awesome. That would be so cool if you described the entire process in a blog post :)


That's exactly what I'm thinking, but it will take 1-2 months; right now I'm 1400 km away from that stuff... Stay tuned, I guess.


I think it was around 1992/93, I was deep into graphics programming for games. The conventional way was to first draw the background and then draw everything on top of it (i.e. sprites etc.). However, this is wasting a lot of bandwidth since you can end up writing to the same video memory ___location multiple times.

I came up with an algorithm treating each scan line of the screen as a binary tree, which allowed me to keep track of which part of each scan line had already been written to. That meant I was able to build the screen up from front to back and visit each video memory ___location only once, so on a 320x200 screen only ever 64000 bytes would be written to memory. With all the clipping etc. this was quite a complex beast, fully written in 286 assembler. In the end I think it made the overall graphics rendering about 20% to 25% faster.
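The idea in miniature (a toy C sketch of the span-tracking part; the original kept the spans in a binary tree per scan line and was all assembler):

    #include <stdio.h>

    /* Per scan line, keep a list of still-empty spans [x0, x1). Drawing
     * front to back, each new span only fills the parts that are still
     * empty, so every pixel is written exactly once. No overflow checks:
     * this is a toy. */
    #define MAX_SPANS 64
    typedef struct { int x0, x1; } Span;
    typedef struct { Span s[MAX_SPANS]; int n; } Line;

    /* Clip [x0,x1) against the empty spans, "draw" the visible parts,
     * and remove them from the empty list. */
    static void draw_span(Line *ln, int x0, int x1, int color) {
        for (int i = 0; i < ln->n; i++) {
            int a = ln->s[i].x0 > x0 ? ln->s[i].x0 : x0;
            int b = ln->s[i].x1 < x1 ? ln->s[i].x1 : x1;
            if (a >= b) continue;                   /* no overlap */
            printf("write x=[%d,%d) color=%d\n", a, b, color);
            if (a == ln->s[i].x0 && b == ln->s[i].x1) {
                ln->s[i--] = ln->s[--ln->n];        /* span fully consumed */
            } else if (a == ln->s[i].x0) {
                ln->s[i].x0 = b;                    /* trim left edge */
            } else if (b == ln->s[i].x1) {
                ln->s[i].x1 = a;                    /* trim right edge */
            } else {                                /* split into two */
                ln->s[ln->n++] = (Span){ b, ln->s[i].x1 };
                ln->s[i].x1 = a;
            }
        }
    }

    int main(void) {
        Line ln = { .s = { { 0, 320 } }, .n = 1 };  /* one empty scan line */
        draw_span(&ln, 100, 200, 1);  /* nearest polygon: all of it drawn  */
        draw_span(&ln, 50, 150, 2);   /* farther one: only [50,100) drawn */
        return 0;
    }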

Edit: I don't have the code anymore. I lost all my "floppy disks" in a house move... :(


I think Quake did something similar. http://www.bluesnews.com/abrash/chap67.shtml


This sounds like an interview question :)

In my case I solved a problem in genomics that people had been trying to solve for around 20 years and which would have saved the human genome project billions of dollars - the only problem is I did it 10 years too late :(

Edit: If anyone is interested, I published the method in BMC Genomics a few years ago: http://www.biomedcentral.com/1471-2164/10/344


Here is a big kudo : )


Thanks. I have always wondered why nobody else came up with the same solution or even something similar. The idea used nothing that had not been around since the 1970s so it should have been invented by someone else long before I had the idea.

For the non-molecular biologists I should explain how it would have saved billions. The human genome was effectively sequenced by breaking it into millions of random fragments of around 1000 bases (letters) and determining the sequence (order) of the bases in each fragment. In order to be able to put the fragments back together in the right order, each base was sequenced 10-15 times in different fragments. My idea allowed you to avoid all this redundancy, meaning you had to sequence each base only once or twice. I did some simulations based on the actual costs of the human genome project, and it would have saved 80% of the costs and finished it 3 years earlier.


Had a VoIP network with TB+ of SIP packets each day. Customers demanded resolution of problems from days ago, so having a time-travelling, content-aware PCAP was necessary. At 50K packets/sec, piping tshark into MySQL with 11 indexes simply wouldn't cut it. We spent $$$$$$ on a commercial system that didn't work so well.

I slowly reinvented the basics of an information retrieval system (the curse of not having taken CS classes). Came up with the idea of a log-structured merge tree, made easier by this being a write-once database. Got some inspiration from the original Google paper. But most of it was just figuring out the least number of actions needed to retrieve info.

I published the core DB part, which maps an int64 (index hash value) to an int64 (docId) and stores it in an efficient format (on our data, ~2.3 bits per packet): http://github.com/michaelgg/cidb - I couldn't find an existing library with zero/low per-record overhead.

On a Q6600 and a single 7200RPM platter, I was able to index a TB of SIP a day and provide fairly quick flow reconstructions going back as far as disk space allowed. On a quad-core i7 parsing+indexing was over 1Gbps.

Company impact was huge, because we could suddenly troubleshoot things in minutes instead of hours. A few years later, after I was gone, I heard they were still using it. Neat.

This was all in F#, which presented fun challenges regarding optimization. Lots of unsafe code and manual memory management. SSE would improve varint encoding - the CLR-generated code is a joke in comparison.
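For flavor, the classic base-128 varint that this kind of compact index leans on looks like this in C (a generic sketch of the technique, not the actual cidb format):

    #include <stdint.h>
    #include <stdio.h>

    /* LEB128-style varint: 7 payload bits per byte, high bit set while
     * more bytes follow. Small deltas between sorted values then take
     * 1-2 bytes instead of a full 8. */
    static size_t varint_encode(uint64_t v, uint8_t *out) {
        size_t n = 0;
        while (v >= 0x80) {
            out[n++] = (uint8_t)(v | 0x80);  /* low 7 bits + continuation */
            v >>= 7;
        }
        out[n++] = (uint8_t)v;               /* final byte, high bit clear */
        return n;
    }

    int main(void) {
        uint8_t buf[10];
        size_t n = varint_encode(300, buf);  /* encodes as 0xAC 0x02 */
        for (size_t i = 0; i < n; i++) printf("%02x ", buf[i]);
        printf("(%zu bytes)\n", n);
        return 0;
    }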

Last month, I dropped this lib in as a replacement for storing routing information in a telecom app and dropped RAM requirements from 6GB to 1GB.

On the downside, I'm sure any compsci student could build a similar thing in a week, and probably they do so for school projects. But to a lot of app-level developers, this kind of algorithmic work is sorta magic for some reason.


Sounds like a great tool -- I would certainly have used something like this during my telecom days. In the end, did you still use tshark to pipe input data to your database?


No, I wrote a fully custom SIP parser and database format. If I was doing it again, I'd look at stealing a SIP parser from another project. Scale out and don't worry about every CPU cycle.

I thought about commercializing it but interest seemed lukewarm.


But how did you capture the actual bytes on the network (that you would later parse as SIP)?


libpcap. For my first pass I just set up a mirror (SPAN) port from a core switch. libpcap has efficient filters, so non-SIP (non-port-5060) traffic was just dropped.
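The capture side of a first pass like that is only a few lines of C with libpcap (a sketch; the device name and snaplen here are assumptions):

    #include <pcap.h>
    #include <stdio.h>

    /* Minimal SPAN-port capture with a BPF filter, so non-SIP traffic is
     * dropped before it ever reaches userspace. */
    static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes) {
        (void)user; (void)bytes;
        printf("captured %u bytes\n", h->caplen); /* hand off to the SIP parser */
    }

    int main(void) {
        char err[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("eth1", 2048, 1 /* promisc */, 100, err);
        if (!p) { fprintf(stderr, "%s\n", err); return 1; }

        struct bpf_program prog;
        /* compile and install the filter: only port 5060 gets through */
        if (pcap_compile(p, &prog, "port 5060", 1, PCAP_NETMASK_UNKNOWN) == 0)
            pcap_setfilter(p, &prog);

        pcap_loop(p, -1, on_packet, NULL);  /* run until interrupted */
        pcap_close(p);
        return 0;
    }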

Looking forward: Intel 10G NICs have a little processor in them, and there's a library called DNA (Direct NIC Access) that bypasses kernel mode for packet capture/transmit. It makes things super efficient and makes line-rate 10G apps possible on commodity hardware.

Those processors can also distribute packets to specific cores, which helps solve the next problem of scaling the capture part beyond a single core. Looking beyond 10G to 40G and beyond, I'd imagine something similar to EtherChannelling would be a cheap way to leverage existing hardware into breaking up the load. Unfortunately a low-level "dumb" load balancing system will split up dialogs, so your indexing and compression get a bit less efficient.

One of my semi-open research projects is to do line-rate SIP DDoS protection. So far, no one (not commercially, not academically) has any magic bullet and it would appear that almost every SIP network out there is trivially DoS-able.

State-of-the-art is along the lines of "we push a whitelist into our switches" (which, actually, technically doesn't even work with SIP since multiple IPs can get involved with the signalling, IPs unrelated to original hosts - although I've only seen this once or twice in production).

As far as I can tell, it's mainly an engineering problem - just do the grunt work of writing smart code. But there's no market for it until VoIP providers start suffering real serious pain - until then, I don't think telecom cares about security, DoS or otherwise.


I'm a senior compsci student at one of the largest (by # of students) universities in the nation. I assure you, most of my classmates could not do such a thing in a week!


- At age 12, I taught myself C programming using [1] (I had never programmed before, but wanted to make games), then proceeded to write a 12,000 line 3D OpenGL Game Engine from scratch, using the NeHe tutorials [2] as guides. It took me three years. The final program ran on Windows XP and 98, could import 3D models from Autodesk 3D Studio, and had a 3D asteroids style game demo. I used Milkshape [3] for 3D modelling, and Dev-C++ as my IDE [4].

- For an AI course at University, my partner and I developed a custom motion planning algorithm involving neural networks, RRTs [5] and POMDPs [6] in several thousand lines of Java. That was some of the craziest (and most fun) programming I've ever done. Our lecturer was Hanna Kurniawati [7], who is world famous (for some value of 'world') for her work on POMDPs, which was really cool.

[1] http://www.cprogramming.com/tutorial.html

[2] http://nehe.gamedev.net/

[3] http://chumbalum.swissquake.ch/

[4] http://www.bloodshed.net/devcpp.html

[5] http://en.wikipedia.org/wiki/Rapidly_exploring_random_tree

[6] http://en.wikipedia.org/wiki/Partially_observable_Markov_dec...

[7] http://robotics.itee.uq.edu.au/~hannakur/dokuwiki/doku.php?i...


Lol, funnily enough, I did pretty much the exact same thing regarding learning C at age 12, and ended up writing an OpenGL game for the PlayStation Portable, which I released for the Neoflash competition:

http://www.neoflash.com/forum/index.php?topic=4924.0

https://www.youtube.com/watch?v=wAF9o8dsHfA

Since then, programming games has always been my obsession, and just a few months ago I fulfilled my lifelong dream and got a job as a gameplay programmer at Ubisoft.


I did the same! https://www.youtube.com/watch?v=l9yyLlstPtE was my first C program, unfortunately I don't have a job at Ubisoft, I didn't continue with game dev but went for web.


Wow, the world is super small sometimes! I never thought I would meet another person who participated in those competitions ;-)


That's awesome! And congratulations on the dream job!


I made a bunch of LCD nametags for a project a while back[1], and ran into the strangest issue. Occasionally, the display would fail to start up and would display an all-blue screen.

My initial hunch was that it was a timing issue, and that I was seeing different behavior based on temperature. Even when I made things super slow to exclude timing, the performance was inconsistent. Next on the list was excluding race conditions (maybe I'm not resetting in the right order, and getting lucky?)

At some point it was 8am after an all-nighter of debugging this, and I hadn't been able to reproduce the bug. Lo and behold, the moment I give up, the problem starts occurring, just as I open the blinds to get some sun.

Turns out that the display driver had light sensitivity issues. Since it was a cheap display[2], the backside of the driver IC was exposed (the epoxy fill didn't encapsulate it all the way, just the edges; you can see it as the white strip in the Digi-Key picture).

Putting a piece of tape over the IC solved the issue, and I didn't run into problems with the display again.

[1] PCB (business card sized): https://bitbucket.org/cyanoacry/ditch_day/src/3bf75f6bd2fba1...

Hardware picture: http://www.albertgural.com/blog/caltech-ditch-day-2013/image...

[2] http://www.digikey.com/product-detail/en/COG-C144MVGI-08/153...


Interesting. A while ago I designed an OLED display shield for Arduino. Those displays also have a controller IC bonded to the flexible PCB and encased in transparent epoxy. I never noticed any sensitivity to light.

http://www.tablix.org/~avian/blog/articles/arduino_oled_shie...

I do remember, however, that they were pretty picky regarding the reset sequence (something the datasheet warned about several times).


I've done quite a bit of stuff on the hardware/software boundary where things get hairy. I think the nastiest thing I've debugged there was a machine that HP [1] was making in about 1994 which had some native HP bus and an EISA bus hanging off it for expansion cards from PCs.

I was working for a company that made NICs which went into the EISA bus and we were seeing data corruption in this machine.

After a long, cold night in the Apollo works, myself and an engineer from HP tracked it down to a timing problem on the EISA bus where the 16 bits being sent were arriving in two 8-bit chunks, slightly delayed. Our NIC was spinning on a 16-bit word, looking for a change in the top 8 bits as a signal that the data had arrived. We'd then read all 16 bits, but 8 of them weren't ready yet.

Luckily, HP made lots of test equipment and getting a logic analyzer with 32 inputs, an in circuit emulator for the CPU, and some logic probes was easy...

[1] I think it was an HP 9000 http://en.wikipedia.org/wiki/HP_9000 being made at the old Apollo works outside Boston.


Had many terabytes of footage of events that couldn't be recreated, shot with cameras that turned out to have broken firmware. When reading out the sensor, the ADC would get out of sync with the shift register, resulting in adjacent pixels merging into each other or being skipped, in a different pattern on each frame. This resulted in appalling pictures. I figured out how to re-bayer the image, then, using some frequency-___domain magic on a picture interpolated from the green photosites only, determined the shifting pattern in various areas of the chip. We could then automatically gain individual pixels up and down and remove the effect, resulting in a perfect image. I went and saw the finished film at the biggest cinema I could find and didn't spot a single pixel wrong. Dread to think what might have happened if I'd tried to fix it in a more traditional "just paint it out" manner...


Writing distributed and highly-concurrent software in Erlang to scrape Google's search results, with localization, tens of millions of times per-day.

I don't work on that stuff anymore, but it definitely challenged my problem solving abilities to:

A) Learn Erlang. B) Learn how to write solid distributed software in Erlang. C) Figure out how to work around Google's temp-banning policies using IP balancing, captcha solving to produce cookies to balance connections on, and also how to localize the searches for geographic accuracy.

I don't like working on projects that are actively pitting me against someone, though, so I'm happy to not be working on that. I now write scalable software for my energy-focused startup; we receive energy data from homes in near-real time, which has its own challenges.


Sounds very challenging, yet also very black-hattish on a massive scale (not that I have much love for Google).

That is the problem: many of the most interesting projects have some moral ambiguity (military, financial, etc.).

While one is getting paid, it is very easy to justify, or not even think about, where the money comes from (cue Sinclair quote).


Not quite black-hat; I consider tiered link building to be more blackhat.

Scraping the search results was about getting data on where items were positioned in the index, not pumping spam into the search results. Although to be clear, I did help do stuff like that for a while too but it never felt good and we vastly preferred the rank tracking product to link building services.

Also, Google's getting extremely good at combating search spam.


> I don't like working on projects that are actively pitting me against someone though.

I'm surprised you didn't also mention the ethical problems with trying to take something (search results) without permission.

Your new project sounds awesome, though. : )


Google doesn't exactly tie itself in knots over the fact that they built a few hundred billion dollars on top of "We're going to crawl the entire Internet and datamine it for our own purposes. Nobody will agree to this, so we won't ask them. Instead, we will offer easy ways to opt out after having achieved hegemonic control of Internet navigation."


Though to be fair, that's how all the pre-Google search engines worked too.


There have been easy ways to opt out since day one. Google has always respected robots.txt.


Meh. I didn't have any qualms with that, per se. As others have pointed out, I think there are ethical concerns with what Google is doing itself.

Also, it's indicative of something if people are building businesses to scrape data that could be offered as a paid API that many people would gladly pay lots of money for. The arguments for NOT doing that are ridiculous, because people will get the data regardless.


Implementing the Solar Thermal calculations from "The Government’s Standard Assessment Procedure for Energy Rating of Dwellings - 2012"

http://imgur.com/a/LiCxU (a tiny part; the full thing is 172 pages and I needed about 35 of them).

I'm developing software to help the MCS-accredited renewable installers in the UK. I planned to buy in the API that does the calcs, so I duly purchased it and did some quick testing... got the documentation and, uh oh, this doesn't match the real thing! I rang them up: "oh yeah, we are getting out of doing the API as our competitors are using it against us".

Oh shit.

I'm not a mathematician (I got a B in my GCSE Maths, for Christ's sake) and now I have to implement code that works out the solar irradiation using tilt, latitude, solar_declination, a dozen look-up tables and some hairy trigonometry.

Stuff that looks like this:

    A = k1 × sin^3(p/2) + k2 × sin^2(p/2) + k3 × sin(p/2)
    B = k4 × sin^3(p/2) + k5 × sin^2(p/2) + k6 × sin(p/2)
    C = k7 × sin^3(p/2) + k8 × sin^2(p/2) + k9 × sin(p/2) + 1
Quite frankly, were it not for the IPython notebook allowing me to convert the math to Python and play with it to figure out what was going on, I don't know if I could have figured it out.
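The polynomials themselves are mechanical once untangled; in C, with placeholder k values (the real ones come from the SAP look-up tables, which aren't reproduced here), it's roughly:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* k1..k9 come from a SAP look-up table; these are PLACEHOLDERS,
         * not the real coefficients. p is the collector tilt angle. */
        double k1 = 0.1, k2 = -0.2, k3 = 0.3;
        double k4 = 0.1, k5 = -0.1, k6 = 0.2;
        double k7 = 0.05, k8 = -0.05, k9 = 0.1;

        double p = 30.0 * 3.14159265358979 / 180.0;  /* tilt in radians */
        double s = sin(p / 2.0), s2 = s * s, s3 = s2 * s;

        double A = k1 * s3 + k2 * s2 + k3 * s;
        double B = k4 * s3 + k5 * s2 + k6 * s;
        double C = k7 * s3 + k8 * s2 + k9 * s + 1.0;
        printf("A=%f B=%f C=%f\n", A, B, C);
        return 0;
    }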

As it was it took me two weeks to implement (about the most stressful two weeks of my life).

I'd love to post the code, but it represents a significant amount of work, and there are competitors in the market; it would be a big hand up to at least one I know of.


> I'm not a mathematician (I got a B in my GCSE Maths, for Christ's sake) and now I have to implement code that works out the solar irradiation using tilt, latitude, solar_declination, a dozen look-up tables and some hairy trigonometry.

Same here, I got a B at GCSE maths. When I was 23 I enrolled for the maths A-level at the local college, and I got an A. 16 is too young to assess someone's potential.


When doing computer vision, the first thing done is usually camera calibration. The most common calibration technique is to take images of a physical object with known spatial properties (grid patterns are often used) and to extract 2D point positions from the resulting images (e.g. using a corner detector). Given the match between the extracted 2D points and the corresponding locations on the calibration grid, it is possible to determine camera and lens parameters such as focal length and distortion.

An in-house camera calibration application at my company solved the correspondence problem by using an algorithm that looked at properties that are not invariant under projection (i.e. angles and distances). This made the calibration process extremely fragile. The algorithm often failed to detect the calibration grid even when the image was crystal clear, which made calibrating a camera a very frustrating endeavour.

Since it was an in-house application, there was never much priority on fixing it. Eventually I got so annoyed that I wrote a whole new real-time algorithm for the detection of arbitrarily-sized grids from a set of 2D points. The algorithm is capable of dealing with a significant number of outliers (even when they fall between the valid grid points), it can handle missing grid points, and it is not affected by perspective or non-linear lens distortion. In the end it took me longer than I had hoped, but the resulting algorithm is one that still makes me proud.


First, to be clear: CRUD makes the world go around! I always find it interesting when people in our industry somehow look down on work that makes users happy and gets us all paid so we can do other things. :-)

I've been in tech for more than 30 years. In that time I've done so many things it's hard to pick just a couple of interesting ones. Some of the fun ones are really old, but a couple are very recent:

* In the good old days HDs would die and people would bring them to me hoping I could revive them. I've swapped logic boards with working HDs to recover data. One disk was soaked by a fire sprinkler system in a small business' office. I took it apart and rinsed it with distilled water, cleaned up a bunch of parts by hand, swapped logic boards and applied lubricants to various parts, and was able to spin it up just long enough to back up the customer's data. Don't try this at home with modern drives; get a recovery team to help you if the customer can afford the cost.

* In 1988 I was approached by a military contractor with a GPS board they had built that needed a device driver for SCO Unix. They wanted 50ns realtime responses, and I had to keep explaining to them that SCO Unix was not an RT OS, but they were cornered into the OS by contract and had no choice. So I pushed on them: "why this strict timing?", and after many signed documents and stuff I don't really want to know, I showed them we could build a device driver interface that allowed them to achieve their needed result. After many tests it turned out we were able to beat their timing requirements. The contractor was very happy; I went as far as making it a SCO install disk etc., so they were able to make the install part of their build process. The strange part was that two years later I got a call from the contractor. They were in a panic because the driver didn't work with the latest version of SCO and they had to "urgently deploy a lot of these things" into an undisclosed "middle eastern territory". I told the guy I was really busy (I was); even after that, the next day he showed up at my house (literally my house) with a machine in his car and a blank check and begged me. He said his job and a lot of others were on the line and they didn't have time for someone else to come up to speed and make the adjustments. I caved in and updated the drivers overnight; it wasn't too bad, just some changes in the kernel interfaces, and SCO had made some dumb changes to the way installs worked etc. I gave him the results the next day late in the evening and he thanked me and drove off into the sunset. I was never told exactly what they were used for... I've always wondered. And no, I didn't burn them; I charged my regular rate, but I did work about 18 hours on it in a 24-hour period.

* In recent years I've taken to building low-latency, high-scale systems. One such system must respond to upwards of 1.6 million queries per second across six data centers, and the response must be received by the requestor within 10ms. The reality is that with jitter, even on local networks, you really have about 7ms to respond. I wrote this system in Python, Java, C, C++ and Nginx/LuaJIT. Each time I re-implemented the same solution in new tech, with twists to leverage the strengths of the underlying tech (long-lived objects in Java to avoid GC, Cython, etc.), and my best implementation ended up being Nginx/LuaJIT. I was able to get about 15k qps/core using this configuration and it was rock stable, running for weeks without needing a reboot. The best part is I've been able to publish the system internally with all the system settings (lots of network tweaks) and a script, so others can deploy it and do their own testing. Previously everything was C with libevent, and it's just painful trying to get a large pool of people up to speed on using that for their projects. Most recently I've re-written this system in Go and am working through some crazy performance issues there. I can't seem to get Go's scheduler to react as quickly as Nginx, and oftentimes it seems to latch on to just four CPUs even though GOMAXPROCS is set to 8 or more.

I could go on for pages about all the other things I've done, but the underlying thing about them all is problem solving at a level of detail where most people give up. I often say I'm not very smart; I'm just really persistent. I'm willing to change just one thing, retest, tweak another, retest, and onward until the problem starts to present itself. I often find people give up too soon; they think something is impossible or they're scared of how much time it will take to find the solution. For me, if I see forward progress and I have the intuition that what I'm doing will work, I keep pushing until I can prove or disprove my intuition. I think the sign of a good technologist is less about how super smart they are and more about how they approach solving real-world problems. I find it annoying when someone tries to get me to do some puzzle for an interview, or other thought experiments that have little basis in reality. That said, if someone asks me to solve a real-world problem in an interview, I'll jump up to the whiteboard and tear it up with great passion.

Don't belittle yourself because you're doing CRUD to pay the bills. Instead, challenge yourself to do more when someone isn't paying the bill. All my life I've worked and played with technology, and most people cannot tell when I'm working or playing. I am always pushing to learn new things. As the example above shows, the system I built is working fine, so why do Go? Why not? In 30 more years all the languages will change and all the tech will be different, but the problems will be related to today's problems, and the more you learn to stretch your mind and solve problems with many different approaches, the more valuable you will be in the complex future that is coming at us every single day.


I loved reading these stories, and your writing style is excellent! Also, your approach to problem solving is a great reminder of how persistence is the underlying key. Please do write more if you can, it was such a pleasure to read. I'd pay $10 for a book full of your stories - exactly how you wrote it above.


Me too! You sound like a great guy to work with!


These are the types of experiences I like to read about, you should blog about them in depth some time.


It's hard to make the time to blog when I am actually doing stuff that fires me up.

Maybe I need a ghost writer who can shadow me on projects and then write them up for us both. LOL


A Watson of your very own.


+1 for your awesome stories. Another +1 for your defense of CRUD.

Too many people look down on those doing the "grunt" work, despite the fact that it is usually necessary.


Your mention of nginx/luajit piqued my interest as we've developed a system based on that, coincidentally with those same 7ms/10ms constraints.

I was just checking your profile for an email address when I saw your employer on your LinkedIn profile and suddenly it all makes sense ;)


I have shared this implementation with a number of our clients; not sure if your team was one of them or figured it out based on other inputs.

Some of the big learnings were sysctl settings and other OS-level tweaks.

My primary goal was to have customer reference implementations in many different languages. That said, the more I do this, the less convinced I am that there will be a broad range of tools that can reliably handle the constraints.


>> Some of the big learnings were sysctl settings and other OS-level tweaks.

I would love to know more about this - care to write up an article or something?


Wow, 50ns GPS response in 1988!?! What was the bus used for communication? I am surprised the jitter alone did not kill the 50ns requirement.


I got the impression that he steered them away from their stated need of 50ms, and determined their actual need, which he was able to meet.


50ms = 50000000ns


Right, thanks for catching the typo.


No typo, it was 50ns; read the other parts of this thread.


Even now, crossing PCIe3 would cost you ~250ns.


They needed to fire the pulse on the edge of the card's port within 50ns of a pre-determined GPS time. However, they originally wanted this pulse to originate in the kernel. I told them that would never happen on time; between bus jitter and kernel scheduling we could never pull it off.

I dug deeper and found that the application could figure out, 5ms in advance, the GPS time code of when it needed to fire. With that in mind, what they really needed was for the firmware to have a time trigger that was programmable through the device driver from the application. That way the dedicated board would fire the pulse exactly when the application wanted it fired.

Basically I turned their logic on its head and that fixed the problem. The firmware guys and I met and we developed a protocol; a couple of days later they gave me a new board, and I had already written the driver changes and a sample client.
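The shape of the interface, sketched in C (every name here is hypothetical; the real thing was a SCO Unix driver talking to custom board firmware):

    #include <stdio.h>
    #include <stdint.h>

    /* Sketch of the "arm the hardware, don't fire from the kernel" idea. */
    typedef struct {
        uint64_t fire_at_ns;   /* absolute GPS time code for the pulse */
    } trigger_req;

    /* Stand-in for an ioctl() into the driver, which forwards the request
     * to the board firmware; the board fires the pulse itself, so bus
     * jitter and kernel scheduling are out of the timing path. */
    static int board_arm_trigger(const trigger_req *req) {
        printf("armed: fire at %llu ns\n", (unsigned long long)req->fire_at_ns);
        return 0;
    }

    int main(void) {
        uint64_t now_ns = 1000000000ULL;  /* pretend current GPS time */
        /* The application knows ~5ms in advance when it must fire, which
         * is plenty of time to program the hardware trigger. */
        trigger_req req = { .fire_at_ns = now_ns + 5000000ULL };
        return board_arm_trigger(&req);
    }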

I guess the underlying point of the story is just because someone asks you to do something impossible doesn't mean the problem isn't solvable. They may be coming at you with incomplete information and you need to push to better understand the problem. Once we were all on the same page it took a firmware rev and some small changes to their application and it was a success.


I was going to post some of my stories but they simply pale in comparison to yours.

I agree about the writing style also.. have you heard of Leanpub? Pretty sure you could make a small book that would be tons of fun to read.


Yes! Excellent, not the least for noting that CRUD does make the world go around. It's been done a million times but that doesn't mean there's not still room for significant innovation.


You meant SCO Xenix right?


Of course but nobody really remembers Xenix. :-(

I get tired of explaining Xenix to people so I guess I just defaulted to SCO Unix in this story.

My favorite Xenix tidbit:

"Microsoft, which expected that Unix would be its operating system of the future when personal computers became powerful enough,[3] purchased a license for Version 7 Unix from AT&T in 1978" - http://en.wikipedia.org/wiki/Xenix

I used to laugh every time I'd boot a Xenix machine and see the MS Copyright.

-k


50ns in 1988?


A few years ago I had to create a nearest neighbour lookup algorithm that had to perform 2D searches in a microsecond with 16 million points (k-d trees and the like didn't cut it).

I spent a lot of time reading books and papers on computational geometry. I had an idea that involved a few minutes of precomputing things, and eventually came across a useful algorithm in a paper that let me implement this as I envisaged. In the end, everything worked perfectly. It was very satisfying.


Do you happen to remember the paper? If so, would you link to the PDF? It sounds really cool!


The paper was only an algorithm for part of this — I can't remember the paper, but it gave an algorithm to find all of the closest points to a line.

How my algorithm worked was to break the search space into a 2D array of much smaller squares. In the initialisation phase, I stored for each square the points that could be the closest to any search point falling in that square (some points appeared in multiple squares). When a search was run, the square the search point fell in was looked up, and its list of twenty or so points was scanned for the closest point in a slightly optimised way (no square roots here).
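A toy C version of the bucketing idea (simplified: each point registers as a candidate in its own square and its 8 neighbours; the real per-square candidate lists were computed more carefully):

    #include <float.h>
    #include <stdio.h>

    #define G 4            /* grid is G x G over the unit square */
    #define MAXC 16        /* max candidates per square (toy limit) */

    typedef struct { float x, y; } Pt;
    typedef struct { Pt c[MAXC]; int n; } Cell;

    static Cell grid[G][G];

    static void insert(Pt p) {
        int ci = (int)(p.x * G), cj = (int)(p.y * G);
        /* register p as a candidate in its own cell and the 8 neighbours */
        for (int i = ci - 1; i <= ci + 1; i++)
            for (int j = cj - 1; j <= cj + 1; j++)
                if (i >= 0 && i < G && j >= 0 && j < G && grid[i][j].n < MAXC)
                    grid[i][j].c[grid[i][j].n++] = p;
    }

    static Pt nearest(Pt q) {
        Cell *cell = &grid[(int)(q.x * G)][(int)(q.y * G)];
        float best = FLT_MAX; Pt bp = {0, 0};
        for (int k = 0; k < cell->n; k++) {      /* squared distance: no sqrt */
            float dx = cell->c[k].x - q.x, dy = cell->c[k].y - q.y;
            float d = dx * dx + dy * dy;
            if (d < best) { best = d; bp = cell->c[k]; }
        }
        return bp;   /* query cost: one lookup plus a short scan */
    }

    int main(void) {
        insert((Pt){0.10f, 0.10f});
        insert((Pt){0.80f, 0.75f});
        insert((Pt){0.55f, 0.60f});
        Pt r = nearest((Pt){0.50f, 0.50f});
        printf("nearest: (%.2f, %.2f)\n", r.x, r.y);
        return 0;
    }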


Sounds like an R* tree or some variant of that?


That sounds very interesting. If you don't mind, could you share the algorithm name?


It was a personal project I did a few years ago. I was writing an editor for ship hulls that would allow a non-technical user to model a hull out of 3 sets of bezier curves - one for the side profile, one for top-down, and one for front-back. Pics at bottom.

The hard part was taking those 3 sets of bezier curves and turning it into a 3d mesh. There's no pleasant mathematical way to do this directly, and there's no way to convert the 3 sets of curves into a 2d bezier surface.

The eventual solution involved several steps - first the top-down curve was rasterized into points at intervals of N on the X axis (from front of hull towards back). The maximum distance between any two sets of symmetric points was used to "scale" the front-back view so that the endpoints of each copy of the front-back view would match each set of symmetric points on the top-down view. At this point, each set of symmetric top-down points has a matching front-back curve that connects the two points. Now each front-back curve is rasterized at intervals of I on the Y axis (from port to starboard).

At this point I have all the points I need and could actually rasterize them into a mesh, but with one problem - the side-view curve still isn't accounted for. If I were to rasterize it at this point, the ship would probably look like a bullet cut in half.

So to take the side profile curves into consideration, the side view was rasterized like the top-down curves were, into points at intervals of N on the X axis. These points are converted into proportions (from 0 to 1) of how far they are from the top deck relative to the deepest point on the side-profile curve. Finally, each proportion was multiplied with the Z components of each point on the point-rasterized front-back curves. In this way, the side-profile just acts like a "scale" to how deep the front-back curves are allowed to go.
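The core of the depth-scaling step, compressed into a C sketch (cubic bezier evaluation in Bernstein form plus the proportional Z scale; the control points are made up):

    #include <stdio.h>

    typedef struct { double x, y; } P2;

    /* Evaluate a cubic bezier at parameter t in [0,1] (Bernstein form) */
    static P2 bezier(P2 a, P2 b, P2 c, P2 d, double t) {
        double u = 1.0 - t;
        P2 r = {
            u*u*u*a.x + 3*u*u*t*b.x + 3*u*t*t*c.x + t*t*t*d.x,
            u*u*u*a.y + 3*u*u*t*b.y + 3*u*t*t*c.y + t*t*t*d.y
        };
        return r;
    }

    int main(void) {
        /* Toy side-profile curve: y = depth below deck along the hull */
        P2 a = {0, 0}, b = {0.2, 1.0}, c = {0.8, 1.0}, d = {1.0, 0};
        double deepest = 0.75;  /* precomputed max depth of this profile */

        for (int i = 0; i <= 4; i++) {
            double t = i / 4.0;
            P2 side = bezier(a, b, c, d, t);
            /* proportion of max depth at this station along the hull... */
            double scale = side.y / deepest;
            /* ...multiplies the Z of every point on that station's
             * front-back curve: the side view acts as a depth envelope */
            double z_raw = 1.0;  /* stand-in for a rasterized front-back Z */
            printf("x=%.2f depth scale=%.2f z=%.2f\n",
                   side.x, scale, z_raw * scale);
        }
        return 0;
    }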

I was pretty happy with the results - however, the mesh had densities in bad places that I later smoothed out using bicubic interpolation. I don't have any pics of the final product, but here's some of it before the interpolation phase:

http://i.imgur.com/PqX1aWL.png

http://i.imgur.com/se9YdoO.png

http://i.imgur.com/oK4taXA.png


Video game related. One Monday my boss came and asked me to rewrite the pathfinding library Recast [1] in ActionScript (we could not use Alchemy to compile it from C++ to ActionScript, because a bug made the AIR compilation to iOS fail to finish with any Alchemy code in it).

His approach was for me to translate the code by hand without understanding much - a monkey me? He told me I had approx. 1 week. I looked at the code and saw reams of optimized C++; most of the features weren't required for us, however. I quickly understood that I would have to figure out the core algorithms and implement them myself if I wanted to finish on time. The problem was that when you read the code of a complex algorithm in C++, some parts can be so dense that you spend a lot of time just to barely understand how they work. And I had 5 days, 8 hours a day, and the 3D game was waiting.

So I took 20 minutes and managed to figure out how to make a navmesh-based pathfinding library on a piece of paper, without even reading Recast's code. The hard parts were the portal-based algorithm and getting the 3D floating-point geometry error-free, to avoid nasty corner cases causing bugs in the position of the game's main avatar. At the end of the week our 3D game was running with the new library, and I averaged no more than 1 hour of overtime a day. I felt classy =) It worked for the two years I was there, and games have shipped with it.

[1] https://github.com/memononen/recastnavigation


Not the most difficult, but one of the most satisfying...

Early 90's (when C++ was new and broken in a different way for each compiler), I was on a QA team for a project where developers were learning C++ as they built the next version of the product. There was a buggy math function in the standard lib, and the compiler vendor didn't see it as a high priority. Devs didn't know how to find expressions that were at risk so they could cast them to a type for which the libraries worked reliably.

I discovered a useful combination of compiler flags and wrote an awk script to take the compiler output and make a list of source files and lines that produced calls to the broken lib function. The lead developer insisted that I was wasting their time with a bogus list, until I explained how it worked.

More challenging: I worked on an industrial machine that had to mix measured amounts of gasses over time. The developer who wrote the mass-flow-controller task (the MFC is the device that controls gas flows, basically) just opened the MFC at the start time, then slammed it shut after the correct amount of gas had passed. I coded up a smooth open/close that kept the area under the curve correct. In 6809 assembly language. That was in the early 80s, when a 2MHz 8-bit CPU was some serious horsepower.
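The trick in miniature, as C rather than 6809 assembly (made-up numbers; linear ramps of length r each deliver F*r/2, so stretching the profile to T + r keeps the integral at F*T):

    #include <stdio.h>

    /* Replace "open at t=0, slam shut at t=T" with a trapezoidal flow
     * profile that keeps the area under the curve (total gas) identical. */
    int main(void) {
        double F = 10.0;   /* target flow rate (units/sec) */
        double T = 8.0;    /* original slam-open duration (sec) */
        double r = 1.0;    /* ramp time (sec) */
        double total_time = T + r;   /* trapezoid base: one ramp longer */
        double dt = 0.25, gas = 0.0;

        for (double t = 0.0; t < total_time; t += dt) {
            double flow;
            if (t < r)                   flow = F * (t / r);                /* ramp up   */
            else if (t > total_time - r) flow = F * ((total_time - t) / r); /* ramp down */
            else                         flow = F;                          /* plateau   */
            gas += flow * dt;            /* crude numeric integration */
            printf("t=%5.2f flow=%6.2f\n", t, flow);
        }
        printf("total gas: %.2f (target %.2f)\n", gas, F * T);
        return 0;
    }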


One time I reverse engineered the flight controls for an octocopter (like a quadcopter, but with 8 rotors) so I could access the debug port and stream telemetry to a ground station while issuing GPS waypoints.

Probably the only thing worse than reading C++ is when it's poorly documented and in German. :P


I feel for you. I have to work with a proprietary library that is only documented in German - google translate only gets you so far.


It's much worse after it's compiled. :-)


This was when I worked at a horrible ad network with questionable ethical standards. One of the projects I worked on: I reverse engineered the Android Market application to figure out the protocol it used, in order to get the email addresses of every single application developer on the Android Market. It involved finding the fields of the Protocol Buffer RPC format used by all Google applications from the decompiled code. It took quite a while, but I did it.

This was done so that the ad network could cold-call them for ad deals. I left immediately after that because of the dropping levels of decency and ethical standards in that company.


Yeah, my coworker was asked to mine financial newsgroups for emails, to sell to an outside party (an advertiser). We were supposed to be selling our own customers' email addresses (sketchy enough), but payroll was going to be a little bit short, so anyway. He did it, then quit. I was already gone. The CEO was arrested on stock fraud charges by the FBI shortly after I left. Things recovered; the company thrived and got bought out successively by larger companies until, I think, NASDAQ finally owned them.


Choose one of my adventures:

- Deploying a financial app through 4 bastion hosts by keeping Russian-doll ssh tunnels up (the client's outsourced IT meant bouncing across the world to get to the right boxes)

- porting an 8 MLoC Fortran nuclear reactor simulator from UNIX to Windows

- generating PowerShell on Linux to be run on a Windows box by reverse engineering the MS API

- silencing dialog boxes by DLLs which patch and proxy system DLLs

- making the Java JRE run from a CD-ROM with the right JNI dll/so and a custom installer I wrote, before the advent of InstallAnywhere (talking Java 1.1 days)


>- generating PowerShell on Linux to be run on a Windows box by reverse engineering the MS API

Sorry, but my god, WHY? Was it morbid curiosity (I can see why you would want to do it then), or do you actually prefer the PowerShell syntax? (Sorry if this comes off as biased, and it may very well be, but I haven't actually heard of many people who can stand PowerShell.)


I'm definitely a *nix guy (I've run Linux and OS X only for the last 5 years or so) nowadays, but I've had to use PowerShell to do some admin stuff at work... and it's not that bad. It sure beats the crap out of batch files, though IMO it's not as good at gluing disparate processes together as sh is. That, however, seems more like a function of Windows apps themselves than of PowerShell.


Most of my coworkers (sys admin-types) adore Powershell.

It's ugly in its own unique way, but it's better than bash in terms of functionality and elegance.


> better than bash in terms of functionality and elegance.

How so? I haven't used it much, but I've not noticed any elegance to it. Is it like Lisp, where if I use it for a while the power of it will hit me?


Much of the functionality stems from being able to use .NET objects in the scripts, and that the result of commands are objects rather than text. For example you can pass the output of ls to a foreach loop that outputs either the absolute or relative path of each file, or various other file properties like size or modification date.

You do have to be careful in some cases because UNIX style syntax is supported but the result may be different because of the underlying implementation. Like if you output text to a file using redirection, by default the output is UTF-16. It can make you crazy if you're not aware of things like that.


I took horrible, basically undocumented, globally scoped spaghetti Perl code, with a proprietary DB based on flat files and a forking web scraper from the 90s (which used an ancient form of IPC that predated POSIX and mostly didn't run), and rewrote, debugged, unit tested and updated it so that it runs faster and better than before. I built a new front end using Dancer (a Perl framework kinda like Sinatra) with an nginx reverse proxy, to replace the CGI scripts originally used.


1) Most challenging? CookiePie (2006, in JavaScript): opening different web accounts in different Firefox tabs: http://www.youtube.com/watch?v=2Pfg-kJ4nAw (It's not working anymore and I am not supporting it).

Why? Because Firefox didn't have any API to correlate cookies (network requests/responses) with tabs, and I ended up recursively traversing objects to discover extremely indirect relationships.

2) Most weird?

i) Vulnerability and exploit using Linux terminal escape codes (1999, in C/ASM), so you could run the exploit even if the user just ran "cat" on your file or an FTP server displayed a specific banner: http://www.shmoo.com/mail/bugtraq/sep99/msg00145.html

ii) Adding a second keyboard to a Commodore Amiga 500 (1989 in ASM) (from a Commodore 64) enabling two people to use the computer at the same time in different screen places with one monitor.

iii) Adding an API to Microsoft Outlook Express (2003/2004 in C++) (the application doesn't offer a whole API architecture): http://www.nektra.com/products/oeapi-windows-mail-outlook-ex...

iv) Doing a file compression tool in Cobol (circa 1997) for the MVS operating system because the organization didn't want to buy a relatively expensive C package for the mainframe.


I once worked on an embedded system with a Microchip PIC. As if working on the massively underpowered CPU wasn't difficult enough, the compiler (C18) was riddled with quirks and the C libraries that came with it were full of bugs. To make matters worse, the debugger displayed wrong values for variables every now and then, so I couldn't trust the debugger either... I ended up simulating the entire application on a PC, so I had some decent debugging capabilities.


I think my dad could corroborate that. He was very deep into PIC programming about 10-15 years ago and ran into similar issues, enough that he decided to just write everything for them in assembler (mostly POCSAG/pager related stuff). He then created one of the first full visual PIC emulators to help him debug code, although it was then quickly surpassed by things like PICsim. I think he would also place all of this under some of the most extreme tech problem solving he did before retiring.


I've had the same experiences with the C18 compiler. Using that compiler, building a simple 6-channel PWM application becomes quite difficult. For every interrupt, the compiler would copy the entire (call)-stack to a different memory ___location, then enter the interrupt routine and finally copy the entire stack back, resulting in latencies of ±100 cycles before and after the interrupt. So, to get rid of the timing jitter, we had to fire the timers early, and then wait for the final timer counts in the ISR.
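The workaround, sketched in C with the timer simulated so it runs on a PC (register names invented; the real code was PIC18-specific):

    #include <stdio.h>

    /* Program the timer compare to fire EARLY, then busy-wait in the ISR
     * for the exact count, absorbing the compiler's ~100-cycle context
     * save. TMR_COUNT / PWM_PIN are stand-ins, not real PIC18 SFRs. */
    static volatile unsigned TMR_COUNT = 0;   /* stand-in for the HW timer */
    static volatile unsigned PWM_PIN   = 0;   /* stand-in for the pin latch */

    #define EARLY_MARGIN 120u   /* more than worst-case ISR entry latency */

    static void timer_isr(unsigned target) {
        while (TMR_COUNT < target)  /* spin off the last ticks of jitter */
            TMR_COUNT++;            /* (simulated clock; HW ticks itself) */
        PWM_PIN ^= 1u;              /* the edge lands on the exact tick */
    }

    int main(void) {
        unsigned target = 1000;
        /* pretend the compare interrupt fired EARLY_MARGIN ticks early,
         * i.e. entry happened somewhere in [target - EARLY_MARGIN, target) */
        TMR_COUNT = target - EARLY_MARGIN + 37;  /* arbitrary entry jitter */
        timer_isr(target);
        printf("edge at tick %u, pin=%u\n", TMR_COUNT, PWM_PIN);
        return 0;
    }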

Later, we switched to the NXP LPC platform using only open source tools: GCC ARM Embedded toolchain, OpenOCD + gdb for debugging and vim, make as 'IDE'. What a relief.


Funny that you mention that, I am currently in the process of designing the "next" generation of the project in question. Luckily I was able to convince the product owner to switch from PIC18 to NXP ARM chips.

I have loads of experience with the LPC series now, but I only realized how nice they are to work with when this project came along. Also, GCC + OpenOCD + gdb is a very nice toolchain to work with, although the first versions of OpenOCD were a bit of a pain to get (and keep!) running.


One of the side-projects I did when I was fairly new to development -- but which forced me to make a big leap in my skills -- was to crack a database access tool I used, when they crippled the freeware version.

The tool was written in Java, and I was coding in mostly Java, so I figured it'd be easy. That first version was indeed pretty simple to patch (decompile, find a class with a method that ran a bunch of license checks, and just replace that bit with "return true" then rebuild the JAR). Then they released another version with more serious obfuscation applied to it. I found myself digging through classfile specs, writing my own code to edit classes directly (where decompilers failed), etc.. As obfuscators evolved (and they upgraded), it got more interesting.

I bought a license after a while (and I certainly didn't distribute my cracked versions; I liked the tool and wanted to support them), but I kept cracking new versions when they came out for a few years; it was entertaining & highly educational.

I worked for a while on a standalone classfile editor, but eventually ran out of time for the hobby. But along the way I learned a good amount about Java bytecode and how the VM interpreted it (and how much flexibility was actually involved). I learned about UTF-8 (and quickly realized that most text editors didn't seem to actually support it properly), and put quite a lot of thought into security (trickier than I had thought) and how it relates to program flow (e.g., good design can easily mean more-easily-bypassed security).

The real downside is that I couldn't brag about it to my colleagues, or online -- I didn't want to tangle with any potential legal issues, or reputation problems, especially at the start of my career.

So I'd be walking around, buzzing with the new stuff I'd figured out (and new obstacles surmounted), but unable to say a word about it to anyone who would know what I was talking about. I considered contacting the developers of the tool directly, just to let them know this little contest was even happening and offer suggestions, but decided not to run even that risk.


You might be interested in crackmes, little programs designed to test your reverse engineering. When you beat them you get full bragging rights :)


Back in the day I used to read +Fravia's webpages, and had hours of fun working through some of the challenges posted.


For my college project (2003?) I modded UT:GOTY and its map creation tool (UnrealEd) to work as an architect's design tool, when pretty much everyone was using CAD tools.

I was able to give an architect a tool that was very easy to use (UnrealEd was pretty cool) for creating structures, one that allowed real-time 3D visits to them and let you modify the textures and sounds.

Funny thing: I didn't have the time to remove/hide a certain feature of the game: dying. So if a visitor to a virtual building dropped from more than a meter, the game would play the hurt animation, or they would die if not in god mode.


When I was in high school, just out of interest, I implemented a bunch of AI algorithms (Neural Networks, Genetic Algorithms, Self-Organizing Maps, Cellular Automata and (most fun) Genetic Programming): http://paraschopra.com/sourcecode/

The Genetic Programming part was the most fun because it required me to use Python in a way I hadn't before (passing functions as variables and making trees of different Python functions).


Wrote a program to record Hulu.com. They had every type of protection against recording, and changed code every week.


More proof that DRM is a farce. :-)

I used to work for a massive media company for a short stint, and I kept trying to explain to them why DRM will always be broken, but it just never took. I think it's hard for non-technical people to think about the 100+ ways media leaks on its way to a human's ears or eyes. :)


Did you run flash under a debugger and dump frame memory? CreateRemoteThread? Kernel extension? Capture in the compressed ___domain?


A screen recorder? Haha.


The hardest ones haven't necessarily been hard technically, but in getting all the right people talking to each other. I'm not a networks person, but I've had two cases where the networking people gave up and it was left to me. Most recently it was a Kerberos callback failing across a WAN - it was being blocked by a sister company's firewall. On the upside, we are now looking to sell that company.


I discovered that our company's old ASP code had not a memory leak but a connection leak, and after 2 days of experimenting I managed to get our sites to load in under 1.2 seconds by recycling the app pools every 15 minutes.

I taught myself how DFS works (Distributed File System, an extension of AD technology: it maps a set of shortcuts on ___domain controllers and other DFS root holders), and expanded the available DFS roots, leading to a reduction in site load times.

I hooked up an ultrasonic distance sensor and a motor shield to an Arduino, and put the assembly on an RC car. I programmed it to slow down as it approaches walls, and to back up if something is too close. I want to build robots to fill some need, but I'm still looking for that need.


I spent a lot of my first 10 professional years fixing bugs for an information security company. One of them was pretty hard to track down because it was in the boot path of a specific IBM machine with a specific version of an HD encryption bootstrap. The symptom: the machine could be turned on normally, but on the second boot we got just a black screen after the BIOS check, forever. Oddly enough, on the third boot things were just right; it froze again on the fourth boot, and so on.

The problem with boot code is that the debugging process cannot interfere with the boot memory itself, or the bug won't show up. So I had to make one simple change in the assembly bootstrap, assemble the code, boot from a floppy, access a raw disk editor and replace the old bytes with the new test, boot twice, and see what happened (mostly some chars printed on screen).

After one focused week the problem decided to show up. It was a tiny misplaced bit operation in the Blowfish algorithm (our version, in assembly) that swapped one or two bytes in a specific sector on the disk. It happened only on this machine, with this HD and this version of the OS, because during boot some code in the OS bootstrap made a write operation to the disk in exactly this "wrong sector". It was not hard to fix the problem; the hard part was finding it in the first place, after the system had already been running on thousands of different machines for a couple of years without a clue.

It was a pretty nice week in the end.


While working on an IPTV product for Geodesic, we came across a peculiar problem: our video data was shifted by 3 bytes from what we expected, throwing our decoder into fits. It took us three months of debugging, and a delay of the product launch, to fix the problem. A renegade "&" in the C++ code: instead of the value of a pointer, we were accessing its address. Why it caused a shift of only 3 bytes is beyond me; I was just very, very happy to have fixed the issue. (It was back in 2009, so some details are a little hazy.)


Not sure if this is the sort of thing you mean, since it was only a fault-finding thing, but the constraints were _very_ tight...

A long time ago I was Mister Fix-it for a company that made top-of-the-line lighting desks for touring rock bands. If your band wasn't touring with one of their desks you hadn't yet made it.

The company developed a new desk and sold the first couple. Then one band said they had an intermittent problem during rehearsals where the desk would just stop making the lights go. I got sent to fix it. Couldn't find anything, and couldn't make it break.

First show of the tour, 15,000 people in the audience, the house lights have gone down, the intro tape is running and the guy on the desk announces that it's stopped working! I ran backstage to get my tools to open the desk (god knows why I'd left them there), pelted back to the desk, and on the way I realised what must be going wrong. It must have been the adrenalin that heightened the thought processes. Opened the desk, moved the component that I'd worked out was shorting to ground and causing the output to stop, and he was back in business before the intro tape had finished.

I still get a bit of a high when I think about the total head rush of fixing that problem like that.

The 'permanent' fix for him was a tiny piece of gaffer tape under the component. The real permanent fix was to use the component the PCB was designed for, rather than one someone had to hand with a different footprint.


Without real experience in JavaScript or the framework, I created a Google Hangout face-tracking app in five days. You can still try it here: http://cocacola-hangout.appspot.com/


One I remember clearly was from my first programming job in 2000. The other coders were poking at PHP, but the company had several customers on WebCrossing [1], which at the time was considered one of the "serious" website-building tools.

I had to customize WebCrossing to the needs of Svenska Spel [2], the Swedish lottery company. They had about 200 pages of drawn screenshots showing how they wanted _EVERY_ detail to look.

The problem was that WebCrossing did not offer much customization built in, and I basically had to override every single template. In particular, I had to write entirely new forum code, because the WebCrossing way of presenting things was not what the customer wanted.

Also, for some reason, aftonbladet.se, Sweden's then (and still?) biggest news site, had gotten two guys from WebCrossing's US office to come over and build their site.

Since I had done the first gig, I also ended up rewriting the publishing system for aftonbladet.se's forum from a "pre-moderation" system into a self-moderated (post-moderation) system. The reason was that the site had managed to approve some Nazi posts, and the publisher got fined [3].

After all this horror I could finally start learning PHP instead.

1: https://en.wikipedia.org/wiki/Webcrossing 2: https://web.archive.org/web/20010615000000*/http://chat.sven... 3: https://www.aftonbladet.se/nyheter/article10198408.ab


Back in 1999, we had an idea to provide a mobile news service at a technology trade show/conference. I was a tech magazine editor at the time. The idea was to give journalists laptops on which they could write short items (about 100 words) on the fly while they were actually in conference sessions - if Bill Gates said something interesting at the keynote, we could have it published in 3 minutes. The journalist would type the story into a structured Web form, hit send, and the story would go to an editor for proofreading, who would then publish it. The item would appear not only on the Web, but would also be formatted for Palm Pilots: we had a bespoke app people could install on their handhelds, and sync stations around the show floor where they could plug in and get news stories squirted into their machines when they wanted.

In the pre-Twitter, pre-3G world, where mobile Internet meant WAP, this was quite advanced. The main technical problem involved connectivity for the journalists. We borrowed cellular dongles and laptops, but there was simply no guarantee that the cellular network would be available at the show and wouldn't be overwhelmed by the number of attendees. Cellular data plans (we were roaming) were also ruinously expensive and had to be minimised.

We wanted the journalists to be able to write their stories directly into a structured Web form, but we didn't want them to have to cope with the flaky cellular network and the story disappearing with a 'no network' error on submit.

In the end I came up with the idea of simply running a Web server on each laptop, to present the form, handle the input, and batch it up ready for sending as and when the network was available.

It worked rather well and we were all rather chuffed.
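In modern terms the trick would look roughly like this minimal sketch (standard library only; the queue directory, port, and form fields are made up): a local web server that accepts the form and spools submissions to disk until there is a network to send them over.

  import http.server, json, os, time, urllib.parse

  QUEUE_DIR = "queue"   # stories wait here until the network is up
  os.makedirs(QUEUE_DIR, exist_ok=True)

  class StoryForm(http.server.BaseHTTPRequestHandler):
      def do_GET(self):
          # Serve the structured story form.
          self.send_response(200)
          self.send_header("Content-Type", "text/html")
          self.end_headers()
          self.wfile.write(b"<form method=POST><input name=headline>"
                           b"<textarea name=body></textarea>"
                           b"<input type=submit></form>")

      def do_POST(self):
          # Parse the submission and spool it to disk; a separate
          # process uploads queued files whenever connectivity allows.
          length = int(self.headers["Content-Length"])
          fields = urllib.parse.parse_qs(self.rfile.read(length).decode())
          path = os.path.join(QUEUE_DIR, "story-%d.json" % time.time())
          with open(path, "w") as f:
              json.dump({k: v[0] for k, v in fields.items()}, f)
          self.send_response(200)
          self.end_headers()
          self.wfile.write(b"Saved locally; will send when the network is up.")

  http.server.HTTPServer(("127.0.0.1", 8080), StoryForm).serve_forever()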


A RAID controller cleared the MBR on a Windows NT ___domain controller. The backup did not work, so I had to use Norton Disk Editor to write a new MBR; it took some time since I had to scan the disk for the start/end of the partitions.


Still being a student and a web developer in my free time (it's more of a hobby that keeps growing), I always find it very exciting to try to solve a problem I'm having, even if it just means learning something new because a new language or tool would make solving the problem easier.

For instance, a while back I was helping a friend automate filling in a form, solving a captcha, and sending a request on a site, which had to be done daily. The way we eventually did it was using Node.js, PhantomJS as a module (node-phantom), DeathByCaptcha for solving the captcha, and then running it daily via crontab. This may be a very simple task to solve, but instead of doing it in languages we knew well (like PHP using cURL requests), we used a new language and tried the easiest way to do it.

Also, I've been switching to managing my servers using Ansible instead of shell scripts. I'll pick up Chef or Puppet too some day.

I guess what I'm really trying to say is, I haven't had any challenging or really difficult problems to solve yet. Thinking about whether the 2h I would spend writing boilerplate code can be reduced, automated, or avoided by switching technologies/frameworks. Learning about new tools and their advantages and disadvantages. Like, after a good year and a half of using vim, taking a look at emacs. That is what is difficult and challenging to me, yet very rewarding in the end.

I know this is not exactly what the question was, my apologies for that.


Developing a warehouse management system for one of the biggest companies in the textile industry, world-wide:

- project kick-off with a team of 10 junior developers

- working under pressure for 3 years in a row

- avoiding office politics

- without support from IT staff, my bosses, the client or anyone else except our development team

- integrating factory hardware with our own multithreaded applications

The most important lesson I've learned: for software and tech development, people come first. Always.


* A vector pseudo-3D engine from scratch, written in Turbo C++ when I was 15. I didn't know about OpenGL, so it was written using the wonderful <graphics.h> packed with Turbo C. I used it to create a tank wars game, with explosions and a fully destructible level, a level editor, an animated 3D menu, and of course tanks :)

* An algorithm to generate a "spine-spline" for an arbitrary 3D closed loop (simplest form is a torus): Essentially a path that passes through the volumetric centroid of the 3D object and closes with itself (for my upcoming game, to generate the pathway for a level just by uploading the 3D model for the level)

* Porting bitgym.com's vision processing algorithm to work on Android devices: Getting the camera frame pixel data in realtime, downsampling the image, integrating with a C++ vision processing algorithm through JNI, and making it actually finish a round-trip in under 50ms, and work on hundreds of different Android (versions and) devices. Actually considering there are still occasional crashes and bugs with BitGym on Android, I guess this is an ongoing challenge, still being solved!

* Countering the drift and rotational aberration in a high speed stepper motor that was trying to point a laser pointer at multiple precise coordinates on a flat wall, in quick succession (for college robotics class).


Tracked down a bug in the in-house foundation library that turned out to be a gcc feature (bug). We were convinced that the code was schizophrenic, since the debugger would never break inside the function, yet we could step through the code around it. Turns out gcc doesn't issue a warning/error if you have two destructors for the same class name in different translation units (it will call the first one in link order).

Solution: rename the second class, and create a style guide rule for class names.


Interesting. Looks like everyone here has been successful at solving their most difficult tech challenges. Wow. Taking it easy ;)

I'll tell you a story about one of mine. When I was around 13, I tried to develop a smallish application that could keep a small local DB of medical results for some medical equipment. It needed a pretty-looking UI, and the fashionable technology at the time was Turbo Pascal/Turbo Vision. At the time I was already an OK hacker (at the level of being able to write an antivirus for an unknown virus), had been hacking hardware for as long as I could remember, and had been writing C since I was eight or something. You get the gist.

Anyway, I had about a month to write that stupid DB application. Including learning Turbo Pascal, OOP, damn virtual function tables, and Turbo Vision. With no internet, no prior knowledge of Turbo Pascal, no examples, no manuals, and an idiotic Turbo Vision book that contained broken code. I think it was my toughest project ever so far. And I failed it too ;)

On a serious note, I think that inventing useful and fundamentally new algorithms is it. Easily takes half a year of time keeping it in the back of your mind.


I have two. One was getting WCF to talk to a SOAP web service implemented in Java that required the WS-I Basic Profile 1.1 Password Digest, which is not provided in Windows Communication Foundation. It was for an integration with a third party, and I was blind to their end. There were lots of quirks to get around, but in the end it worked.

The second was a weird one. A customer had a web application that was showing odd intermittent failures, but only on HTTP POST. The system was an extranet application, so I could speak in detail to the users having these problems. It appeared that only certain offices were having the problem, but not all users. I had to prove that it wasn't a problem in the application. I had to prove it wasn't something in IIS. I had to prove it wasn't something specific to a client. It turned out to be a misconfigured load balancer, where the MTU size was incorrectly set. The fact that only HTTP POSTs failed was the clue: nearly all browsers send HTTP POSTs in two packets or more, even if they would fit in one, while GETs always go in one. When the penny dropped after weeks of pain it was extremely satisfying to see the problem solved.


I once had to fix a system that was dual-boot NT and Solaris x86 after Solaris wrapped around and put log messages into the boot sector. I fixed it by typing in the boot code for NT from a screenshot of a hexdump in the NT Resource Kit, using a sector editor. Then it was just a matter of finding the partition start offsets and making a partition table, and the machine was fine.


I was tasked to create an AI for a simulation war game similar to Command & Conquer (but way more realistic).

At the highest level the engine is based on Hierarchical Task Networks (HTN) (http://en.wikipedia.org/wiki/Hierarchical_task_network). Primitive actions like Move and Fire can be compounded into higher level commands like Sustain Cover Fire or Setup Ambush. These are recursively aggregated into even higher level directives like Attack Enemy Position. All commands have pre- and post-conditions, and the AI engine stitches together a sequence of commands under the current constraints (eg using Charge Hill instead of Charge Building when Attack Enemy is specified on a knoll instead of a building, you need to Setup Ambush on an incoming road if expecting enemy reinforcement).

At the lowest level, I had to implement detailed algorithms for pathfinding (easy, but consider different terrain, on-road, etc.) and line-of-sight (actually pretty hard due to time constraints and the size of the map) for locating the best vantage points, etc. Implementing fuzzy conditions is tricky, e.g. the idea of 'threat', where a unit knows it is outnumbered by a superior force (when is an enemy unit relevant? based on distance? what if it's currently engaged against an allied unit? can you get threat from one direction but not the other? what if the enemy can't see you?)
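The core decomposition idea behind HTN is compact. Here's a toy sketch of HTN planning in general (with made-up task names and state keys, not the actual engine): compound tasks decompose into subtasks, and primitive tasks check a precondition against the world state, then apply an effect.

  # A toy HTN planner: compound tasks decompose into subtasks, and
  # primitive tasks check a precondition then apply an effect to the
  # world state. All task names and state keys are made up here.
  PRIMITIVE = {
      "move_to_hill":  (lambda s: True,
                        lambda s: s.update(at="hill")),
      "suppress_fire": (lambda s: s["ammo"] > 0,
                        lambda s: s.update(suppressing=True)),
      "charge":        (lambda s: s["at"] == "hill" and s["suppressing"],
                        lambda s: s.update(enemy_destroyed=True)),
  }

  COMPOUND = {
      "attack_enemy_position": ["move_to_hill", "suppress_fire", "charge"],
  }

  def plan(task, state, steps):
      # Depth-first decomposition; fails if any precondition fails.
      if task in COMPOUND:
          return all(plan(sub, state, steps) for sub in COMPOUND[task])
      precondition, effect = PRIMITIVE[task]
      if not precondition(state):
          return False
      effect(state)
      steps.append(task)
      return True

  state = {"at": "base", "ammo": 10, "suppressing": False}
  steps = []
  if plan("attack_enemy_position", state, steps):
      print("plan:", steps)

A real engine would also try alternative decompositions when one branch fails, which is where constraints like terrain choose between Charge Hill and Charge Building.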

The hardest part about this was actually getting the ___domain experts to write the heuristics using my script editor. These army guys don't think like programmers at all! Even getting them to encode their decision process using simple If-Then-Else was an insurmountable challenge in mental gymnastics. :(

Still it was fun. As it turns out, it was used to train actual army commanders in a small foreign country. :)


I've been doing EE/CS work for 40 years. I've lost count.

Dead programmer/reverse engineering projects are in there, but not as common these days.

I did a bunch of rather amazing things in the rocket business for Martin Marietta, Raytheon, and Applied Research Associates.

Once, I walked up to an employee who had been fighting a problem for two weeks, asked him what he was having an issue with, determined it was a variable initialization error, looked at his monitor, pointed to the missing # sign on an 8051 assembler listing and said, "It's right here." Walked away... 60 seconds total. Win.

Another time: three engineers working on re-creating a poorly documented test station for a guidance section on a missile. Again, two or three weeks spent trying to wake up an interface board based on a 68020 micro. They worked for me, and had sent an engineer to ask me what I'd do. I was busy, but took two pieces of test equipment down to the unit (a logic analyzer and a processor troubleshooter, a Fluke 9010) and made them show me their failure. Looked at what the processor was doing (nothing... just executing 00 opcodes). Inspected the memory to see where the real code lived. Found it, but it was at the wrong address. Burned an EPROM set with the code moved from 08000h to 000000h and installed them. Problem solved. Start to finish: 2 hours. Win. Went back to my management stuff, which was making charts explaining schedule slips and cost overruns. These guys were hot shots. I smoked 'em. Woot.

Fixed a consumer product that had a serious design error. Solved the problem. The product went on to sell several hundred million dollars of units, making the owner a very rich man and handing him 85% of his market. Huge win.

I have at least one or two of these a year (though the last one is a bit more unusual because of the volumes involved).


It would normally be almost boring, but the timing made it very interesting...

We had a cloud deployment system that was also responsible for configuring Nagios instances. The two processes (Java and Python) that were responsible for the Nagios configuration communicated over XMPP. The Java process could only be restarted once a week because we guaranteed a certain uptime to our customers.

One day I decided to refactor the way it worked, and this entailed a change to the api. I began with the Java code, but when I got to the Python parts, more important work came up so I left my code in a branch and forgot all about it. When the next release came up a coworker started to merge all the new features into the new release branch. He asked if he could merge my stuff, and as I had just finished some new features in another branch I said yes. Two days later the new half-refactored, untested and very scary Java code was live in production without my knowledge.

An hour after the release the first alerts of Nagios syncing problems came in. We scratched our heads and soon discovered that we had merged too much. Now we had to choose: roll back or roll forward. To roll back we would have had to take down the application, restore a backup, and synchronize a lot of other connected applications. After checking whether the Java code that was now in production actually worked and spat out sensible messages (it did, surprisingly), I decided to try and create the rest of the Python implementation right away.

So with my headphones on and a big NO sign on the door I started programming and created what would normally take one or two days in just over an hour, including the release to production. The good news was that we had had no customer downtime of the Java process and now had a much cleaner Nagios configuration system, the only bad thing was that the alerting system had been down for about 2 hours.

The situation was pretty fucked up ;)


This isn't as impressive as the rest of the problems I've read about but I'm only coming up on year 3 of living in this crazy world of coding so I'm pretty proud of it considering my experience level.

Two years ago the startup I work at had an unofficial doorman working the building we occupied in the financial district of SF. A homeless Vietnam Vet who slept on the concrete above a steam pipe for warmth. This gentleman was almost always there when I left work for the evening, sitting on his milk crate or standing around either engaged in deep conversation with someone(s) about whatever or spouting err.. let's just say cat calls to women walking by. I'd always talk to this guy after work as his stories were always entertaining, funny and typically heartfelt.

Last summer he received a Sony Ericsson from a catholic priest in the tenderloin and was placed on her family plan. This was the W580 which, in its heyday, took amazing photographs. I don't remember how it was brought up, but at some point I mentioned that he could share these pictures via the web for his adoring fans (of whom he had many) and his eyes lit up. "I'd love that, B!" he told me. "I just don't want to be a part of twitter or facebook or anything like that." To which I responded that I could build a site for him.

I was winding down an entire rebuild of the UI for the company I work at. I was responsible for building aspects of everything from django db models to css (SCSS really) and at home that past year I had learned how to get a basic django site hosted on my very own virtual instance. If this guy had a smart phone the job would have been as simple as building a responsive UI and a basic django backend.

The challenge then became hooking now-deprecated tech to the web. After considering the problem only one solution came to mind: e-mail. I then went to work attempting, and failing, to scrape Gmail, and wound up just installing sendmail on the same server I was serving his Django instance from. After a number of late nights and a slew of cursing I got my first end-to-end sendmail-to-Django integration set up, and from there created email addresses which acted as API endpoints. I parsed the sender lists to ensure only my or my friend's phone numbers had permission to post to the site. I then went to work figuring out how to read, save and resize images emailed to this sendmail server.
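The email-as-API-endpoint trick boils down to something like this rough sketch (all names, paths, and the trace of the real setup are hypothetical; the actual integration fed into Django models): sendmail pipes each incoming message to a script that checks a sender whitelist and saves image attachments.

  # Rough sketch of an email-to-post pipeline (hypothetical names).
  # sendmail can pipe each incoming message to a script via an alias,
  # e.g.:  post: "|/usr/local/bin/handle_post.py"
  import sys, time
  from email import message_from_binary_file
  from email.utils import parseaddr

  WHITELIST = {"friend@example.com"}   # only these senders may post

  def handle(msg):
      sender = parseaddr(msg["From"])[1].lower()
      if sender not in WHITELIST:
          return                       # silently drop anything else
      for part in msg.walk():
          if part.get_content_maintype() == "image":
              data = part.get_payload(decode=True)
              # Hand the bytes off to the web app; the real setup would
              # create a Django model instance and resize the image.
              with open("/var/posts/post-%d.jpg" % time.time(), "wb") as f:
                  f.write(data)

  if __name__ == "__main__":
      handle(message_from_binary_file(sys.stdin.buffer))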

By this time my friends small Ericsson had seen better days. The man's hands were so big that his thumb usually pressed 4-5 keys at once and I would typically see him attempting to bang out text messages or dial phone numbers with his pinky. This is understandably frustrating AND he was working on actually cleaning up his drug and alcohol habits at the time so his phone would meet with the wall every so often. Typically I could repair it but eventually it was damaged beyond repair.

From there he ended up getting a cheap go phone with no camera, and he was kinda bummed, but I recommended a pivot to him: audio posts powered by Twilio. He liked the idea, so I went about consuming the Twilio API and was able to get a proof of concept working fairly easily. I had also unlocked the secrets of SSL, in order to give users a more secure login experience. I finally got one post up, just saying hi, but from my cell phone. At that point I gave him a card with a number to call and an access code to punch in, but he asserted that he doesn't want audio-only posts; he wants audio and image.

So, I now have a server for him running Django and sendmail, which work in concert only for registration (your email address needs to be whitelisted if you want to register, and the only way to do that is to have me or my friend add your email address to the body of a text message we send to a whitelist email endpoint).

He has also since found out that his liver is essentially fucked and that he has terminal cancer, which puts the idea of a website for him on the back burner for both of us.

So I dunno, this was essentially a CRUD type problem, and while the technical challenges weren't as difficult as many of the ones I've read here, I think the creative challenge of giving a semi-tech literate homeless man a means to operate his own website with a decent amount of autonomy was a worthy and fulfilling one. I haven't been in this industry terribly long, but the possibilities I see out there are damn exciting. I made no money from this work but learned a lot I didn't already know and achieved something I feel like most people wouldn't even attempt. If you feel like you're stuck in CRUD land, why not attempt to mix up your customer base a bit? Think of something you could do that will be challenging and fun enough that you could perhaps open yourself up to an entire arena of unpaid work (I know, blasphemy!). It might not sound glamorous.. and well.. it really isn't, but I've found that the connections I can make with people using skills learned in this industry is what really drives me.

So fuck it man, get CRUDDY and build some cool-ass shit for people who aren't looking at some rigid business model. You might find yourself seeing CRUD work in a whole new light.


"Last summer he received a Sony Ericsson from a catholic priest in the tenderloin and was placed on her family plan."

I'm really having a hard time parsing this sentence... what's a tenderloin? Who is 'her'?


I'm in SF, the tenderloin is a neighborhood here.

The priest is the her in this sentence.


The Catholic Church has female priests now? When did that happen?


Not sure, as I'm an atheist and none of my theist relatives are catholic.

But I know this is a catholic church and she is a (or maybe the?) priest there. It's also possible that I could be getting my terminology wrong but she is definitely not a nun. This place is right in the middle of the TL and I believe functions mostly as an aid for the homeless of San Francisco, they close their doors some time in the afternoon. So yeah, I don't know enough about the catholic church to know what is or is not allowed I just know what I see and am told.


Fair enough. Maybe a lay role or something. I'm not that up on Catholicism either.


You've done something really awesome (and inspiring) here.


hah thanks. I've still technically never finished it, but maybe something will happen in the months to come, we'll see I guess.

Thanks again for the kind words, they made me smile :)


One of my previous companies was producing a Digital TV set-top box based on a Via mini-ITX board. When using component output, the overscan on the image was way too big and we were losing a significant portion of the TV picture outside the displayable area. The next best video mode put large black borders on all four sides of the image.

I needed a video mode that was a better fit for the screen, so I derived a set of parameters to add a new video mode to the Linux drivers for the VT1625 Via video card.

I did this by writing a small C program which could inspect the video card registers on a Win98 machine that was using the official driver. Once I had a set of register values, I used online documentation to transform them into the initialisation values used in the video driver.

The video mode was submitted back to the openchrome project, and as far as I know it's still there. It took me about 5 days of prodding, and another day of trial&error.


At work I was server admin because I was the one that had tried to install Linux before. This is probably more common than it should be.

There was one development server and at some point we added a NAS. Due to the access restrictions and my inexperience with other stuff, the server would mount the NAS and reshare that through Samba.

On Windows machines every folder would show up as a file. You could still manually enter the path and get a directory listing, but you could of course not navigate the directory structure in any satisfying way.

It took me a few weeks to figure out that the error came from a bug in the NFS file system code. I figured out the fix and patched the kernel.

Felt nice when I was done.

I've put a link to the serverfault thread about the issue.

http://serverfault.com/questions/491464/directories-shown-as...


I'm in the process of refactoring LibreOffice's VCL code.


You're doing a good thing for this world. Never forget that. You are important!


Thanks!


It's nowhere near as fascinating as some other stories here. But earlier this year a friend and I created an Android app for our school schedules. Initially we thought it would be easy; "let's ask the school's devs for an API or something".

Turns out it's basically a black box they've bought from an obscure company.

We then needed to parse the HTML tables by hand, which should be relatively straightforward for simple tables. But it turns out the tables are filled with rowspans, colspans and nested tables. It took me quite a while to create a decent algorithm, as I had never done anything similar before. But now we can parse all the HTML tables on that website and cache them in the app.
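For anyone facing the same thing, the usual approach (a generic sketch using BeautifulSoup, ignoring nested tables, and not necessarily their algorithm) is to expand the table into a plain 2-D grid, carrying rowspan/colspan cells into every slot they cover:

  # Expand an HTML table with rowspan/colspan into a plain 2-D grid,
  # duplicating each cell's text into every slot it covers so that
  # grid[row][col] always works afterwards.
  from bs4 import BeautifulSoup

  def table_to_grid(html):
      table = BeautifulSoup(html, "html.parser").find("table")
      grid, pending = [], {}   # pending: (row, col) -> text from a rowspan above
      for r, tr in enumerate(table.find_all("tr")):
          row, c = [], 0
          for cell in tr.find_all(["td", "th"]):
              while (r, c) in pending:           # slots claimed from above
                  row.append(pending.pop((r, c)))
                  c += 1
              text = cell.get_text(strip=True)
              rowspan = int(cell.get("rowspan", 1))
              colspan = int(cell.get("colspan", 1))
              for dc in range(colspan):
                  row.append(text)               # fill this row
                  for dr in range(1, rowspan):   # reserve slots below
                      pending[(r + dr, c + dc)] = text
              c += colspan
          while (r, c) in pending:               # trailing carried-down cells
              row.append(pending.pop((r, c)))
              c += 1
          grid.append(row)
      return grid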

As I said, nothing impressive or fascinating, but I was still quite proud of my algorithm.


I have done something similar to transform legacy pre-2000-HTML pages to a mobile-friendly version via XSLT.

I had multiline XPath expressions and comments like "grey box on the top-right of product overview pages containing links like X, Y and Z", and the XPath expression would look for things like a table with a grey background that is a descendant of the table that comes after the comment "<!-- CONTENT STARTS HERE -->" and that is a following-sibling of a form with name bla.


I implemented this feature in an asymmetric cryptography library used for securing IP data that allows you to basically ping another endpoint, sending it a chunk of data along with the size of that data, and confirm the other endpoint sends that data back to you correctly.


So you are behind Heartbleed?


This isn't an objectively difficult one, but it's certainly one with a fun payoff.

I hacked together a system of file system listeners and named pipes so that I can inject code into a sandboxed app on OSX and have it communicate with the non-sandboxed process that put it there.


Probably not the most difficult but certainly the among the weirder ones I've ever done.

Coworker was leading a project that had to get a Qt/QML application to show an accelerated and responsive display at most 3.5s after power-on. At his request I was pulled in to solve the hardest part of that pipeline.

All the simple solutions had been done already: boot loader timings were down to nothing; the NAND flash driver was optimised for reading with UBIFS tweaks; we loaded kernel modules in sequences of parallel insmods so the wallclock time was kept at minimum; and so on...

A year or two earlier we had discovered an Intel engineer's presentation about optimising Qt binary load times. The two main tricks were to build a completely static binary, and to reorder the binary symbols in the order they were read. Through unofficial channels I found out that Intel used a customised linker to generate their symbol list orders in fully automated fashion. I did not have enough time for that luxury, and as a bonus I had to deal with proprietary 3D drivers and their respective kernel modules. Fully static linking was not an option.

I ended up using OBS in a very weird way. I had three different Qt builds. One was explicitly compiled for dynamic linking without ANY optimisations, all inlines disabled, and with the function call tracer enabled. Another had all the regular optimisations and was built statically, to be used in production. The static Qt was patched just enough to allow loading dynamically built 3D driver modules. This was necessary for the next step...

The software was built against the dynamic, deoptimised Qt. It was then run on the target device and the function call trace log was saved. (GCC's function call tracer only worked with dynamic libraries.) The log was then extracted, and I had a set of tools: one to extract the order in which the Qt symbols were accessed; another to extract the symbols from the Qt libraries; and a third that produced a linker script to force the library symbols to be linked into a static binary in the order they were read.
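The ordering tool itself can be tiny; something like this sketch (the trace format and file names are made up, and the section-order-file output is one possible variant of the custom linker script described here, assuming the build used -ffunction-sections so each function lands in its own .text.<symbol> section):

  # Turn a function-call trace into a symbol/section ordering file.
  # With -ffunction-sections each function gets a .text.<symbol>
  # section, so a linker that accepts an ordering list (e.g. gold's
  # --section-ordering-file) lays the code out in first-call order.
  import re

  seen, order = set(), []
  with open("calltrace.log") as trace:
      for line in trace:
          m = re.match(r"enter\s+(\S+)", line)   # assumed trace format
          if m and m.group(1) not in seen:
              seen.add(m.group(1))
              order.append(m.group(1))           # first call wins

  with open("section-order.txt", "w") as out:
      for sym in order:
          out.write(".text.%s\n" % sym)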

These tools together produced the files for the final Qt build: a static, heavily optimised Qt with the ability to load 3D driver modules at runtime to avoid (L)GPL mousetraps. This Qt static build contained only the symbols needed to build the final binary. This way, when building the optimised binary, we knew that there were no unnecessary symbols in the product binary - and with the linker script we had ordered them so that the read from the NAND flash could be a single linear sweep.

The reason to do this in three distinct phases was to ensure that the client could reproduce the steps in their own OBS instance. If and when they updated their QML code to use new functionality, they could then regenerate the intermediate files and the final binaries. Once documented and tested, the full cycle took about 10h total, thanks to the need to rebuild the intermediate Qt.

These tricks got the final boot time to about 3.9s. The coworker who had requested my assistance came up with the final trick: patch Qt a bit more to allow loading graphics in pre-processed form, so that they would not need to go through all the intermediate parsing steps. We did a few profiling runs and discovered that processing the PNG files, even when built into the binary as static assets, took nearly 700ms in total to transform them into regular Qt pixbufs.

It wasn't the hardest project I've been in, but it pulled off a lot of low-level magic to get around system and platform limitations. And it was certainly satisfying once the problems were sorted out. :)


Interesting -- was this for a digital camera or something similar? It seems that many developers just use sleep/wake functionality when extremely fast bootup (or the appearance thereof) is necessary.

I've got a FLIR infrared camera that runs Windows CE, which takes a lot longer than 3-4 seconds to cold-boot. The camera initially goes to sleep when you power it off, then after a day or so it shuts down entirely to save its batteries.

One nice thing they did was to include enough intelligence in the low-level ASIC to drive the display with an unprocessed, uncalibrated live image while the user is waiting for the "real" camera application to come up. Much better than staring at a blank screen for 20-30 seconds.


No, not a camera. An IVI system.

That's all I can say about it for now. (Although I did learn to dislike 3D drivers in that project. With a passion. Can't get contiguous kernel memory? Oh, I'll just wait here and block until I get what I want. What, you wanted to do something ELSE while I probe the hardware and spend 1.15s initialising myself? Tough luck, go sit in a corner and cry.)


Got a spec for a processor chip plus a survey form from an Intel rep asking what else we wanted in it (the 80386?) - we were their largest single processor chip consumer at that point (Convergent Tech). Scribbled down "realtime counter register for code analysis etc.; bus-condition-match registers that trigger an NMI so I can throw away my Blue Box; bus-independent move instruction that optimizes for whatever bus/cache topology is present" and some other stuff. They only put in the first two, and I really wanted a constant-clock timer register, not a CPU-clock register, and I really, really wanted it to be in user mode, not kernel mode! Anyway, they're still there to this day in Intel processor chips.


In a product that effectively boils down to CRUD in many places, the most time-consuming thing for me so far has been one of a few things:

  * Complex reporting requirements with an extremely vague specification on a completely normalised database. The sheer quantity of various joins was fun. There must've been a better way
  * Optimising certain read operations which operated over a potentially large dataset.
  * Providing an efficient full-text search based around lucene which had some pretty interesting requirements, that came down to lucene filters in the end.
Overall, it's mostly been about optimisation. And that is simply measure, measure, measure. Doing things blindly gets you nowhere.


Worked on a .NET app with lots of C# and C++ interop, and we were seeing what looked like a rare race condition. The bug was hard to repro and never seemed to happen in our dev environment (like all great bugs!). The native code would just trigger access violations and we couldn't figure out what triggered them. We ended up getting extremely lucky: the bug started happening on my dev machine while I was running the app under a specific profiling tool.

It turned out we were racing the .NET garbage collector when passing a reference off to some native code. The variable being used by the native code would be collected by .NET, which determined that we no longer needed it while the native code was still actually using it. A classic .NET interop mistake that was fixed by wrapping the use in a 'using' clause. The reason we didn't see it while developing was that .NET garbage collection differs between Debug and Release builds (Debug builds extend the lifetimes of all variables so that they live for the entire length of the function they are used in; Release builds optimize for the shortest variable lifetime).

Another time, had a crash that seemed to occur after a consistent time period. The crash was in third-party code that we didn't have the source for. It 'kind of' behaved like a memory leak, except that everything was telling us the app's memory usage was normal.

After several weeks of almost tearing my hair out over it, I ended up setting a breakpoint in the third-party lib using WinDbg and using PageHeap on the app while it was working, in order to see who owned the memory that the third-party library was trying to write to. That led us to a line of code that was performing a malloc() without checking the result, then sending that off to the third-party library for use later. The malloc failed, neither we nor the third-party library checked for NULL, and the third-party lib wouldn't try to use the memory until later.

We eventually determined that heap fragmentation was causing the malloc to fail; we found that under certain conditions we'd leak small objects. Again WinDbg and other tools came in handy; they DID report high address space fragmentation, but it took us a long time to put the two things together. When we fixed the small object leak, the crash in the third-party library went away.

So classic bugs in hindsight, but they certainly taught us how awesome WinDbg can be (along with the other excellent debugging facilities provided by Windows). In the end, well worth the time investment in learning.


No story here, I'm just a lowly hacker whose most difficult challenges were because of my ignorance. I just want to say thanks for the thread; it's only a few hours old and it's already pretty damn fascinating!


Fixed the 802.11 stack in some crusty old Palm handheld that a vendor had thousands of in a warehouse - so they could sell them in China at < $200 for warehouse use, scanning etc. Everything was a teetering pile of barely-works: matching probe/response to the timing of their particular APs, recovering from unrecoverable errors by resetting the stack to a previous known state, roaming when their firmware didn't report enough AP info to really know where they were going. I still have a box of those Chinese APs in a closet somewhere.


I guess mine was back when I used to do reverse engineering: patching the V-Ray rendering system for 3ds Max. At the time V-Ray had all kinds of nasty checks that would introduce rendering artifacts randomly. Once you thought it was patched correctly, it would mutate on you and produce watermarks or just a black screen. Not to mention that if it was not patched correctly, the information of the person who leaked it could be recovered. For me this was a pretty big accomplishment at around 15-16. Fun times.


I've built a maze-traversing car with an Arduino. The maze was just black lines on a large white sheet. The code was fairly straightforward (depth-first search), but this was my first time working with hardware. The car used a row of LDRs to know its ___location relative to the lines of the maze, which was the main source of issues. After countless rebuilds of the car and tweaks to the code we finally got it to work, which was an extremely gratifying moment.
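The search half of a project like that is the easy part; a generic grid-DFS sketch looks something like this (the line-sensor calibration is where the time really goes):

  # Depth-first search over a grid maze: returns a path from start
  # to goal as a list of cells, or None. maze[r][c] == 0 means open,
  # 1 means wall.
  def dfs(maze, start, goal):
      stack, seen = [(start, [start])], {start}
      while stack:
          (r, c), path = stack.pop()
          if (r, c) == goal:
              return path
          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              nr, nc = r + dr, c + dc
              if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                      and maze[nr][nc] == 0 and (nr, nc) not in seen):
                  seen.add((nr, nc))
                  stack.append(((nr, nc), path + [(nr, nc)]))
      return None

  maze = [[0, 1, 0],
          [0, 0, 0],
          [1, 1, 0]]
  print(dfs(maze, (0, 0), (2, 2)))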


Building a fully functional connection-oriented transport-layer protocol. Looking back it was not even that big of a deal, but it was my first serious project as a student. Not even close to the complexity of TCP, but it implemented all the basic features, including state-of-the-art ARQ and rudimentary congestion control (TCP Reno). After about 2 months working on it EVERY DAY (and so many sleepless nights), I remember I almost cried when it first passed all the tests.
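For reference, the Reno part in caricature (a sketch of the textbook AIMD rules, not this author's implementation):

  # Caricature of TCP Reno congestion control: slow start,
  # congestion avoidance, and multiplicative decrease on loss.
  class RenoWindow:
      def __init__(self, mss=1):
          self.cwnd = 1 * mss        # congestion window (in segments here)
          self.ssthresh = 64 * mss   # slow-start threshold
          self.mss = mss

      def on_ack(self):
          if self.cwnd < self.ssthresh:
              self.cwnd += self.mss                         # slow start: exponential growth
          else:
              self.cwnd += self.mss * self.mss / self.cwnd  # additive increase

      def on_triple_dupack(self):
          # Fast retransmit/recovery: halve the window.
          self.ssthresh = max(self.cwnd / 2, 2 * self.mss)
          self.cwnd = self.ssthresh

      def on_timeout(self):
          # Timeout is treated as severe: back to slow start.
          self.ssthresh = max(self.cwnd / 2, 2 * self.mss)
          self.cwnd = 1 * self.mss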


A professional grade USB audio driver for Windows, to stream multichannel audio at low latencies to connected USB hardware. Also firmware for said hardware, with DSP processing (equalizers, filters, mixing, interpolation, metering). Completed with zero prior experience, and a little outside help.

Very proud of the result :-) https://www.youtube.com/watch?v=SmMHsIETBwA


(1) Saving FedEx. The founder of FedEx, F. Smith, was in his corner office for an afternoon to work out a schedule for his fleet of airplanes. At the end of the day he walked out tired and said, "We need a computer."

Someone I knew in college heard Smith, called me, and I flew from Maryland to FedEx and was hired to solve the problem.

Back in Maryland, we had some meetings, including one in a conference room at the Georgetown library, with various approaches to the problem, none good. There was some politics involved.

Really, the project was just mine, so I thought of a first-cut approach and attacked the problem; I was still teaching computer science at Georgetown. In six weeks I had a program, turned in the grades for the courses I was teaching at Georgetown, drove to Memphis, rented a room, tweaked the program a little, and declared it done.

Soon too many on the Board were saying that there could be no good solution to the scheduling problem. So, one evening a senior VP and I used my program to schedule the whole planned fleet into all the planned cities, printed out the schedule, and submitted it.

At a senior staff meeting, Smith's reaction was, to paraphrase, "An amazing document; solved the most important problem facing FedEx".

Our two Board representatives from Board member General Dynamics went over the schedule carefully and announced, "It's a little tight in a few places, but it's flyable." The Board was happy and a big chunk of equity funding was enabled.

So, the software solved the practical problem at the time and in time. But the software was not a grand solution to everything in fleet scheduling.

The hard parts were, (a) the politics, (b) designing a program that would be powerful enough to solve the practical problem at the time but easy enough to write to solve the problem in time.

Later I attacked the problem via 0-1 integer linear programming set covering; but the politics got much worse; the promised stock was very late and still just a handshake deal with Smith with only my offer letter on paper; my wife was still in Maryland; and I wanted either the stock or a Ph.D. Smith's last promise of stock was $500,000 worth that might be worth 1000 times that now, but it was just a handshake deal. So, left for a Ph.D.

(2) Nuclear War at Sea. To support my wife and me to the end of our Ph.D. degrees, I took a part-time job in military systems analysis. At one point the US Navy wanted to know how long the US SSBN fleet would last under a special scenario of global nuclear war limited to the sea. They wanted their answer in two weeks.

There was an old paper of B. Koopman that argued that 'encounters' at sea between Red and Blue weapons systems would form a Poisson point process. I added on a little and got a continuous-time, finite-state-space Markov process. There is a closed-form solution as a matrix exponential, but due to a combinatorial explosion the state space was far too large to do anything numerical with that solution.

But it was easy enough to generate sample paths, so I wrote software to generate and average, say, 500 sample paths. The work passed a technical review by a famous mathematician, and the Navy got their results on time. The next day we took a vacation in Shenandoah, and my wife got a vacation she wanted on time, too!
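Generating and averaging sample paths of a continuous-time Markov chain is mechanically simple; a toy sketch (the states and rates are invented for illustration, nothing like the actual Navy model):

  # Average sample paths of a toy continuous-time Markov chain.
  import random

  RATES = {   # state -> list of (next_state, transition_rate)
      "both_alive": [("red_dead", 0.3), ("blue_dead", 0.2)],
      "red_dead":   [],
      "blue_dead":  [],
  }

  def sample_path(t_end):
      # Exponential holding times; jumps chosen with rate weights.
      t, state = 0.0, "both_alive"
      while True:
          outs = RATES[state]
          total = sum(r for _, r in outs)
          if total == 0:
              return state             # absorbing state
          t += random.expovariate(total)
          if t >= t_end:
              return state             # no jump before the horizon
          u, acc = random.random() * total, 0.0
          for nxt, r in outs:
              acc += r
              if u <= acc:
                  state = nxt
                  break

  # Estimate P(both sides still intact at t = 5) from 500 paths.
  n, t_end = 500, 5.0
  alive = sum(sample_path(t_end) == "both_alive" for _ in range(n))
  print("P(both alive) ~= %.3f" % (alive / n))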

(3) Winning a Contract Competition. There in Maryland I was in a software house working for a US Navy lab. We were in a competition for a software development contract. Part of the work was to measure the power spectrum of ocean wave noise. I got smart on power spectral estimation and wrote some illustrative software that passed white noise through a filter with a specific transfer function and accumulated the empirical power spectrum of the filter's output. The software showed what the math claimed: at the low frequencies the project wanted, an accurate power spectrum needed a surprisingly long interval of data. With short intervals of data, the estimated power spectrum could have big peaks that, really, were just sampling noise that would go away as the length of data increased.
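That demonstration is easy to reproduce today (a sketch with numpy/scipy; the Butterworth filter is an arbitrary stand-in for the transfer function in question):

  # White noise through a known filter: compare PSD estimates from a
  # short record vs a long one. The short record shows spurious peaks
  # that fade as the data length grows, as described above.
  import numpy as np
  from scipy import signal

  rng = np.random.default_rng(0)
  b, a = signal.butter(4, 0.1)        # an arbitrary stand-in filter

  for n in (512, 262144):             # short record vs long record
      x = signal.lfilter(b, a, rng.standard_normal(n))
      f, pxx = signal.welch(x, nperseg=256)
      print(n, pxx[:4])               # long n hugs |H(f)|^2; short n is noisy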

So I called one of the customer's engineers and showed them the news. As a result, our software house won the competition.

(4) Anomaly Detection. I was working on applying artificial intelligence to the monitoring and management of server farms and their networks. One of the main techniques was 'thresholds', and I wanted something better.

So, I put my feet up, popped open a cold can of Diet Pepsi, reviewed some of my best graduate school material including some of ergodic theory, had some ideas, wrote out some theorems and proofs, and wrote some corresponding software.

So, suppose we are given a system to monitor. Suppose 100 times a second we get data on each of 15 variables. Collect such data as 'history' data ('learning data') for, say, three months (assume a stable server farm or network). Study this data, let it 'age', and be fairly sure that the system being monitored was 'healthy' during that time. Then in real time, 100 times a second, report whether the system is 'sick' or 'healthy', with a false alarm rate known in advance and adjustable over a wide range. The result is a statistical hypothesis test that is both multi-dimensional and distribution-free, one of the first such. Although there is not enough data on actual anomalies to apply the Neyman-Pearson result, the data is processed in a way that promises a relatively high detection rate for any selected false alarm rate. And find a way to do the computations quickly and efficiently.

I did those things.

Some office politics got involved: Suddenly I was told to write a paper on my work and that the company would review the paper. If the paper was not publishable, then I would be fired. I wrote the paper; the company claimed that the paper was not publishable; and I was fired.

The guy who walked me out the door had been in management for about 20 years but was demoted out of management the next day. The main guy after my ass was two levels higher up, pissed off at me for no good reason; two weeks later he was demoted one level in the organization chart, was given a "performance plan", which he failed, and was demoted out of management.

The company wrote me a letter giving me intellectual property rights to my work. Out of the company, I submitted the paper for publication. The paper was published in a good Elsevier journal, the first journal to which the paper was submitted, without significant revision (one reviewer wanted to change how the first line of each paragraph was indented). The Editor in Chief of the journal invited me to give the paper at a conference he was running -- I declined.

(5) Internet Search, Discovery, Recommendation, Curation, Notification, and Subscription. My view is that current Internet search techniques are effective for only about 1/3rd of Internet content, searches users want to do, and results they want to find. I want the other 2/3rds.

With my feet up again and another cold can of Diet Pepsi, I had some ideas and wrote the corresponding code. Now all the code is written for a corresponding 'search engine' Web site except I have a nasty little bug having to do with class instance de/serialization. Should fix that today. Should be live in a few more months.


do you post under handle sigmaalgebra on avc.com? if so, have always really enjoyed reading your take on stuff.


There may be several people, all anonymous, writing similar material.


I ported an X11 server + SSH + the FreeNX compression protocol stack to a non-POSIX, cooperatively threaded OS (BREW). Even worse, the toolchain was broken: the generated binaries were corrupted whenever an identifier longer than 255 characters was used. Thanks to the wonders of the STL and its templates, there were several of those. And to make things more interesting, there was no debugger, only a broken simulator on top of Windows.


Oh god, BREW... My condolences, and serious props for managing to port such a complex stack to it!


- Writing a MIDI player on ARM. Not this fancy Android thing; it was for a simple (and old by today's standards) ARM board, no OS, and I think 64k of RAM (or maybe 512k, not sure). Doing the audio synth and everything (this was for a university project). It wasn't great, sure, but it did the trick.

- Figuring out how to store data in an embedded Linux system with no writable partitions. (That's more of a hack than anything, but still, it worked!)


Writing a Forth VM in AVR assembler, starting from knowing nothing about either. Or writing a 9P file server in Postgres PL/pgSQL, which is how I learned 9P.


Learning to crack[1] license-controlled software and reverse engineering videogame console data. e.g. leader board hacks[2]

1. https://www.youtube.com/watch?v=S5VlvrlTNmM

2. https://www.youtube.com/watch?v=lxj11crtQjA


An automatically invalidated cache for the ORM of an e-commerce site. The whole thing was in PHP, and my cache was grafted onto a site that had been growing organically for a few years. The site still worked, and they could ditch half of their servers thanks to it.


I solved the travelling salesman problem in constant time, but this margin is too small for me to write my answer.


1 - Getting the Celestron Ultima 2000 "goto" telescope debugged and working. There were all kinds of problems: the software was a home-brew multi-threaded real-time operating system that ran on tiny little inexpensive processors, and had a lot of bugs. The closed loop control algorithms were broken. When we started, the telescope would go into "paint can shaker" mode when we told it to move to a star. The mechanical design had a lot of problems and went through several redesigns. There were problems with the encoders (sinusoidal errors too large to ignore, which ended up requiring software calibration and compensation), and the main axes were not sufficiently close to perpendicular, also requiring per-system calibration and software compensation. There were electrical problems requiring strategically placed capacitors and things. For crying out loud, we even had a Y2K problem! The scope had difficulty tracking objects over the zenith. The gearboxes had a significant amount of backlash, requiring software compensation. And on and on. It was an extremely difficult system to get working, but in the end it was the most satisfying thing I'd done in my career to that point. (And, it was nice to do something as a software guy for my father, Tom Johnson, the founder of Celestron.)

2 - The Integrated Medical Systems LS-1, a portable "ICU in a box" done under contract for the US Army. The box integrated multiple medical device subsystems (ventilator, ECG, infusion pumps, smart battery charging, invasive and non-invasive blood pressure, SpO2, etc. etc.) They communicated internally over 100-base-T cat-5, and also had to interface to hospital IT via wifi. And, there was a remote interface unit that had to be able to remotely monitor and control the entire system via internet, from the other side of the planet if necessary. We ended up doing a lot of work to ensure that the system performed exactly as expected and required by clinicians, even in these sort of remote-operation scenarios. (The thing the Army wanted was to have video and audio from the bedside to remote clinicians, and have the remote people act like team members that were coordinating their efforts with the bedside clinical team.)

3 - (Totally for the hell of it, in hobby mode) I was reading a bit about Shor's algorithm, which uses quantum computing to factor products of pairs of prime numbers. One key part of the algorithm depends on FFTs, to help with detecting the lengths of cycles. If you have a vector of length N consisting of K equally spaced non-zero elements, and K evenly divides N, then the FFT of that vector will have N/K evenly spaced non-zero elements, and crucially they will start at the zero'th element of the output vector. (There is no such requirement or assumption on the input side.) Happily enough, this is "almost" true if K does not happen to evenly divide N. But this is a bit difficult to prove. Some reasonably complete and rigorous presentations of quantum computing basically say, "The proof of this is outside the scope of the present work." So, for fun I took it as a challenge to create an elementary proof of this lemma. It turned out to be quite hard, but REALLY fun and satisfying.
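The property in the case where K divides N is easy to check numerically (a quick numpy sketch):

  # Check the comb-to-comb FFT property described above: K equally
  # spaced spikes in a length-N vector (K dividing N) transform into
  # N/K equally spaced spikes that always start at bin 0.
  import numpy as np

  N, K = 32, 4
  x = np.zeros(N)
  x[3::N // K] = 1.0       # K spikes spaced N/K apart, offset 3

  X = np.fft.fft(x)
  print(np.nonzero(np.abs(X) > 1e-9)[0])
  # -> [ 0  4  8 12 16 20 24 28]: N/K = 8 spikes, spaced K, from bin 0

Note the input spikes are offset by 3, yet the output spikes still start at bin 0, which is the crucial part for cycle detection.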

4 - Never been really happy with the standard presentations of red-black tree algorithms. It just felt like there was some underlying simplicity in there that was struggling to get out. So, I created a new formulation of red-black trees and corresponding algorithms. Did a web site (gregfjohnson.com/redblackbuilder.html) that illustrates these algorithms, and supports forward and backward execution and single-stepping through the algorithms. This web site was pretty darn hard to get correct.


I can think of a few cases.

Most of them were reinventing the wheel. :-(

I think the chemists have a saying: "With a few hard and dedicated work weeks in the lab, you might save hours in the library!"


My high school CS teacher always told us, "Weeks of coding can save you hours of designing!"


Yes and then they complain that the published method doesn't actually work.


Going from real development tools to basic web development. Essentially from Delphi to web development, passing through Qt/C++ development, with some excursions into Android and other things. Journey's destination: got really, really, really tired of web development and ran to something without UIs (beyond a CLI).

Manually making sure your frontend/backend matches. I'll repeat: manually. Weakly typed. Constant bugs. Constantly seeing parts of your implementation getting deprecated, going unsupported, and of course not matching clients' "hot new thing" (WHY do you need a UI designer - a title given to every fuckwit that has ever started Photoshop, successfully or, usually, not - on an internal order management system?). Impossible to generate tests that actually catch those bugs. Inconsistent language implementations, to say nothing of the DOM. Zero support for any kind of legacy thing. A layout engine inferior to what was available before I was born, on machines that get their ass kicked by my watch (talking about the NeXT platform).

HTML/JavaScript's idea of "cross-platform" doesn't even make the basic integer datatype actually cross-platform. HTML UIs don't reflow at all. There are UI engines that reflow pretty badly (VB). There are UI engines that reflow kinda badly (Android/Delphi, with Delphi slightly better). There are UI engines that actually reflow (Qt/GTK); I wouldn't say they do it really well, but acceptably. The web is undoubtedly much worse than VB/Delphi at reflowing. Much worse.

Getting 4 buttons stuck to the lower right of the component currently on display, which will push out other things, is in-fucking-possible. You see theses written - hundreds of pages - on, get this, getting 2 columns of text displaying side by side. Editing a string correctly, knowing it might contain non-Latin characters, is impossible. Translating a webpage dynamically from one language to another: impossible (you can change it on the server, which of course means that any strings generated in JavaScript need a different solution - aargh).

Getting a redraw loop going for a game is so ridiculously difficult and slow it's painful to write about it. Give me a fucking canvas that calls me when it needs to draw itself, with OPTIONAL double-buffering. Doesn't exist (no, HTML5 canvas is not this, is slow and resource hungry, and this will never change due to how it's designed).

I was happy for 6 months with a small side-scroller implemented on a microcontroller that, together with the LCD, ran for about 6 months on an old CR2025 battery. That battery provides 225 mAh over its entire lifetime - keeping a game going for 6 months.

The state of the art HTML5 side scroller at http://playbiolab.com/ drains my phone battery in 42 minutes flat. Granted, the music's better. The screen size isn't. That battery is a rechargeable 2300 mAh.

Getting anything remotely resembling productivity going on these platforms (I refuse to refer to web as a single platform) was so hard that I just gave up and ran the other way. And now, of course, every UI toolkit is "legacy" at best, just plain unmaintained at worst, and a non-starter for any project you might want to do.

That is actually considered progress.

How did webdev ever get this fucked up?


I've been doing web dev for over 15 years now (granted, most of that as a hobby) and I've never really felt this kind of frustration. Weak typing and most of the other things you mentioned have never really bothered me. That's not to say that there aren't challenges, but that's what keeps it fun for me :)


Have you done development outside of web development? 15 years would make your start time 1999. Did you ever do a large program with a Swing GUI? A Delphi program?

What do you think ? How do your experiences on the web compare to the others ?


Never done any Delphi and only a couple fairly small Swing/GTK apps. Most of my other development experience has been backend systems. I'll agree Web development has a lot of rough edges, but the main reason I love it so much is that it's the easiest way to get code I write in front of others.


One thing I have learned is that before writing much code, one of my main considerations is "how am I going to debug this?"



