I had a running joke at a place I used to work that I should just quit and start my own US-based company that "sold" open-source software, so that it'd have a US-flagged company behind it. IMO an easy way to branch out to commercial work instead of just government contracts would be to audit the source code. It would probably save a few companies from a left-pad incident (or worse), since they'd have to pull from my servers (or cache mine), and I'm smart enough to recognize that updating left-pad to be nothing would not be helpful to any client.
I use TextClipboardHistory because it has Maccy's current feature set, but is hackable via Lua and is a Hammerspoon Spoon. Being a Spoon means it installs automatically, doesn't clutter my /Applications dir, and lets me easily use VCS to maintain my configuration for it.
This deserves a top-level comment. John Britton's NY Tech Meetup demo of Twilio[0] in 2010 is legendary. The CEO had been doing it in small groups for a little while, but the whole dynamic of it changed in such a large venue. Epitome of "show, don't tell." Hard to overstate what an impact it had on the company at the time (I think we were about 25 employees).
Effectively yes. He did let me know that if it didn't work out that I would bear the consequences :-). By that time in my career I knew that you're only as good as your previous decision, good or bad.
It became NetApp's best-selling product at the time. I don't deserve any credit for that of course; the folks who did the work to get it out the door and sell it get the credit. My job was to do what the Army engineers do: land on the beach and clear the obstacles between the beach and the objective so that the main force can do what they came for.
When I interview managers I ask them why they want to be a manager. It is not uncommon for them to say, "Because I want to be able to make the right decisions." And I ask them, "And if they are the wrong decisions, are you prepared to be told to leave?" It is a good litmus test.
This is one of those interesting paradoxes that I haven't figured out yet.
Owning the legacy software that runs the business tends to provide job security at your current job, but can hinder your professional growth at both your current job as well as any future jobs.
Getting on to projects that are intended to replace legacy software tends to get you a lot of positive visibility politically and the ability to learn new technology and skills. If the new project fails there tends to be little fallout from the failure, since so many people are attached to the project at all levels. The goal posts will move so it can be reframed as a success.
If legacy software fails it usually means long hours and a lot of people nervously asking you when it will be fixed.
Unfortunately there's just not a lot of value to being able to put on your resume that you're an expert at an older language that nobody has heard of, built on an in-house framework that will never be used outside of your current company.
As an aspiring woodworker and handyman, I just want to suggest This Old House Insider [1].
The subscription gives you access to the entire This Old House and Ask This Old House library. They're in the process of adding all New Yankee Workshop episodes as well as PDF plans of those projects (normally $14.99 each). It's become my Sunday morning go-to now.
The most recent episodes of the current season are also available on PBS Passport which you can get for a $5 monthly donation to your local station.
Edit: It's something like $95.60/year. I've been getting it as a gift so didn't realize how bad the signup form was. Also, they don't have any app/Roku support yet, so I cast the video from Chrome.
I've written documents for Jeff, and IMO, the six-page narrative memo is a key part of Amazon's success. It's so easy to fool both yourself and your audience with an oral presentation or PowerPoint slides. With narrative text that has to stand on its own, there is no place for poor reasoning to hide. Amazon's leadership makes better decisions than their competitors in part because they are routinely supplied with better arguments than their competitors.
"Writing is nature's way of letting you know how sloppy your thinking is." -Dick Guindon, via Leslie Lamport
I don't want to hijack the thread subject but here are my thoughts on the usefulness of fuzzing of safe languages.
Even in the absence of memory corruption bugs, there is a class of bugs that can emerge in any general-purpose language: slowness/hangs, assert failures, panics, and excessive resource consumption.
Beyond those, you can detect invariant violations, (de)serialization inconsistencies (e.g. deserialize(serialize(input)) != input; see [1]), and different behavior across multiple libraries whose semantics must be identical (cryptocurrency implementations are notable in this regard, as deviation from the spec or canonical implementation in the execution of scripts or smart contracts can lead to chain splits).
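As a concrete sketch of the round-trip case, here is roughly what such a harness can look like with Go's native fuzzing; Encode and Decode are hypothetical stand-ins for whatever serializer is under test:

    package codec

    import (
        "reflect"
        "testing"
    )

    // FuzzRoundTrip checks the invariant decode(encode(v)) == v for every
    // input the fuzzer manages to get past the decoder.
    func FuzzRoundTrip(f *testing.F) {
        f.Add([]byte(`{"a": 1}`))
        f.Fuzz(func(t *testing.T, data []byte) {
            v, err := Decode(data) // hypothetical decoder under test
            if err != nil {
                return // rejecting malformed input is fine
            }
            out, err := Encode(v) // hypothetical encoder under test
            if err != nil {
                t.Fatalf("encode failed on a value we just decoded: %v", err)
            }
            v2, err := Decode(out)
            if err != nil {
                t.Fatalf("decode(encode(v)) failed: %v", err)
            }
            if !reflect.DeepEqual(v, v2) {
                t.Fatalf("round-trip mismatch: %#v != %#v", v, v2)
            }
        })
    }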
With some effort you can do differential 64-bit/32-bit fuzzing on the same machine, and I've found interesting discrepancies between the interpretation of numeric values in JSON parsers, which makes sense if you think about it (size_t and float have a different size on each architecture, causing the 32-bit parser to truncate values). This might be applicable to every language that does not guarantee type sizes across architectures, like Go (not sure?), but I haven't tested that yet.
You can detect path escape/traversal (which is entirely language-agnostic but potentially severe) by asserting that any absolute path that is ever accessed within an app has a legal path, or by fuzzing a path sanitizer specifically.
And so on.
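To make the path-sanitizer case concrete, here is a minimal sketch with Go's native fuzzing; SanitizePath and the /srv/appdata root are made up for illustration:

    package pathcheck

    import (
        "path/filepath"
        "strings"
        "testing"
    )

    func FuzzPathEscape(f *testing.F) {
        f.Add("../../etc/passwd")
        f.Add("a/./b/../../../../c")
        f.Fuzz(func(t *testing.T, p string) {
            const root = "/srv/appdata"
            got := SanitizePath(root, p) // hypothetical sanitizer under test
            // Invariant: whatever the sanitizer returns must stay under root.
            clean := filepath.Clean(got)
            if clean != root && !strings.HasPrefix(clean, root+string(filepath.Separator)) {
                t.Fatalf("path escape: %q -> %q", p, clean)
            }
        })
    }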
Code coverage is the primary metric used in fuzzing, but other metrics can be useful as well. I've experimented extensively with metrics such as allocation, code intensity (the number of basic blocks executed, which helped me prove that V8's WASM JIT compiler can be subjected to inputs of average size that take >20 seconds to compile), and stack depth; see also [2].
Any quantifier can be used as a fuzzing metric, for example the largest difference between two variables in your program.
Let's say you have a decompression algorithm that takes C as an input and outputs D. Calculate R = len(D) / len(C), so that R is the ratio of decompressed output to compressed input. Use R as a fuzzing metric and the fuzzer will tend to generate inputs that have a high decompression ratio, possibly leading to the discovery of decompression bombs [3].
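Go's native fuzzer has no user-defined counters, so here is a rough, assertion-style variant of that idea: bound R and fail loudly instead of feeding R back as a signal (the bound of 1000 is arbitrary, and the standard flate reader stands in for the decompressor):

    package bomb

    import (
        "bytes"
        "compress/flate"
        "io"
        "testing"
    )

    func FuzzDecompressionRatio(f *testing.F) {
        f.Fuzz(func(t *testing.T, data []byte) {
            const maxRatio = 1000
            r := flate.NewReader(bytes.NewReader(data))
            defer r.Close()
            // Read at most maxRatio*len(C)+1 bytes of output; hitting that
            // limit means the input expands suspiciously far.
            limit := int64(maxRatio)*int64(len(data)) + 1
            n, err := io.Copy(io.Discard, io.LimitReader(r, limit))
            if err != nil {
                return // malformed stream; not what this harness probes
            }
            if n >= limit {
                t.Fatalf("possible decompression bomb: %d compressed bytes expanded past %d", len(data), limit)
            }
        })
    }

With a fuzzer that does expose custom counters, R itself could be fed back so the fuzzer actively climbs toward high-ratio inputs, which is the approach described above.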
Wrt. this, libFuzzer now also natively supports custom counters I believe [4].
Based on Rody Kersten's work I implemented libFuzzer-based fuzzing of Java applications supporting code coverage, intensity and allocation metrics [5], and it should not be difficult to plug this into ClusterFuzz/oss-fuzz.
Feel free to get in touch if you have any questions or need help.
I have been using this custom hosts file for a few months and it works like a charm. I just have to update it from time to time (but that can be automated).
"This repository consolidates several reputable hosts files, and merges them into a unified hosts file with duplicates removed. A variety of tailored hosts files are provided."
Pro tip: I was a neurosurgical anesthesiologist at the University of Virginia Health Sciences Center for twelve years (1983-1995). Every now and then I'd get a call from our billing office that they'd received a letter from a patient I'd cared for who was too poor to pay what our department had billed (usually many thousands of dollars: I had NO IDEA what our charges were, BTW, nor did any of my colleagues in the department). Every time that happened, I'd go over to the billing office and — without bothering to read the letter — tell our billing chief to waive all anesthesia-related charges. In other words, by my initialing a form we had, their balance due us was 0. Try it, you never know, it might work for you.
*navel. Looking at one's own stomach, an internally-focused meditation. Nothing to do with the sea :)
But yes, it does seem rather obvious.
However, I do think it's illuminating in that this business practice is really only relevant to big companies. It's more effective the bigger you are. If you can have a team of a dozen programmers, testers, and sysops engineers employed full-time to make 10,000 other people more effective, your company will be more efficient than mine. A small business will be much better off spending a couple hours creating a website on Squarespace and calling it good, because the ROI just is not there. Software scales, and this makes big companies bigger.
Yes, at FedEx, we considered that problem for about three seconds before we noticed that we also needed:

(1) A suitable, existing airport at the hub ___location.

(2) Good weather at the hub ___location, e.g., relatively little snow, fog, or rain.

(3) Access to good ramp space, that is, where to park and service the airplanes and sort the packages.

(4) Good labor supply, e.g., for the sort center.

(5) Relatively low cost of living to keep down prices.

(6) Friendly regulatory environment.

(7) Candidate airport not too busy, e.g., don't want arriving planes to have to circle a long time before being able to land.

(8) Airport with relatively little in the way of crosswinds and with more than one runway to pick from in case of winds.

(9) Runway altitude not too high, e.g., not high enough to restrict maximum total gross takeoff weight, e.g., rule out Denver.

(10) No tall obstacles, e.g., mountains, near the ends of the runways.

(11) Good supplies of jet fuel.

(12) Good access to roads for 18-wheel trucks for exchange of packages between trucks and planes, e.g., so that some parts could be trucked to the hub and stored there and shipped directly via the planes to customers that place orders, say, as late as 11 PM for delivery before 10 AM.

So, there were about three candidate locations: Memphis and, as I recall, Cincinnati and Kansas City.

The Memphis airport had some old WWII hangars next to the runway that FedEx could use for the sort center, aircraft maintenance, and HQ office space. Deal done -- it was Memphis.

That's how the decision was really made.

Uh, I was there at the time, wrote the first software for scheduling the fleet, and had my office next to that of founder, COB, CEO F. Smith.
The post is good but just scratches the surface on running Kinesis Streams / Lambda at scale. Here are a few additional things I found while running Kinesis as a data ingestion pipeline:
- Only write out logs that matter. Searching logs in CloudWatch is already a major PITA. Half the time I just scan the logs manually because search never returns. Also, the fewer println statements you have, the quicker your function will be.
- Lambda is cheap; reporting function metrics to CloudWatch from a Lambda is not. Be very careful about using this.
- Having metrics from within your Lambda is very helpful. We keep track of spout lag (the delta between when an event got to Kinesis and when it was read by the Lambda), source lag (the delta between when the event was emitted and when it was read by the Lambda), and the number of events processed (were any dropped due to validation errors?). There's a small sketch of spout lag after this list.
- Avoid using the Kinesis auto-scaler tool. In theory it's a great idea, but in practice we found that scaling a stream with 60+ shards causes issues with API limits. (Maybe this is fixed now...)
- Have plenty of disk space on whatever is emitting logs. You don't want to run into the scenario where you can't push logs to Kinesis (e.g. throttling) and they start filling up your disks.
- Keep in mind that you have to balance your emitters, Lambdas, and your downstream targets. You don't want too few or too many shards. You don't want 100 Lambda instances hitting a service with 10 events each invocation.
- Lambda deployment tools are still young but find one that works for you. All of them have tradeoffs in how they are configured and how they deploy.
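A minimal sketch of computing spout lag inside the handler, assuming the aws-lambda-go event types; the metrics sink is deliberately left abstract given the CloudWatch cost caveat above:

    package main

    import (
        "context"
        "time"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // handler computes a per-record "spout lag": the time between a record's
    // arrival in Kinesis and the moment this invocation reads it.
    func handler(ctx context.Context, ev events.KinesisEvent) error {
        now := time.Now()
        for _, rec := range ev.Records {
            arrived := rec.Kinesis.ApproximateArrivalTimestamp.Time
            spoutLag := now.Sub(arrived)
            // Push spoutLag to your metrics sink of choice (StatsD, a batched
            // CloudWatch writer, etc.) rather than calling PutMetricData per
            // record, which gets expensive fast.
            _ = spoutLag
        }
        return nil
    }

    func main() { lambda.Start(handler) }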
There are some good tidbits in the Q&A section from my re:Invent talk [1]. Also, for anyone wanting to use Lambda but not wanting to reinvent all of this, check out Bender [2]. Note: I'm the author.
AWS sells optionality. If you build your own data center, you are vulnerable to uncertain needs in a lot of ways.
(1) your business scales at a different rate than you planned -- either faster or slower are problems!
(2) you have traffic spikes, so you have to over-provision. There is then a tradeoff in doing it yourself: do you pay for infrastructure you barely ever use, or do you have reliability problems at peak traffic?
(3) your business plans shift or pivot
A big chunk of the Amazon price should be considered as paying for flexibility in the future. It isn't fair to compare prices backwards-looking, where you know what your actual needs were and can compare what it would have cost to meet them with AWS vs. in-house.
The valid comparison is forward looking: what will it cost to meet needs over an uncertain variety of scenarios by AWS compared to in-house.
The corollary of this is that, for a well-established business with predictable needs, going in-house will probably be cheaper. But for a growing, changing, or inherently unpredictable business, the flexibility AWS sells makes more sense!