Minimalist work sample tests don't work. The work sample test is the test. There is no criterion for whether to hire someone other than their performance on the work sample test. Otherwise it's not a work sample test; it's an intro.
This is a step forward for our industry. For those who don't like it, there are plenty of jobs where you can do the traditional whiteboard hazing.
I've had a lot of success hiring with minimalist work samples.
>Otherwise it's not a work sample test; it's an intro.
Well, like I said, I think the work sample should only need to show basic awareness, and the rest of the process should be based on a discussion of the candidate's relevant experience, the company's needs, the intended role, and so on, so perhaps it's not wrong to call the code sample an "intro". I referred to it as a "litmus test", meaning it's a simple pass/fail: either they can throw something together in 1-2 hours showing basic awareness and competence in the problem space, or they can't.
Overall fit is much more important than something like "candidate Y has better indentation habits". Human cycles are thousands of times more valuable than CPU cycles. It's better to choose the good-fit candidate whose coding habits can be trained up over the course of their employment than the unstable, abrasive candidate whose code ran 1.5x faster than anyone else's.
>whiteboard hazing
Heh, I don't suggest this either.
The frightening reality about hiring is that it can't really be reduced to a formula. Subjective judgments have to be made (in both directions, meaning you shouldn't be absolutist about things in a code sample that are correctable) if you're going to get good hires and a cohesive team. Questions like "Is this person an active, curious learner?", "Can this person field fair critiques of their work professionally, reasonably, and humbly?", and "Will this person's personality mesh with the rest of the team in a professional work-day setting?" are all much more important than a raw benchmark of their code sample.
I know that a lot of people don't like that subjectivity, especially when they're on the wrong end of a subjective judgment, but I don't think it's a good idea to discard evaluation on those metrics. We just have to hope hiring managers are using reasonable subjective criteria, and if they're not at the company we want to work for, we gotta move on. Fortunately for us, there are plenty of fish in the sea looking to employ programmers right now.
Our sample projects are meant to tell us that a person is technically capable of doing the individual work for a given job. The work day that comes after answers "can they work well with us?"
The criteria for both the project and the work day are predefined ahead of time, and they're the part we spend the most time on. Sample projects have ~40 criteria; nothing quite so minute as "indentation habits", but we do give points for "idiomatic use of golang" on the Go work.
What we found when we were putting processes in place is that we had just shifted our bias. As an example: if a recruiting process judges abrasiveness during a traditional set of interviews, it's probably biased. It's better to put people into a work scenario and judge their actual work and how they interact with other people.
A self-contained project that gives the candidate room to demonstrate a basic grasp of core skills. It should take no more than an hour, or two at most, if the candidate stays within the constraints of the project. (In my experience, candidates will often throw in a few extra features since we're not monopolizing their time with a demanding list of requirements, which provides really valuable insight into the candidate.) In most cases, if the candidate is capable of a simple project like that, they'd be equally capable of a larger one, and there's no use wasting anyone's time on something bigger (unless you're focusing on the wrong stuff, like specific knowledge of one particular library).
For my purposes, I like variations on a project that asks them to implement a very basic listing call against a public API, in any language they want. This shows that they can put together a project in some language, look up API and library documentation, provision an API key, and write a function that reads from the endpoint. I understand that Compose covers a different space and would need to tailor its own minimalist project.
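To make the scope concrete, here's a rough sketch of the kind of minimal solution that project invites. The endpoint, key, and field names are hypothetical stand-ins (any public JSON API would do); it's only meant to show the shape of the exercise, not a reference answer:

```python
# Sketch of a minimal "listing call" project: fetch a listing from a
# (hypothetical) public JSON API and print a one-line summary per item.
import json
import urllib.request

API_URL = "https://api.example.com/v1/widgets"  # hypothetical endpoint
API_KEY = "your-key-here"                       # candidate provisions their own

def fetch_listing(url=API_URL, key=API_KEY):
    """Call the listing endpoint and decode the JSON response body."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {key}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize(items):
    """Reduce each listing entry to a 'name: description' line."""
    return [f"{item.get('name', '?')}: {item.get('description', '')}"
            for item in items]

if __name__ == "__main__":
    for line in summarize(fetch_listing()):
        print(line)
```

Even something this small surfaces the things the exercise is probing for: project setup, reading external docs, auth, and turning a response into useful output.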
For me, if they're capable of this minimalist code task, it demonstrates basic competency and filters out almost all of the candidates who aren't worth spending time on. It doesn't demand too much of their time and allows them maximum freedom, which shows us a lot not only about their organizational skills but also about their code ideals. It doesn't penalize a great candidate for never having run across a specific language, platform, library, algorithm, or data structure; good candidates learn those things quickly. It doesn't depend on trivia or on the number of times they've seen the problem in past interviews or tests. It's not overly academic and doesn't depend on how recently they reviewed their compsci textbook.
The rest of the information needed to make a hiring decision is derived from an extensive discussion on their background in the field, their attitude and goals, and their immediately relevant experience.