Here's a puzzle game. I call it Reverse the List of Integers (mathstodon.xyz)
171 points by self on April 12, 2024 | 169 comments



Hi. I'm the post's author. This was something I dashed off quickly, so I was a bit imprecise in the language. Clarifications:

Only positive integers are meant to be allowed. (Zero excluded.)

Combining is meant to work on adjacent pairs of integers.

If this is used in coding interviews, I deny any responsibility. Unless it's used in interviewing me, in which case I will totally take credit.


I got nerd sniped :) and created https://github.com/unrealwill/ReverseListPuzzle to explore all the solutions, with graph algorithms.


Lol, they got me too. Mine has an algorithm to find a solution quickly, and another to find the shortest solution possible. I also created a very rudimentary GUI so that the game could be played. I'll clean the GUI up later though.

https://github.com/davidalayachew/ReverseTheListOfIntegers


Brute-force says the hardest puzzle for n=6 is [1,6,3] with a minimum of 14 moves.

And for n=7 it's [5,4,1,2,7] with a minimum of 26 moves.


Thanks for posting this. I made a playable version of your game too. Bundled inside are algorithms for solving it as well.

https://github.com/davidalayachew/ReverseTheListOfIntegers


For whatever it's worth, zero gets you absolutely nowhere; there's no harm in allowing it.


Should we assume the numbers are initially positive and distinct, too?


They must be, because otherwise you'd never be allowed to split or combine to create the final list.


Technically there are cases in which you can. The repeated values will act like separators that you cannot act on (because once you act on one, as you said, one cannot create the final list again), but then it becomes multiple instances of the puzzle, which seems pointless.


Good point!


I thought the game was cool so I built a little version of it here:

https://blaise.gg/number_game/index.html


Interesting, it appears I've encountered an impossible combination of numbers: [2, 5, 1, 3]

The only possible move is to add 1 and 3. Can't ever split the 2, as that would result in [1, 1]; can't split the 5, because that would be [4, 1] or [3, 2]; can't split the 3, as that would be [2, 1] or [1, 2]; can't add the 5 to anything, as the result would be larger than 5.


I was given [1,2] in the second run. Even easier to see it's impossible.


Yeah, the generation is very simple: 2-4 random numbers between 1 and 9 inclusive. Definitely gives impossible puzzles sometimes (because I don't know a general algorithm for generating only solvable ones).


If you only have numbers up to 9 an exhaustive search shouldn't be too bad (should be ~9! states if you memoize), you could implement a solver in Javascript and see whether a solution can be found before you give it to the user.
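A minimal sketch of that idea in Python (the function names are mine; rules per the author's clarification: positive distinct values, adjacent combines only, nothing above the original maximum):

```python
from collections import deque

def legal_moves(state, mx):
    """All states reachable in one move: adjacent combines and two-way splits."""
    out = []
    for i in range(len(state) - 1):              # combine an adjacent pair
        s = state[i] + state[i + 1]
        cand = state[:i] + (s,) + state[i + 2:]
        if s <= mx and len(set(cand)) == len(cand):
            out.append(cand)
    for i in range(len(state)):                  # split into two smaller positive parts
        for a in range(1, state[i]):
            cand = state[:i] + (a, state[i] - a) + state[i + 1:]
            if len(set(cand)) == len(cand):      # also rules out equal halves
                out.append(cand)
    return out

def solvable(nums):
    """BFS with a visited set; values stay distinct and in 1..max, so the space is finite."""
    start, target = tuple(nums), tuple(reversed(nums))
    mx = max(start)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == target:
            return True
        for nxt in legal_moves(cur, mx):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

Both [2, 5, 1, 3] and [1, 2] from the comments above come back unsolvable, while the original [7, 5, 3] is solvable.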


Couldn't you split 3 into [4,-1] ?


None of the implementations support negative numbers, but the rules say integer which includes negative numbers, so that should be a legal move.


Negative numbers would break the game by making the number of possible splits infinite. E.g. 1 could split into [-2, 3], [-3, 4], [-4, 5], etc.

It also violates Rule 1 because one of the splits is larger than the original number.


Rules say you can’t ever produce a number greater than the original largest number, so the possibilities will always be finite. (fixed number of ways to make a list of distinct integers that sum to N such that all values are <=M).
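To make the finiteness concrete, here's a tiny hypothetical Python helper: even if negative parts were allowed, the cap alone bounds the number of splits.

```python
def capped_splits(n, mx):
    """All (a, b) with a + b == n and a != b, where both parts are capped at mx.

    Even allowing negatives, a <= mx and n - a <= mx force a into the
    finite range [n - mx, mx], so there are at most 2*mx - n + 1 splits.
    """
    return [(a, n - a) for a in range(n - mx, mx + 1) if a != n - a]
```

For example, splitting 1 with a cap of 5 admits only the ten pairs from (-4, 5) up to (5, -4).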


You're right in the first sentence, but your second sentence doesn't hold if we're talking about splitting the largest original number itself that way.


They also say that a given integer should be split into two smaller integers.


Thought the same, you beat me to it! Really like the tree-like view, might steal that idea later :-)

https://bewelge.github.io/aNumberGame/ (does not format well on mobile)

Code: https://github.com/Bewelge/aNumberGame


I like that all possible moves are shown at once. Would it be possible to have the sequence be other than [7, 5, 3]?


I added a URL parameter so you can load it up with any sequence.

Example for sequence 5,2,7

https://bewelge.github.io/aNumberGame?seq=5,2,7


Thanks!


Fun implementation! There's currently a bug that allows the same number to appear twice after a combination step (I was able to perform 2,3,7,5->5,7,5)


Ran into this as well: I did [8, 1, 4] -> [5, 3, 1, 4] -> [5, 3, 5].


Should be fixed, thanks for the report.


Jumping in with my janky version: http://poyo.co/list-of-integers/


I really like the visual aspect of your implementation


Cool!

Though I'd say the rule about repeated numbers is a bummer. I can see things might get easier (and it might be a more tractable problem to solve procedurally) if repeats were allowed.

Basically I don't think it's a "productive" complication


Without that rule, you could decompose everything into 1's, and then recompose from there. Not the most efficient solution, but a boring one that always exists.


Ok, so how about allowing only non-contiguous repeated numbers?


I think it would be more fun if you only displayed problems that are solvable.


Yeah, I would but finding such puzzles is kind of hard. Might look into it this weekend


I'm sure we could write a brute force tester, and then you would only have to include a list in your webapp. Or harvest people's results


This is neat, but your implementation does seem to allow duplicates sometimes


Should be fixed now, sorry for the trouble.


Would be nice if you allowed users to define their own puzzle. I had to set a break point at the right ___location and call startNew([..my array]) to do so.


Added, you can use ?numbers=1~2~3 to specify your puzzle.

Like this:

https://blaise.gg/number_game/index.html?numbers=1%7E4%7E6%7...


Gave me [2,1], which is not possible.


It does seem like [n,n+1] is always unsolvable!


[9,8] -> [6,3,8] -> [6,2,1,8] -> [6,2,9] -> [8,9]


Indeed,

   [n, n-1] 
   [n-3, 3, n-1]
   [n-3, 2, 1, n-1]
   [n-3, 2, n]
   [n-1, n]
Works for all n greater than... 6?


Optimizing self-response! That fails for six because n-3 and 3 co-occur. With a slight modification we can make it work for all n>5

  [n, n-1]
  [2, n-2, n-1]
  [2, n-3, 1, n-1]
  [2, n-3, n]
  [n-1, n]
edit: and it's not hard to show that n<6 are impossible, so the solution above is optimal in that sense.
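A quick Python sanity check of this schedule (validator written from the thread's rules; helper names are mine):

```python
def legal_step(a, b, mx):
    """True if state b follows from state a by one legal split or adjacent combine."""
    if len(b) == len(a) + 1:                     # split a[i] -> b[i], b[i+1]
        for i in range(len(a)):
            if (a[:i] == b[:i] and a[i + 1:] == b[i + 2:]
                    and b[i] + b[i + 1] == a[i]
                    and 0 < b[i] < a[i] and 0 < b[i + 1] < a[i]):
                return len(set(b)) == len(b) and max(b) <= mx
    if len(b) == len(a) - 1:                     # combine a[i] + a[i+1] -> b[i]
        for i in range(len(b)):
            if (a[:i] == b[:i] and a[i + 2:] == b[i + 1:]
                    and a[i] + a[i + 1] == b[i]):
                return len(set(b)) == len(b) and max(b) <= mx
    return False

def schedule(n):
    """The four-move schedule from the comment above, as a list of states."""
    return [(n, n - 1),
            (2, n - 2, n - 1),
            (2, n - 3, 1, n - 1),
            (2, n - 3, n),
            (n - 1, n)]

def schedule_ok(n):
    states = schedule(n)
    return all(legal_step(a, b, n) for a, b in zip(states, states[1:]))
```

Checking a range of n confirms the claim: every n > 5 passes, and n = 5 fails (the 2 and n-3 collide).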


Yeah, the [n,n+1] is only a problem when there isn't enough wiggle room between the numbers. ex: [2,3] and [3,4]

[3,4]

--> [1,2,4] --> 4 can't split because 2/2 and 3/1 are invalid

--> 4 can't initially split because 2/2 and 3/1 are both invalid

[5,6] --> [5,4,2] --> [5,1,3,2] --> [6,3,2] --> [6,5]

[7,8] --> [7,3,5] --> [7,1,2,5] --> [8,2,5] --> [8,7]

When you add more numbers, you need more wiggle room. So, [4,5,6] is problematic and probably [5,6,7].

[4,5,6]

--> [3,1,5,6] --> 6 can't break into 3, nor 5/1. It can do 4/2, but then 5 can't split. 5 also can't split.

--> [4,2,3,6] --> [4,2,3,1,5] --> [6,4,5] --> 4 can't split into 2/2. It can split to 3/1 but then 5 can't split

--> 6 can't initially split at all because 3/3, 4/2, and 5/1 are invalid

Even if [6,7,8] has a solution, I'm sure [6,7,8,9] does not.


A solution for [6,7,8,9]:

6,7,8,9

6,3,4,8,9

6,3,4,8,2,7

9,4,8,2,7

9,4,8,2,1,6

9,4,3,5,2,1,6

9,7,5,2,1,6

9,7,5,3,6

9,7,8,6

9,5,2,8,6

9,5,2,1,7,6

9,5,3,7,6

9,8,7,6
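For anyone who wants to double-check: a small Python validator sketch (move generation written from the thread's rules, adjacent combines only; names are mine) confirms all twelve steps are legal and the final state is the reversal.

```python
def legal_moves(state, mx):
    """All states reachable in one legal move from `state`."""
    out = []
    for i in range(len(state) - 1):              # combine an adjacent pair
        s = state[i] + state[i + 1]
        cand = state[:i] + (s,) + state[i + 2:]
        if s <= mx and len(set(cand)) == len(cand):
            out.append(cand)
    for i in range(len(state)):                  # split into two smaller positive parts
        for a in range(1, state[i]):
            cand = state[:i] + (a, state[i] - a) + state[i + 1:]
            if len(set(cand)) == len(cand):
                out.append(cand)
    return out

# The thirteen states posted above for [6,7,8,9].
SOLUTION = [(6,7,8,9), (6,3,4,8,9), (6,3,4,8,2,7), (9,4,8,2,7),
            (9,4,8,2,1,6), (9,4,3,5,2,1,6), (9,7,5,2,1,6), (9,7,5,3,6),
            (9,7,8,6), (9,5,2,8,6), (9,5,2,1,7,6), (9,5,3,7,6), (9,8,7,6)]

def check(states):
    """Each consecutive pair must be one legal move, ending at the reversed start."""
    mx = max(states[0])
    return (states[-1] == tuple(reversed(states[0]))
            and all(b in legal_moves(a, mx) for a, b in zip(states, states[1:])))
```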


Wow, well done! I found [6,7,8] as well:

6,7,8

6,7,5,3

6,7,1,4,3

6,8,4,3

6,8,7

6,3,5,7

6,2,1,5,7

8,1,5,7

8,6,7

8,4,2,7

8,4,2,1,6

8,4,3,6

8,7,6


Nice. It does let me make the same integer twice though.


Should be fixed now, sorry for the trouble.


This can be transformed into a problem with pegs and moving blocks.

Like tower of hanoi[1], but you can add or remove empty pegs, blocks are the same size and can be stacked in any order, you can move as many blocks as you want and you cannot have towers with the same amount of blocks.

Unless you want to deal with negative integers, then it would get more tricky.

[1]: https://en.m.wikipedia.org/wiki/Tower_of_Hanoi


The charm of towers of Hanoi is that you can make a physical version where the legality of moves is easily verified because of the differently sized blocks.

I think the “you cannot have towers with the same amount of blocks” rule will make this a lot less charming, certainly if any of the numbers are larger than, say, 10.

There also is the issue of adding pegs, but that’s solvable by fixing the number of stacks (the triangular number for n is about ½n², so it certainly need not be larger than twice the square root of the number of blocks).

You can even make it smaller to get a variation on this game where the length of the list is limited.


You can make a physical version by having one fixed height pole for each integer up to the max, and then an equal number of post holes. Eg. If you wanted to support a game up to the integer 10 you would need 10 posts with lengths from 1 to 10 units, and 10 post holes.

The biggest issue with this game is that there's no guarantee that any arbitrary starting state has a valid solution. A much-needed improvement would be a simple rule for guaranteed reversible starting formations (if one even exists).


> If you wanted to support a game up to the integer 10 you would need 10 posts with lengths from 1 to 10 units, and 10 post holes.

You’d also need a way to represent the order of the posts (the game is about a list of integers, not a set, so you can’t move from [4,5,6] to [10,5], for example)

I think a halfway decent visualization is one where you have n different cylinders of lengths 1 through n and a gutter of length of the sum of the numbers you start with (in the [4,5,6] example that would be 15). Next, place the cylinders for the starting position in the gutter in the order given, so that it completely fills it. Keep the others elsewhere.

Allowed moves then are:

- replace two cylinders that are side by side in the gutter by one of the sum of their lengths that you have available.

- replace a cylinder in the gutter by two available cylinders that together have the same length.

I think this way to visualize the game also might lead to a physical construction, but I don't see one yet.


I really like this. I could see constructing a physical artifact for a specific case of this game.

According to another comment, the hardest puzzle with a maximum of 6 is [1,6,3], with a minimum of 14 moves.

This would mean you have 6 total rods, and two gutters. The puzzle gutter with a length of 10, and the storage gutter with a length of 11.

If you want more visual symmetry a high number of 7 allows you potentially 2 gutters of length 14.


Thinking it through a bit more, I would consider a variation on this game, where you don’t have a list but a ring buffer.

Then, you can replace the linear gutter by a circular one and replace the cylinders by parts of a torus. That would make for a cooler look of the game (on the other hand: how would you easily see you’ve completed the puzzle?)

Unfortunately, you won’t be able to put the ‘spare’ parts in a concentric circle as that would have a different radius.


Post holes are useless, because you can split left and then split back to the right. So all you need are the n tokens.

Hanoi isn't a good model for this, because the posts are not fixed in place and the units are all identical.

It's a bad physical game because the player has to do all the work to enforce the rules. The environment doesn't provide any useful assistance.


Was just thinking this. There are some differences, like the integers don't have to be ordered biggest to smallest at any time, but it very much feels like Towers of Hanoi would be a good starting place to solve this.


No, not at all. In a Towers of Hanoi model, this is a completely trivial game.

You'd reverse [7, 5, 3] by picking four discs off the first peg and putting them back down on the third peg.

The rules here only allow you to move stuff a single peg away from its origin.


In Towers of Hanoi, you're only allowed to pick up one disc at a time, so it's not completely trivial. It's simply operation intensive... kinda like Reverse the List of Integers

[1] https://en.wikipedia.org/wiki/Tower_of_Hanoi


They're not suggesting picking up multiple discs at a time:

        [3,2,1][][]
    =>  [3,2][][1]
    =>  [3][][1,2]
    =>  [][][1,2,3]
In effect, they're just observing that the algorithm "while x := A.pop(): B.push(x)" reverses A onto B.


Of course I'm suggesting picking up multiple discs at a time. That's the whole idea.

Compare thih9's comment:

> Like tower of hanoi[1], but you can add or remove empty pegs, blocks are the same size and can be stacked in any order, you can move as many blocks as you want and you cannot have towers with the same amount of blocks.

All my comment did was to point out that this description doesn't work, because the rules here are not similar to the rules of Towers of Hanoi.

Under the rules of the original comment, here's how you reverse the list [7, 5, 3]:

    +++++++    +++++    +++
    +++(----)  +++++    +++(++++)
It's a simple, one-step process, and this will be true for any list of three integers. A list of four or five will take two steps, a list of six or seven will take three, etc. In all cases, reversing the list is completely trivial, because thih9 introduced a rule, allowing you to simply swap two numbers, that isn't present in the original ruleset.


Yeah... Make a browser game out of it?


It's not possible in general, e.g. [3, 2, 1] can't be reversed.


[3, 2, 1] has no valid modifications by the rules given.

I wonder how little wiggle-room there needs to be for a solution to be possible? Does [4, 2, 1] have a solution? You can get to [4, -1, 3, 1] in one step, but I'm not sure where you can go from there.


To be pedantic, this isn't the operation he gave - "split an integer into 2 smaller integers" is the operation, but 2 into [3, -1] is splitting it into a larger and smaller integer.

Read literally, the numbers are constrained to not jump over the 0. It might depend on interpretation (absolute value or distance from positive infinity?); but negatives are tricky to handle with these rules and I think they might be illegal.


> To be pedantic,

I don't think that's pedantry. That's just a straightforward reading of the rules, and I missed the word "smaller", somehow. *shrug*


Are you being pedantic about pedantry?


> You can get to with [4, -1, 3, 1] with one step, but I'm not sure where you can go from there.

Well, it's easy to show that you can't go anywhere from there:

1. You can't combine the 1 with the 3, because you'd have two 4s.

2. You can't combine the -1 with the 4, because you'd have two 3s.

3. You can't (usefully) combine the -1 with the 3, because that just means you never took your previous move of splitting that pair out of the 2.

Those are all the possible applications of the combining rule. You can't apply the splitting rule either:

1. Splitting the four into positive integers will cause a duplicate.

2. Splitting a negative number out of the four will violate the rule against numbers larger than 4.

3. Splitting the 3 has the same problems. You can't take out a negative number without leaving a residual of 4 or more, and you can't take out a positive number without leaving a residual of 1.

4. Taking a -1, -2, or -3 out of the 1 will leave a duplicate, and taking a positive value out of the 1 is impossible.

5. You can split the -1 into [-3, 2]. But at that point you need to get rid of the 1 in final position so that you can combine the -3 into the 4, and there isn't a way to do that. Once you've gotten here, your list already contains every legal positive value.


Huh. There's bound to be some weird constraint there.

E: Looking at the sibling comments, I'm disregarding negative numbers here. Probably that breaks the line of thought here.

For example, a sequence made up of all numbers from 1 to n is unsolvable. A sequence of [1, 2, ..., n-1, n+1] for n >= 3 also seems to be unsolvable, since no number up to n-1 can split into a smaller number (they are all in the list) and n+1 also cannot split, because 1 is in there.

So it seems that there would be an unsolvable class of lists along the lines of [1, 2, ..., k, ..., n-1, n+k] by that argument, because the n+k cannot split ever, because all numbers up to k are in that list.

And I guess for any unsolvable list with a max value of M, you can add any number below M into the unsolvable list, because that just removes moves.

Ah. Nerdsniped :)


Technically there are valid modifications, you can modify [3,2,1] into [3,2,0,1]; but yes, this doesn’t change much.

Edit: disregard that, indeed the integers need to be smaller.


Technically 1 is not smaller than 1 ;)


The article specified a list of positive integers, and you can’t go smaller than the smallest initial value.


The article says you start with a list of positive integers.

It also clearly shows right in the first example of the first rule that you can go smaller than the smallest initial value: [7, 5, 3] -> [6, 1, 5, 3] is the first legal move it shows, and you'll notice that 1 is smaller than 7, 5, or 3.


Where does it say that? It only says you can't go larger than the largest value.


The rule states " 1. Split an integer into two smaller integers."

If you split 5 into 6 and -1... 6 ain't smaller than 5.


You said "you can’t go smaller than the smallest initial value". If I have [5, 3] I can do [4, 1, 3], which you assert I can't.


The constraints stated:

  * You can never make an integer greater than the largest integer in the original list.
  * You can never make a move that results in the same integer appearing in the list more than once.
You can make a negative number and you can go lower than the smallest but not greater than the largest, so you're wrong on both counts.


However, as a result of #1, you in fact cannot make negative integers. It’s not possible to split a nonnegative integer into two lesser-or-equal integers such that either of them is negative.


> largest integer in the original list

So if the original list was [10, 1] and negative numbers were allowed, then [10, 2, -1] would be allowed.


I believe Sharlin is correct here. Rule #1 states `Split an integer into two smaller integers.` 1 -> [2, -1] breaks this rule due to 2 not being smaller than 1.


Oops, indeed!


It does feel implied albeit forgotten given the constrained nature of the task. Opening up the door to negative numbers feels like cheating.


the only moves available are:

1. Split an integer into two smaller integers.

2. Combine (add) two integers into a larger one.

Both suggest that negative numbers are not possible.


Any combination where all consecutive numbers are present is unsolvable. That is, [1..n] in any order.

That's because it is impossible to split any number (it will create a duplicate), and it is also impossible to merge (it will either create a duplicate or a number >n).

[2..n] (in any order) is also unsolvable for the same reason, and certainly many others, e.g. [1,2,4]. Determining whether a problem is solvable is interesting in itself; a good solver should not only find the shortest solution(s) if there is one, but also determine if it is solvable at all.
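A quick Python enumeration (rules per the author's clarification; helper names are mine) confirms that every ordering of 1..n is completely frozen:

```python
from itertools import permutations

def legal_moves(state, mx):
    """All states reachable in one legal move from `state`."""
    out = []
    for i in range(len(state) - 1):              # adjacent combine
        s = state[i] + state[i + 1]
        cand = state[:i] + (s,) + state[i + 2:]
        if s <= mx and len(set(cand)) == len(cand):
            out.append(cand)
    for i in range(len(state)):                  # split into two smaller positive parts
        for a in range(1, state[i]):
            cand = state[:i] + (a, state[i] - a) + state[i + 1:]
            if len(set(cand)) == len(cand):
                out.append(cand)
    return out

def frozen(nums):
    """True if the position admits no legal move at all."""
    return not legal_moves(tuple(nums), max(nums))
```

For any permutation of 1..n, every combine either exceeds n or lands on a value already present, and every split's parts are already present, so `frozen` is always True.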


It seems that if the numbers are put in ascending order as min-max, then there would have to be at least two gaps in the sequence to make meaningful progress?


Technically the rules never state that when splitting an integer you have to put the new numbers in place of the old one, so...

[3, 2, 1]

[0, 2, 1, 3]

[2, 1, 3]

[0, 1, 2, 3]

[1, 2, 3]

I.e.: Split 3 into 3+0. Combine 0+2. Split 1 into 1+0. Combine 0+3 because rules don’t prohibit that either.


You're supposed to split an integer into two *smaller* integers.

3 is not smaller than 3, so you can't split 3 into 3+0.


Oh, good spot. Yeah, no luck then.


Yeah, the problem is slightly underspecified, though it's pretty clear that you're not allowed to do swaps, as that would make the problem trivial.


Not trivial - I do not see how to solve [3, 2, 1] without this kind of swapping. Perhaps we need to clarify the rules for a starting point first.


I mean that if you allow swapping then every instance of the problem becomes trivial because swapping is all that reversing is. Doesn’t mean the problem as intended is always solvable.


Learn Forth. Consecutive swaps will reverse a stack.


Neat little problem! You could do SAT-solvers and stuff, but it's also obviously a graph search problem: each state is a vertex, each edge is an operation that leads to another vertex. So, obviously: Dijkstra works (or really just BFS, since the edges are unweighted), but not the fastest in the world.

The fun one to play with would be A-star, but you'd have to find a suitable heuristic. One obvious candidate: if the target list is X long and the current list is Y long, then you need at least abs(Y - X) steps to get there (since each step either adds or removes a number). That's admissible and would probably speed up the search quite a bit compared to regular BFS, but you could probably do a lot better. I'm thinking a heuristic based on the number of inversions or something.

I would suspect if you found a really good heuristic for A-star, that's about as good as you're going to get. Though maybe there's something more clever I'm not spotting.
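A sketch of that A* in Python, using the abs(Y - X) length heuristic (helper names are mine; move generation follows the thread's reading of the rules):

```python
import heapq

def shortest(nums):
    """A* search for the minimum number of moves, or None if unsolvable."""
    start, target = tuple(nums), tuple(reversed(nums))
    mx = max(start)

    def h(state):
        # Admissible (and consistent): every move changes the length by exactly 1.
        return abs(len(state) - len(target))

    def moves(state):
        for i in range(len(state) - 1):          # combine an adjacent pair
            s = state[i] + state[i + 1]
            cand = state[:i] + (s,) + state[i + 2:]
            if s <= mx and len(set(cand)) == len(cand):
                yield cand
        for i in range(len(state)):              # split into two smaller positive parts
            for a in range(1, state[i]):
                cand = state[:i] + (a, state[i] - a) + state[i + 1:]
                if len(set(cand)) == len(cand):
                    yield cand

    dist = {start: 0}
    frontier = [(h(start), start)]
    while frontier:
        f, cur = heapq.heappop(frontier)
        if cur == target:
            return dist[cur]
        if f > dist[cur] + h(cur):               # stale heap entry, skip
            continue
        for nxt in moves(cur):
            g = dist[cur] + 1
            if g < dist.get(nxt, float("inf")):
                dist[nxt] = g
                heapq.heappush(frontier, (g + h(nxt), nxt))
    return None                                  # state space exhausted
```

For [9, 8] it returns 4, matching the four-move solution posted above, and it returns None for frozen positions like [3, 2, 1].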


You could just run BFS from both the starting and ending nodes until you find a node reachable from both.


https://ideone.com/MGFwdo

I am running BFS from only the starting node because the graph is symmetrical.


Heuristic: Longest common subsequence of the current state vs the target?


Start with a list of positive integers, (e.g. [7, 5, 3]) and your goal is to make the same list, in reverse ([3, 5, 7]).

Operations:

1) Split an integer into two smaller integers. (e.g. [7, 5, 3] → [6, 1, 5, 3])

2) Combine (add) two integers into a larger one. (e.g. the reverse of the previous example)

Restrictions:

1) You can never make an integer greater than the largest integer in the original list.

2) You can never make a move that results in the same integer appearing in the list more than once.


Oh no, 2 dimensional dynamic programming? I have seen similar questions in competitive programming.


Can you only add adjacent integers?


The rules are too restrictive. Even a basic [1, 2] is not solvable.

For this game to be popular, the difficulty should grow (generally) with the size of the list and the relative differences between numbers.


Here's a gist in Prolog that I believe solves this puzzle: https://gist.github.com/deosjr/7314659509333ee2ad67dbf276e8d...


Here's my Prolog code which solves (small inputs) of the puzzle, taking the core grammar of moves//3 from the Water Jug solution in Markus Triska's Power of Prolog video "Sparrows on Eagles: Delegate your work to Prolog!"[1] (I couldn't have written that style myself from scratch):

    :- use_module(library(dif)). % Sound inequality
    :- table split/2, merge/3, moves//3.
    
    % Replace one number H from a list with
    % two A and B which sum to that number.
    split([], []).
    split([H|T], [A,B|T]) :-
        between(1,H, A),
        between(1,H, B),
        dif(A,B),
        H is A+B.
    split([H|T], [H|T2]) :-
        split(T, T2).
    
    % Merge two adjacent numbers A and B from a list by
    % summing into H, unless that would exceed list Max.
    merge([], _, []).
    merge([H|T], Max, [H|T2]) :-
        merge(T, Max, T2).
    merge([A,B|T], Max, [H|T]) :-
        H is A + B,
        H =< Max.
    
    % Describes moves from starting state S0
    % to Target state, by splitting, merging,
    % and checking for duplicates using sort.
    moves(S0,   _, Target) --> { member(Target, S0) }.
    moves(S0, Max, Target) --> [Ns],
        { select(Ns0, S0, S),
          (split(Ns0, Ns)
          ; merge(Ns0, Max, Ns)),
          sort(Ns, NsSet), same_length(Ns, NsSet) },
        moves([Ns|S], Max, Target).
    
    
    solve(S0, [S0|Moves]) :-
        max_list(S0, Max),
        reverse(S0, Target),
        phrase(moves([S0], Max, Target), Moves).
        
e.g. with the query:

    ?- between(1, 20, MoveCount),
       length(Moves, MoveCount),
       solve([7,5,3], Moves).
Answer:

    Moves = [[7, 5, 3], [7, 1, 4, 3], [2, 5, 1, 4, 3], [2, 6, 4, 3], [2, 1, 5, 4, 3], [2, 1, 5, 7], [3, 5, 7]],
    MoveCount = 7
It will run in https://swish.swi-prolog.org/ clicking the 'Empty' then 'Program' and pasting the code in, querying (without the ?- and trailing .) in the lower right.

Taking the technique from the video: the query uses length/2 to lengthen the list of answer moves one at a time before the answer is sought, so this code does iterative deepening and will find the shortest sequence of moves first.

[1] https://www.youtube.com/watch?v=vdabv9EkYrY


Oh boy, give it three months and we're gonna start being asked this in coding interviews.


The rules for operation 2 didn't mention that the two integers need to be next to each other (likewise for operation 1). Is it a requirement?


I think it's indirectly indicated by using the word "split".


When I split an atom, the pieces go pretty far apart.


And when you split plasticine it doesn't. Do numbers have tightly bound huge amounts of energy inside them?


Yes, this comes up in a reply further down the page. (Everyone, please upvote the parent comment to save other readers time. I was puzzled too.)


I made a completely graphical representation of this game in P5.js - it turned out quite intuitive I think.

The integers are represented by the number of edges in each straight section.

https://editor.p5js.org/semi-extrinsic/sketches/IKOjwE7Vb


I like it, but it strikes me that there are actually not that many valid moves you can make in the example problem. Maybe this isn't true in other examples, but I found most of my time thinking about this was simply realizing that most of my desired moves were off limits. Perhaps that's the point, it just struck me.


Here are the maximum number of steps required for n up to 10, and the likely maxima for 11 and 12:

6: 14

7: 26

8: 74

9: 86

10: 126

11: 106 (?) (full state space not explored)

12: 130 (?) (full state space not explored)


Just wondering... How do you calculate the minimum number of steps fast enough?


To solve a particular position, I just use level-by-level breadth-first search until a level contains two values that are reverses of each other.

To explore the entire state space of possible initial positions, I use a number of tricks; I'll be writing that up pretty soon. I've explored through n=12 already, and expect to finish n=13 and n=14 pretty soon. I'm not sure if I'll be able to do n=15.

And by the way, I've found a position for n=14 that requires 206 moves to solve.


Finished with a full state-space search of 11; worst case is indeed 106 (vs 126 for 10).


Finished with a full state-space search of 12; worst case is indeed 130. Found a game for n=14 that takes 172 moves: 6 11 8 2 7 10 9 12 4 1 14 3.


Coming soon to a coding interview near you!


Probably solvable with a hashmap then. Something like making the hash bucket-sum dependent and splitting/resizing until it's done.


My thoughts exactly.

It would 100% fit in with the interviews I've had in the past year.


Geez. I mean, I could do this, but probably not in a 45 minute interview without knowing how to solve the problem ahead of time.


leetcode hard++


I feel like those last two rules would make the puzzle impossible to solve for most sets of inputs. E.g. in [3, 2, 1] all possible splits are illegal.


Squinting at this... feels like there is some relationship to asking whether you can tile a triangle into integer-length non-isosceles triangles.

It feels like it is typically going to be impossible. Most puzzles seem to have only a few legal moves at a time, all of which will either loop back to where they are or lead to victory. Generally speaking though, odd numbers are useful.


My first reaction to this was, "neat coding question for interviews". I started to try to solve it for a few seconds, then thought of when I've asked coding questions. Invariably I start by saying "The goal is to see how you approach and work through the problem, and less so whether you come up with the optimal solution". Then I thought: what if the interviewer and interviewee tackled the problem together, both coming in cold? And that was stated as such, up front? It would be much closer to the real world of tackling novel problems as a team. Taking the pressure of "I need to come up with the answer" off the interviewee should result in real focus on "working through the problem", and decrease interviewee stress; if the interviewer doesn't know the answer yet, how can they expect the interviewee to come up with one?

There are a couple of obvious downsides like: What if it turns out the problem is too trivial? Move on to the next one. What if the interviewee has already encountered the problem before? No different than if the interviewer posed an already-solved problem.

It would be incumbent upon the interviewer to not reveal a solution if they come up with it first of course. And the evaluation of "how you approach and work through the problem" is less definite than whether an answer (brute-force or optimal) was achieved; but if approach is indeed the important thing, that needs to be evaluated regardless. I'm sure there are other downsides I'm not seeing off the top of my head.

I can't be the first person to come up with this strategy, nor try it in real interviews. Has anybody attempted this? Was it successful or not, and why?

I can't tell if this strategy that just popped into my head is of value :-).


what if we just interviewed people.


Why is this not an interview? I had very similar problems even on an interview to get in a math-oriented high school.


It's a problem because it's not deterministic. You need some kind of determinism when comparing 20 people a month and needing to justify your position, especially if you're the only one saying no. The biggest question is whether you, the interviewer, are having a good day or a bad day. In your scenario, your performance is necessarily coupled with the candidate's, which is exactly what you don't want, given this goal of determinism.

I think a modification of this works well, and it's what I do:

1. Come up with a problem that's loosely contextually related to the work, and open ended. Leaving it open ended will test their communication and "problem navigation". The contextual relation allows you to, at the end of the interview, explain how it's contextual related to the work. This helps them understand their own performance, rather than leaving them feeling tricked with random puzzles.

2. If the problem is too out of context for the candidate's experience (which is often fine), provide a path that teaches them the background they would need. Make this "training" path available and known to everyone. Make sure it doesn't take too much time. This helps the determinism, since it doesn't rely on the luck of some specific knowledge. Also, someone who communicates that they want to go this path is the better candidate, regardless of experience, since information seeking is the best attribute of a colleague.

3. Explain that it's purposefully meant to be open ended, so they should feel free to ask questions, and it's ok if they get stuck. This allows you to collaborate in a way that you can judge against others. Also, it reduces the adrenaline, which is important, because you can easily lose good candidates who "freeze up". Related: maintain positivity. If you seem annoyed, their performance can irrationally plummet. Personally, I'll do a second round if I see someone freeze up, especially if they haven't done many interviews, because I don't think interviewing for the skill of being interviewed is interesting, since I'm interviewing them to work with them.

4. With the above, the metric for their performance is all about communication, problem solving, and skill. The amount of explanation they needed to understand the fundamental problem, and how well they could fit it in their head, the amount of help they needed to implement it, the amount of mistakes they made, and how they reached out for help can be used to justify your yes or no, in a predictable way, that can be compared against others.

Across about 30 people, my observation of their actual performance, compared to my prediction from the interview performance, has been pretty darn close, with only a few outliers. The outliers were those that were either too familiar, or not at all familiar, with the problem space. The too familiar people appear to have better performance during the interview, since they're working from rote. The out of context people have the burden of not having the relationally-compressed version of the knowledge in their head, so run out of working memory. It's very easy, and somewhat fascinating, to see the moment when someone runs out. But, this is why the problem should be loosely related to the work: it's a valid criticism that "they're going to take significant time to ramp!".

I apologize for the wall, but this is something I'm interested in, and I would love to see how others handle it. I think the Jim Keller way is the best, but it requires more than 45 minutes, and a certain level of seniority on the candidate's side.


Seems like there are going to be non-search algorithmic solutions that solve these non-optimally, and then god's solution, where each split or combine works towards putting more than one number in place.


I hate little puzzles like this when presented in isolation, my brain just bounces off them.

If the EXACT same puzzle was presented as part of a real problem with context and reasons, my brain would be all over it and I'd work out a solution.

As another comment says, it will inevitably show up as a coding interview, one that I would likely fail!


I agree, but I enjoyed doing Google's semi-secret random-invite-only "Foobar" challenge, which gives you a series of coding problems with a silly story attached.

I think even that very minimal structure helps mentally elevate it from a boring math problem to a "real" scenario.


The broader context could be this: look at this puzzle as a way to reinvent sorting as a delegation of operations (split + combine) instead of the traditional swapping of values. As CPU arithmetic is faster than memory operations, some real treasure may be hidden in this new approach.


That is clearly nonsense, as the results of performing the arithmetic still need to be written back to memory, so it's just a swap with extra operations. See also the xor-swap.


Yeah, I agree

I tried doing leetcode for a while but basically just get bored of it

My brain is wired to solve problems and puzzles, but not in isolation. Removed of context I just can't convince myself there's any value in it and I bounce off just as you describe


Yep, the best solution to a puzzle is to just not do it as far as my brain is concerned!


I came here to post that I think this game is an evil ddos attack.


Assuming negative numbers are not allowed, how about:

[3, 2]

Creating a zero is useless, you can’t split 3 or 2, and you can’t make a 5.

[2, 1] is similarly stuck.
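A quick way to check this mechanically, as a sketch: the `legal_moves` helper below is my own, and it assumes the common reading of the rules (combine adjacent pairs, split a value into two positive parts, values stay distinct, and nothing may exceed the largest value in the starting list).

```python
def legal_moves(state, limit):
    """Enumerate legal successor states, assuming the rules are:
    combine an adjacent pair, or split one value into two positive
    parts; values must stay distinct and must not exceed `limit`
    (the largest value in the starting list)."""
    n, out = len(state), []
    for i in range(n - 1):                       # combine adjacent pair
        s = state[i] + state[i + 1]
        if s <= limit:
            out.append(state[:i] + (s,) + state[i + 2:])
    for i in range(n):                           # split one value in two
        for a in range(1, state[i]):
            out.append(state[:i] + (a, state[i] - a) + state[i + 1:])
    return [s for s in out if len(set(s)) == len(s)]

print(legal_moves((3, 2), limit=3))  # [] -- genuinely stuck
print(legal_moves((2, 1), limit=2))  # [] -- likewise
```

Both stuck positions really do have zero legal moves: every combine overshoots the limit and every split creates a duplicate.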


(not my post)


If you hadn't mentioned it I would have thought this was a self-post…


So it's a self post, but not a self-post. Sorry, I'll see my-self out.


So like, infinity, infinity-1, infinity-2, etc.?


the best I can find for [7, 5, 3] is:

7 5 3

7 1 4 3

2 5 1 4 3

2 5 1 7

2 6 7

2 1 5 7

3 5 7


What I found:

753 7512 762 7152 34152 3417 357

That's the same as your solution but in reverse.

It's not possible in 4 steps or fewer: if you ignore the no-duplicates rule, there is only one possibility, and it has duplicates. It's not possible in 5 steps either, because you would not end up with an odd-length list again; solutions always have an even number of steps. So six must be the shortest.
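For what it's worth, a breadth-first search over states agrees. This is a sketch with my own `moves`/`shortest` helpers, assuming combines act on adjacent pairs, values stay distinct, and nothing may exceed the original maximum; every move preserves the total sum, so the state space is finite and BFS terminates.

```python
from collections import deque

def moves(state, limit):
    # Successors: combine an adjacent pair (sum capped at `limit`),
    # or split one value into two positive parts; the resulting
    # list must have no duplicates.
    n, out = len(state), []
    for i in range(n - 1):
        s = state[i] + state[i + 1]
        if s <= limit:
            out.append(state[:i] + (s,) + state[i + 2:])
    for i in range(n):
        for a in range(1, state[i]):
            out.append(state[:i] + (a, state[i] - a) + state[i + 1:])
    return [s for s in out if len(set(s)) == len(s)]

def shortest(start):
    # BFS from the starting list to its reversal; the sum is
    # invariant under both moves, so the search space is finite.
    start, limit = tuple(start), max(start)
    goal = start[::-1]
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            return dist[s]
        for t in moves(s, limit):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return None  # reversal unreachable (e.g. [3, 2])

print(shortest([7, 5, 3]))  # 6
```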


I came to the exact same solution.

My feel for this type of puzzle is that there is a 'gravity' from the higher to lower value integers. So you want to help integers flow from the 7 to the 3. The state of the list then represents a sieve that dynamically restricts the flow paths from one step to the next. So at any time step your possible paths to flow the integers from 7 to 3 are quite restricted.

The first step of 753 -> 7143 may seem arbitrary at first, but you quickly realise that most other options result in long awkward paths where you move integers back and forth, or in dead ends.

For example, if you decide to split the 7 first, your valid moves are 753 -> 6153 or 753 -> 1653. The first still leaves you overloaded at the leftmost position, and you need another split because you can't combine 1+5 or 5+3 due to duplicates or exceeding 7. So you don't really feel closer. Same with 1653, which puts you in a position where all combinations exceed 7, and you need to further break down numbers, but you've already used up all your valid odd numbers, so you have to break 6 into 2 and 4 -> 12453. This is a dead end.

Fun morning coffee puzzle.


The following should be all the smallest solutions for [7, 5, 3]:

753 7512 34512 3462 34152 3417 357

753 7512 762 7152 34152 3417 357

753 7512 762 3462 34152 3417 357

753 7143 25143 2643 21543 2157 357

753 7143 25143 2643 267 2157 357

753 7143 25143 2517 267 2157 357

Discovered using SAT/SMT


Another 7-step version:

7 5 3

7 1 4 3

2 5 1 4 3

2 6 4 3

2 6 7

2 1 5 7

3 5 7


A dynamic programming approach would work. Basically, try everything.


What do you mean by that exactly, and why would it be 'dynamic programming'?


"Dynamic programming" is jargon from competitive programming. It sounds a lot more sophisticated than it is.

In reality, it is nothing more than memoization of function calls into an array.


> In reality, it is nothing more than memoization of function calls into an array.

No, it's the next step after memoization.

Recursion is the slowest approach to implementing many algorithms, as it duplicates a lot of computation: it solves the same subproblems many times.

Memoization is a cheap fix, it will not solve subproblems multiple times. It stores subproblems it has already solved, along with the solution. But it has an increasingly large state, keeping solutions of subproblems in memory which are actually no longer needed.

Dynamic programming is a transformation of algorithms that rely on memoization. It takes such an algorithm and makes the state as small as possible, so solutions of subproblems are discarded when they are no longer needed.

Like in memoization, dynamic programming will keep around previous subproblem solutions. But it will only keep the ones around it still needs in the future, and discard what is no longer needed. This often requires a change in representation of that state and in the order in which the subproblems are solved. But it will run much faster than recursion, on less memory than memoized approaches.
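To make the distinction concrete with a toy example (Fibonacci, not the puzzle): memoized recursion keeps every subproblem's answer around, while the DP version keeps only the two values still needed.

```python
from functools import cache

@cache
def fib_memo(n):
    # Memoization: every fib(k) for k <= n stays cached -- O(n) state.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_dp(n):
    # Dynamic programming with minimal state: only the last two
    # values are kept around, so memory is O(1).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_memo(30), fib_dp(30))  # 832040 832040
```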


Keeping the intermediate results of computations is just called programming. They go into a variable or a data structure like a vector or a hash map.


That tends to blow up your memory. And even if you have huge amounts of memory available, using a big cache will slow down your program enormously.


You: Dynamic programming means saving the results of computations

Me: That's just normal programming; there is no reason to use a term that someone made up in the 50s to be intentionally nonsensical.

You: That will blow up your memory and slow down your program!

What are you talking about here? All I said was that what you described was normal programming. I didn't come up with anything different, we're still talking about whatever you wrote. Somehow now storing data, something that happens in every program ever made, 'blows up memory'.


But there is a difference between memoization and dynamic programming. Memoization or otherwise indiscriminately storing the results of computations will blow up your memory use on many problems, where a dynamic programming approach would not.

Dynamic programming does not mean saving the results of computations, it is about saving the _minimal_ number of results, without having to recompute already solved subproblems later on.

I agree that the name chosen is nonsensical, but the concept is very much a real thing. And a quite important one at that.


Your previous comment: dynamic programming will keep around previous subproblem solutions

Your comment here: Dynamic programming does not mean saving the results of computations

> indiscriminately storing the results of computations

This is not something that happens. No one is out there storing a bunch of stuff they don't need on purpose, running out of memory, then calling it a day. That's like saying "some people walk straight into walls, but in dynamic walking we go around walls!".

> it is about saving the _minimal_ number of results, without having to recompute already solved subproblems later on.

Again, this is sometimes just what programming is. You save results instead of recomputing stuff.


> No one is out there storing a bunch of stuff they don't need on purpose, running out of memory, then calling it a day.

There are a lot of people (such as my students) who would take a DP problem, solve it with recursion, throw some caching [0] in, and call it a day. They are often surprised by the 1000x speedups using a DP algorithm instead, no caching needed.

[0] https://docs.python.org/3/library/functools.html#functools.c...

> Again, this is just what programming is some times. You save results instead of recomputing stuff.

I'm sorry, but I still don't get the argument you are making. That DP is part of programming? Well, yes?


> That DP is part of programming?

Dynamic programming was a nonsense term made up in the 50s to be opaque and vague, so its inventor wouldn't be bothered. It doesn't mean anything, but people try to make up backwards rationalizations for it.

Computing a factorial by recursively recomputing all the previous factorials when you already have some of them done is an insane way to work. Giving the straightforward, sane alternative of saving some results its own name is like walking on broken glass and then calling putting something over your feet "shoe walking". There is no special fancy label for doing the simple, sane version of something.


Yep, it has a nonsensical name, that's the point I wanted to convey.

Actual "dynamic programming" would be self-modifying code, or something just as sophisticated, IMO. Or Prolog =)

What about "memoization optimized backtracking"?


I'd propose "Minimal memory optimal subproblem iteration"? Or "minimem iteration" for the friends?

I don't like mentioning memoization in DP's name, as I have had a lot of trouble explaining to students how you go beyond the memoization approach.


Memoization is just caching, so I'm not sure how that would make any difference here.


Different sets of moves can lead to the same state, so if you can cache that a given state will not lead to a solution you can stop exploring that branch when you encounter it again.


Is that 'dynamic programming' or is that just keeping a set of explored states?


Could a constraint solver just rip through this problem?


I encoded it in SMT (source code https://github.com/rdaly525/hnpuzzle). Fun little challenge!

Turns out there are 6 unique solutions for [7,5,3] in 6 steps


It was interesting to see GPT4 fail at it: https://chat.openai.com/share/02c12bbe-43cd-40da-b5df-33681d...

Not a bad benchmark problem. It didn't get very far, but maybe the next release will.


ChatGPT can't write programs or apply logic to solve problems whose solutions aren't already in its input.


It applied logical reasoning, just not quite enough of it. When it gets 10x better, which it will, it will be smarter than either of us.

For those who don't find that prospect fascinating, other sites beckon.


deosjr just posted a proposed Prolog solution in ~50 LOC

https://news.ycombinator.com/item?id=40014035


The number of variables is not constant in each step, so modeling this would be trickier than usual. I feel like there is probably a more efficient DP approach for this, though.


.... (reverse 'list)



