Lol, they got me too. Mine has an algorithm to find a solution quickly, and another to find the shortest solution possible. I also created a very rudimentary GUI so that the game could be played. I'll clean the GUI up later though.
Technically there are cases in which you can. The repeated values will act like separators that you cannot act on (because once you act on one, as you said, one cannot create the final list again), but then it becomes multiple instances of the puzzle, which seems pointless.
Interesting, it appears I've encountered an impossible combination of numbers: [2, 5, 1, 3]
The only possible move is to add 1 and 3. Can't ever split a 2 as that would result in [1, 1], can't split the 5 because that would give [4, 1] or [3, 2] (both duplicate values already in the list), can't split the 3 as that would be [2, 1] or [1, 2] (duplicates again), and can't add the 5 to anything as the result would be larger than 5.
Yeah the generation is very simple, 2-4 random numbers between 1 and 9 inclusive. Definitely gives impossible puzzles sometimes (because I don't know a general algorithm for generating possible ones only).
If you only have numbers up to 9, an exhaustive search shouldn't be too bad (~9! states if you memoize); you could implement a solver in JavaScript and see whether a solution can be found before you give it to the user.
Rules say you can’t ever produce a number greater than the original largest number, so the possibilities will always be finite (there's only a fixed number of ways to make a list of distinct integers that sum to N such that all values are <= M).
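A rough sketch of that in Python rather than JavaScript (same idea; `legal_moves` and `solvable` are just names I made up): since every reachable state is a list of distinct positive integers bounded by the original maximum, a plain BFS with a visited set always terminates and tells you whether the reversed list is reachable.

from collections import deque

def legal_moves(state, max_val):
    # one-move neighbours: split a value into two smaller positive values,
    # or combine an adjacent pair, never exceeding max_val and never
    # creating a duplicate
    out = []
    for i, v in enumerate(state):
        out += [state[:i] + (a, v - a) + state[i+1:] for a in range(1, v)]
    for i in range(len(state) - 1):
        if state[i] + state[i+1] <= max_val:
            out.append(state[:i] + (state[i] + state[i+1],) + state[i+2:])
    return [s for s in out if len(set(s)) == len(s)]

def solvable(start):
    # plain BFS; the 'seen' set is the memoization
    start = tuple(start)
    target, max_val = start[::-1], max(start)
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for nxt in legal_moves(state, max_val):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# solvable([7, 5, 3]) -> True; solvable([2, 5, 1, 3]) -> False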
Fun implementation! There's currently a bug that allows the same number to appear twice after a combination step (I was able to perform 2,3,7,5->5,7,5)
Though I'd say the rule about repeated numbers is a bummer. I can see things getting easier (and it might be a procedurally easier problem to solve) if this were allowed.
Basically I don't think it's a "productive" complication
Without that rule, you could decompose everything into 1's, and then recompose from there. Not the most efficient solution, but a boring one that always exists.
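For what it's worth, here's a small Python sketch of that boring strategy under the relaxed rules (`trivial_reverse_states` is a made-up name). It only ever splits values below the originals and recombines them back into original values, so the max rule still holds; only the no-duplicates rule is violated.

def trivial_reverse_states(xs):
    # phase 1: split every value down to 1s (this is where duplicates appear)
    states, cur = [list(xs)], list(xs)
    while any(v > 1 for v in cur):
        i = next(i for i, v in enumerate(cur) if v > 1)
        cur = cur[:i] + [1, cur[i] - 1] + cur[i+1:]
        states.append(list(cur))
    # phase 2: recombine the runs of 1s into the original values, in reverse
    # order; every intermediate value stays <= max(xs)
    pos = 0
    for v in reversed(xs):
        for _ in range(v - 1):
            cur = cur[:pos] + [cur[pos] + cur[pos + 1]] + cur[pos + 2:]
            states.append(list(cur))
        pos += 1
    return states  # last entry equals list(reversed(xs))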
Would be nice if you allowed users to define their own puzzle. I had to set a breakpoint at the right ___location and call startNew([..my array]) to do so.
This can be transformed into a problem with pegs and moving blocks.
Like tower of hanoi[1], but you can add or remove empty pegs, blocks are the same size and can be stacked in any order, you can move as many blocks as you want and you cannot have towers with the same amount of blocks.
Unless you want to deal with negative integers, then it would get more tricky.
The charm of towers of Hanoi is that you can make a physical version where the legality of moves is easily verified because of the differently sized blocks.
I think the “you cannot have towers with the same amount of blocks” rule will make this a lot less charming, certainly if any of the numbers are larger than, say, 10.
There also is the issue of adding pegs, but that’s solvable by fixing the number of stacks (the nth triangular number is about ½n², so the number of stacks certainly need not be larger than twice the square root of the number of blocks).
You can even make it smaller to get a variation on this game where the length of the list is limited.
You can make a physical version by having one fixed-height pole for each integer up to the max, and then an equal number of post holes. E.g. If you wanted to support a game up to the integer 10 you would need 10 posts with lengths from 1 to 10 units, and 10 post holes.
The biggest issue with this game is that there's no guarantee that any arbitrary starting state has a valid solution. A much-needed improvement would be a simple rule for guaranteed reversible starting formations (if one even exists).
> If you wanted to support a game up to the integer 10 you would need 10 posts with lengths from 1 to 10 units, and 10 post holes.
You’d also need a way to represent the order of the posts (the game is about a list of integers, not a set, so you can’t move from [4,5,6] to [10,5], for example)
I think a halfway decent visualization is one where you have n different cylinders of lengths 1 through n and a gutter whose length is the sum of the numbers you start with (in the [4,5,6] example that would be 15). Next, place the cylinders for the starting position in the gutter in the order given, so that they completely fill it. Keep the others elsewhere.
Allowed moves then are:
- replace two cylinders that are side by side in the gutter with a single available cylinder whose length is the sum of theirs.
- replace a cylinder in the gutter by two available cylinders that together have the same length.
I think this way to visualize the game also might lead to a physical construction, but I don’t see one yet.
Thinking it through a bit more, I would consider a variation on this game, where you don’t have a list but a ring buffer.
Then, you can replace the linear gutter by a circular one and replace the cylinders by parts of a torus. That would make for a cooler look of the game (on the other hand: how would you easily see you’ve completed the puzzle?)
Unfortunately, you won’t be able to put the ‘spare’ parts in a concentric circle as that would have a different radius.
Was just thinking this. There are some differences, like the integers don’t have to be ordered biggest to smallest at any time, but it very much feels like Towers of Hanoi would be a good starting place to solve this.
In Towers of Hanoi, you're only allowed to pick up one disc at a time, so it's not completely trivial. It's simply operation intensive... kinda like Reverse the List of Integers
Of course I'm suggesting picking up multiple discs at a time. That's the whole idea.
Compare thih9's comment:
> Like tower of hanoi[1], but you can add or remove empty pegs, blocks are the same size and can be stacked in any order, you can move as many blocks as you want and you cannot have towers with the same amount of blocks.
All my comment did was to point out that this description doesn't work, because the rules here are not similar to the rules of Towers of Hanoi.
Under the rules of the original comment, here's how you reverse the list [7, 5, 3]:
+++++++ +++++ +++
+++(----) +++++ +++(++++)
It's a simple, one-step process, and this will be true for any list of three integers. A list of four or five will take two steps, a list of six or seven will take three, etc. In all cases, reversing the list is completely trivial, because thih9 introduced a rule, allowing you to simply swap two numbers, that isn't present in the original ruleset.
[3, 2, 1] has no valid modifications by the rules given.
I wonder how little wiggle-room there needs to be for a solution to be possible? Does [4, 2, 1] have a solution? You can get to [4, -1, 3, 1] in one step, but I'm not sure where you can go from there.
To be pedantic, this isn't the operation he gave - "split an integer into 2 smaller integers" is the operation, but 2 into [3, -1] is splitting it into a larger and smaller integer.
Read literally, the numbers are constrained to not jump over the 0. It might depend on interpretation (absolute value or distance from positive infinity?); but negatives are tricky to handle with these rules and I think they might be illegal.
> You can get to [4, -1, 3, 1] in one step, but I'm not sure where you can go from there.
Well, it's easy to show that you can't go anywhere from there:
1. You can't combine the 1 with the 3, because you'd have two 4s.
2. You can't combine the -1 with the 4, because you'd have two 3s.
3. You can't (usefully) combine the -1 with the 3, because that just means you never took your previous move of splitting that pair out of the 2.
Those are all the possible applications of the combining rule. You can't apply the splitting rule either:
1. Splitting the four into positive integers will cause a duplicate.
2. Splitting a negative number out of the four will violate the rule against numbers larger than 4.
3. Splitting the 3 has the same problems. You can't take out a negative number without leaving a residual of 4 or more, and any split into positive parts must be 1 + 2, which duplicates the existing 1.
4. Taking a -1, -2, or -3 out of the 1 will leave a duplicate, anything more negative would leave a residual greater than 4, and taking a positive value out of the 1 is impossible.
5. You can split the -1 into [-3, 2]. But at that point you need to get rid of the 1 in final position so that you can combine the -3 into the 4, and there isn't a way to do that. Once you've gotten here, your list already contains every legal positive value.
Huh. There's bound to be some weird constraint there.
Edit: Looking at the sibling comments, I'm disregarding negative numbers here. That probably breaks this line of thought.
For example, a sequence made up of all numbers from 1 to n is unsolvable. A sequence of [1, 2, ..., n-1, n+1] for n >= 3 also seems to be unsolvable, since no number up to n-1 can split into a smaller number (they are all in the list) and n+1 also cannot split, because 1 is in there.
So it seems that there would be an unsolvable class of lists along the lines of [1, 2, ..., k, ..., n-1, n+k] by that argument, because the n+k cannot split ever, because all numbers up to k are in that list.
And I guess for any unsolvable list with a max value of M, you can add any number below M into the unsolvable list, because that just removes moves.
The article says you start with a list of positive integers.
It also clearly shows right in the first example of the first rule that you can go smaller than the smallest initial value: [7, 5, 3] -> [6, 1, 5, 3] is the first legal move it shows, and you'll notice that 1 is smaller than 7, 5, or 3.
* You can never make an integer greater than the largest integer in the original list.
* You can never make a move that results in the same integer appearing in the list more than once.
You can make a negative number and you can go lower than the smallest but not greater than the largest, so you're wrong on both counts.
However, as a result of #1, you in fact cannot make negative integers. It’s not possible to split a nonnegative integer into two lesser-or-equal integers such that either of them is negative.
I believe Sharlin is correct here. Rule #1 states `Split an integer into two smaller integers.` 1 -> [2, -1] breaks this rule due to 2 not being smaller than 1.
Any combination where all consecutive numbers are present is unsolvable. That is [1..n], in any order.
That's because it is impossible to split any number (it would create a duplicate), and it is also impossible to merge (it would either create a duplicate or a number > n).
[2..n] (in any order) is also unsolvable for the same reason, and there are certainly many others, e.g. [1, 2, 4]. Finding out whether a problem is solvable is an interesting problem in itself. A good solver should not only find the shortest solution(s) if there is one, but also determine whether the problem is solvable at all.
It seems that if the numbers are put in ascending order, then there would have to be at least two gaps in the sequence to make meaningful progress?
Yeah, the problem is slightly underspecified, though it’s pretty obvious that you’re not allowed to do swaps, as that would make the problem trivial.
I mean that if you allow swapping then every instance of the problem becomes trivial because swapping is all that reversing is. Doesn’t mean the problem as intended is always solvable.
Neat little problem! You could throw SAT solvers and stuff at it, but it's also obviously a graph search problem: each state is a vertex, each edge is an operation that leads to another vertex. So Dijkstra works (or really just BFS, since the edges are unweighted), but it's not the fastest in the world.
The fun one to play with would be A-star, but you'd have to find a suitable heuristic. One obvious candidate is that if you have a target list that is X long and a current list that is Y long, then you need at least abs(Y - X) steps to get there (since each step either adds or removes a number). That's admissible and would probably speed up the search quite a bit compared to regular BFS, but you could probably do a lot better. I'm thinking a heuristic based on the number of inversions or something.
I would suspect if you found a really good heuristic for A-star, that's about as good as you're going to get. Though maybe there's something more clever I'm not spotting.
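A sketch of that A-star idea in Python, with exactly the length-difference heuristic described above (the helper names are mine, and the move generator is my reading of the rules): since each move changes the list length by one, the heuristic never overestimates, so the first time the reversed list is popped the path is a shortest solution.

import heapq

def shortest_solution(start):
    start = tuple(start)
    target, max_val = start[::-1], max(start)

    def legal_moves(state):
        out = []
        for i, v in enumerate(state):
            out += [state[:i] + (a, v - a) + state[i+1:] for a in range(1, v)]
        for i in range(len(state) - 1):
            if state[i] + state[i+1] <= max_val:
                out.append(state[:i] + (state[i] + state[i+1],) + state[i+2:])
        return [s for s in out if len(set(s)) == len(s)]

    def h(state):
        # each move adds or removes exactly one element, so the length
        # difference never overestimates the remaining distance
        return abs(len(state) - len(target))

    best = {start: 0}
    frontier = [(h(start), 0, start, [start])]
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == target:
            return path  # list of states, from start to its reverse
        if g > best[state]:
            continue  # stale queue entry
        for nxt in legal_moves(state):
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # unsolvable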
Start with a list of positive integers, (e.g. [7, 5, 3]) and your goal is to make the same list, in reverse ([3, 5, 7]).
Operations:
1) Split an integer into two smaller integers. (e.g. [7, 5, 3] → [6, 1, 5, 3])
2) Combine (add) two integers into a larger one. (e.g. the reverse of the previous example)
Restrictions:
1) You can never make an integer greater than the largest integer in the original list.
2) You can never make a move that results in the same integer appearing in the list more than once.
Here's my Prolog code which solves (small inputs of) the puzzle, taking the core grammar of moves//3 from the Water Jug solution in Markus Triska's Power of Prolog video "Sparrows on Eagles: Delegate your work to Prolog!"[1] (I couldn't have written that style myself from scratch):
:- use_module(library(dif)). % Sound inequality
:- table split/2, merge/3, moves//3.

% Replace one number H from a list with
% two A and B which sum to that number.
split([], []).
split([H|T], [A,B|T]) :-
    between(1, H, A),
    between(1, H, B),
    dif(A, B),
    H is A + B.
split([H|T], [H|T2]) :-
    split(T, T2).

% Merge two adjacent numbers A and B from a list by
% summing into H, unless that would exceed list Max.
merge([], _, []).
merge([H|T], Max, [H|T2]) :-
    merge(T, Max, T2).
merge([A,B|T], Max, [H|T]) :-
    H is A + B,
    H =< Max.

% Describes moves from starting state S0
% to Target state, by splitting, merging,
% and checking for duplicates using sort.
moves(S0, _, Target) --> { member(Target, S0) }.
moves(S0, Max, Target) --> [Ns],
    { select(Ns0, S0, S),
      ( split(Ns0, Ns)
      ; merge(Ns0, Max, Ns)
      ),
      sort(Ns, NsSet), same_length(Ns, NsSet) },
    moves([Ns|S], Max, Target).

solve(S0, [S0|Moves]) :-
    max_list(S0, Max),
    reverse(S0, Target),
    phrase(moves([S0], Max, Target), Moves).
It will run in https://swish.swi-prolog.org/: click 'Empty' then 'Program', paste the code in, and run queries (without the ?- and trailing .) in the lower right.
Taking the technique from the video, the query uses length/2 to lengthen the list of answer moves one at a time before the answer is sought, so this code does iterative deepening and will find the shortest sequence of moves first.
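A query in that style (my reconstruction of what's being described, so treat it as a sketch) would be something like:

?- length(Sol, _), solve([7,5,3], Sol).

length/2 enumerates candidate lengths 0, 1, 2, ... for Sol, so the first answer returned uses the fewest moves; Sol is then the whole sequence of states from [7,5,3] to [3,5,7].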
I like it, but it strikes me that there are actually not that many valid moves you can make in the example problem. Maybe this isn't true in other examples, but I found most of my time thinking about this was simply realizing that most of my desired moves were off limits. Perhaps that's the point, it just struck me.
To solve a particular position, I just use level-by-level breadth-first search until a level contains two values that are reverses of each other.
To explore the entire state space of possible initial positions, I use a number of tricks; I'll be writing that up pretty soon. I've explored through n=12 already, and expect to finish n=13 and n=14 pretty soon. I'm not sure if I'll be able to do n=15.
And by the way, I've found a position for n=14 that requires 206 moves to solve.
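If I'm reading that right, the stopping condition works because the rules are symmetric under reversing the list and every move can be undone, so if some BFS level k contains a state together with its reverse, the two half-paths stitch into a full solution of length 2k (and no shorter one can exist, or a mutually reversed pair would have shown up at an earlier level). A Python sketch of my reconstruction of that check, not the actual code:

def min_moves(start):
    start = tuple(start)
    max_val = max(start)

    def legal_moves(state):
        out = []
        for i, v in enumerate(state):
            out += [state[:i] + (a, v - a) + state[i+1:] for a in range(1, v)]
        for i in range(len(state) - 1):
            if state[i] + state[i+1] <= max_val:
                out.append(state[:i] + (state[i] + state[i+1],) + state[i+2:])
        return [s for s in out if len(set(s)) == len(s)]

    seen, level, depth = {start}, {start}, 0
    while level:
        if any(s[::-1] in level for s in level):
            return 2 * depth  # length of a shortest solution
        nxt_level = set()
        for s in level:
            for t in legal_moves(s):
                if t not in seen:
                    seen.add(t)
                    nxt_level.add(t)
        level, depth = nxt_level, depth + 1
    return None  # state space exhausted: no solution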
Squinting at this... Feels like there is some relationship to asking if you can tile a triangle into integer-length non-isosceles triangles.
It feels like it is typically going to be impossible. Most puzzles will seem to have only a few legal moves at a time, all of which will either loop back to where they are or lead to victory. Generally speaking though, odd numbers are useful.
My first reaction to this was, "neat coding question for interviews". I started to try to solve it for a few seconds, then thought of when I've asked coding questions. Invariably starting by saying "The goal is to see how you approach and work through the problem, and less so whether you come up with the optimal solution". Then I thought, what if the interviewer and interviewee tackled the problem together, both coming in cold? And that was stated as such, up front? It would be much closer to the real world of tackling novel problems as a team. Taking the pressure of "I need to come up with the answer" off the interviewee should result in real focus on "working through the problem", and decrease interviewee stress; if the interviewer doesn't know the answer yet, how can they expect the interviewee to come up with one?
There are a couple of obvious downsides like:
What if it turns out the problem is too trivial? Move on to the next one.
What if the interviewee has already encountered the problem before? No different than if the interviewer posed an already-solved problem.
It would be incumbent upon the interviewer to not reveal a solution if they come up with it first of course. And the evaluation of "how you approach and work through the problem" is less definite than whether an answer (brute-force or optimal) was achieved; but if approach is indeed the important thing, that needs to be evaluated regardless. I'm sure there are other downsides I'm not seeing off the top of my head.
I can't be the first person to come up with this strategy, nor try it in real interviews. Has anybody attempted this? Was it successful or not, and why?
I can't tell if this strategy that just popped into my head is of value :-).
It's a problem because it's not deterministic. You need some kind of determinism when comparing 20 people a month and need to justify your position, especially if you're the only one saying no. The biggest problem is: are you, the interviewer, having a good day or a bad day? In your scenario, your performance is necessarily coupled with the candidate's, which is exactly what you don't want, for this goal of determinism.
I think a modification of this works well, and it's what I do:
1. Come up with a problem that's loosely contextually related to the work, and open ended. Leaving it open ended will test their communication and "problem navigation". The contextual relation allows you to, at the end of the interview, explain how it's contextually related to the work. This helps them understand their own performance, rather than leaving them feeling tricked with random puzzles.
2. If the problem is too out of context for the candidate's experience (which is often fine), provide a path that teaches them the background they would need. Make this "training" path available and known to everyone. Make sure it doesn't take too much time. This helps the determinism since it doesn't rely on the luck of some specific knowledge. Also, someone who communicates that they want to go this path is the better candidate, regardless of experience, since information seeking is the best attribute of a colleague.
3. Explain that it's purposefully meant to be open ended, so they should feel free to ask questions, and it's ok if they get stuck. This allows you to collaborate in a way that you can judge against others. Also, it reduces the adrenaline, which is important, because you can easily lose good candidates who "freeze up". Related: maintain positivity. If you seem annoyed, their performance can irrationally plummet. Personally, I'll do a second round if I see someone freeze up, especially if they haven't done many interviews, because I don't think interviewing for the skill of being interviewed is interesting, since I'm interviewing them to work with them.
4. With the above, the metric for their performance is all about communication, problem solving, and skill. The amount of explanation they needed to understand the fundamental problem, how well they could fit it in their head, the amount of help they needed to implement it, the number of mistakes they made, and how they reached out for help can all be used to justify your yes or no, in a predictable way that can be compared against others.
Across about 30 people, my observation of their actual performance, compared to my prediction from the interview performance, has been pretty darn close, with only a few outliers. The outliers were those that were either too familiar, or not at all familiar, with the problem space. The too familiar people appear to have better performance during the interview, since they're working from rote. The out of context people have the burden of not having the relationally-compressed version of the knowledge in their head, so run out of working memory. It's very easy, and somewhat fascinating, to see the moment when someone runs out. But, this is why the problem should be loosely related to the work: it's a valid criticism that "they're going to take significant time to ramp!".
I apologize for the wall, but this is something I'm interested in, and would love to see how others handle it. I think the Jim Keller way is the best, but it requires more than 45 minutes, and a certain level of seniority on the candidates side.
Seems like there are going to be non-search algorithmic solutions that solve these non-optimally, then god's solution where each split or combine works towards putting more than one number in place.
I agree, but I enjoyed doing Google's semi-secret random-invite-only "Foobar" challenge, which gives you a series of coding problems with a silly story attached.
I think even that very minimal structure helps mentally elevate it from a boring math problem to a "real" scenario.
The broader context could be this: look at this puzzle as a way to reinvent sorting as a delegation of operations (split + combine) instead of the traditional "swapping" of values. As CPU arithmetic is faster than memory operations, some real treasure may be hidden in this new approach.
That is clearly nonsense, as the results of performing the arithmetic still need to be written back to memory, so it's just a swap with extra operations. See also the xor-swap.
I tried doing leetcode for a while but basically just get bored of it
My brain is wired to solve problems and puzzles, but not in isolation. Stripped of context, I just can't convince myself there's any value in it and I bounce off just as you describe.
It's not possible in 4 steps or fewer because, if you ignore the no-duplicates rule, there is only 1 possibility and that one has duplicates. It's not possible in 5 steps because each move changes the length of the list by one, so an odd number of steps cannot bring you back to an odd-length list; solutions will always have an even number of steps. So six must be the shortest.
My feel for this type of puzzle is that there is a 'gravity' from the higher to lower value integers. So you want to help integers flow from the 7 to the 3. The state of the list then represents a sieve that dynamically restricts the flow paths from one step to the next. So at any time step your possible paths to flow the integers from 7 to 3 are quite restricted.
The first step of 753 -> 7143 may seem arbitrary at first, but you quickly realise that most other options result in long awkward paths where you move integers back and forth, or deadends.
For example, if you decide to split the 7 first, your valid moves are 753 -> 6153 or 753 -> 1653. The first move still leaves you overloaded at the leftmost position, and you still need another split because you can't combine 1+5 or 5+3 due to duplicates or exceeding 7. So you don't really feel closer. Same with 1653, which puts you in a position where all combinations exceed 7, and you need to further break down numbers, but you've already used up all your valid odd numbers, so you have to break the 6 into 2 and 4 -> 12453. This is a dead end.
> In reality, it is nothing more than memoization of function calls into an array.
No, it's the next step after memoization.
Recursion is the slowest approach to implementing many algorithms, as it duplicates a lot of the computation of subproblems: it solves the same subproblems many times.
Memoization is a cheap fix: it will not solve subproblems multiple times. It stores the subproblems it has already solved, along with their solutions. But it has an increasingly large state, keeping solutions of subproblems in memory that are actually no longer needed.
Dynamic programming is a transformation of algorithms that rely on memoization. It takes such an algorithm and makes the state as small as possible, so solutions of subproblems are discarded when they are no longer needed.
Like in memoization, dynamic programming will keep around previous subproblem solutions. But it will only keep the ones around it still needs in the future, and discard what is no longer needed. This often requires a change in representation of that state and in the order in which the subproblems are solved. But it will run much faster than recursion, on less memory than memoized approaches.
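A standard toy illustration of that distinction (Fibonacci rather than this puzzle, and just a sketch): the memoized version keeps every subproblem it has ever solved alive, while the dynamic-programming version orders the subproblems so that only the two values still needed are retained.

from functools import lru_cache

# memoized recursion: every subproblem ever solved stays cached (O(n) state)
@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# dynamic programming in the sense above: solve subproblems in an order that
# lets you discard everything except the two values still needed (O(1) state)
def fib_dp(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a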
You: Dynamic programming means saving the results of computations
Me: That's just normal programming, there is no reason to use a term that someone made up in the 60s to be intentionally nonsensical.
You: That will blow up your memory and slow down your program!
What are you talking about here? All I said was that what you described was normal programming. I didn't come up with anything different, we're still talking about whatever you wrote. Somehow now storing data, something that happens in every program ever made, 'blows up memory'.
But there is a difference between memoization and dynamic programming. Memoization or otherwise indiscriminately storing the results of computations will blow up your memory use on many problems, where a dynamic programming approach would not.
Dynamic programming does not mean saving the results of computations, it is about saving the _minimal_ number of results, without having to recompute already solved subproblems later on.
I agree that the name chosen is nonsensical, but the concept is very much a real thing. And a quite important one at that.
Your previous comment: dynamic programming will keep around previous subproblem solutions
Your comment here: Dynamic programming does not mean saving the results of computations
> indiscriminately storing the results of computations
This is not something that happens. No one is out there storing a bunch of stuff they don't need on purpose, running out of memory, then calling it a day. That's like saying "some people walk straight into walls, but in dynamic walking we go around walls!".
> it is about saving the _minimal_ number of results, without having to recompute already solved subproblems later on.
Again, this is just what programming is some times. You save results instead of recomputing stuff.
> No one is out there storing a bunch of stuff they don't need on purpose, running out of memory, then calling it a day.
There are a lot of people (such as my students) that would take a DP problem, solve with recursion, throw some caching [0] in and call it a day. They are often surprised by the 1000x speedups using a DP algorithm instead, no caching needed.
Dynamic programming was a made-up nonsense term from the 50s, intended to be opaque and vague so they wouldn't be bothered. It doesn't mean anything, but people try to make up some backwards rationalization.
Computing a factorial by recursively recomputing all the previous factorials, when you already have some of them done, is an insane way to work. Giving a special name to something straightforward and sane, like saving some results, is like walking on broken glass and then calling putting something over your feet "shoe walking". There is no special fancy label for doing the simple sane version of something.
Different sets of moves can lead to the same state, so if you can cache that a given state will not lead to a solution you can stop exploring that branch when you encounter it again.
The number of variables is not constant in each step, so modeling this would be trickier than usual. I feel like there is probably a more efficient DP approach for this though.
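In search terms, the cache described above falls out as the shared visited set of a full depth-first (or breadth-first) exploration: any move order that runs into a state some other branch has already expanded can stop right away. A small Python sketch of that, with the same made-up move generator as in the other sketches in this thread:

import sys

def reaches_reverse(start):
    start = tuple(start)
    target, max_val = start[::-1], max(start)
    seen = set()

    def legal_moves(state):
        out = []
        for i, v in enumerate(state):
            out += [state[:i] + (a, v - a) + state[i+1:] for a in range(1, v)]
        for i in range(len(state) - 1):
            if state[i] + state[i+1] <= max_val:
                out.append(state[:i] + (state[i] + state[i+1],) + state[i+2:])
        return [s for s in out if len(set(s)) == len(s)]

    def dfs(state):
        if state == target:
            return True
        if state in seen:  # some other move order already explored this state
            return False
        seen.add(state)
        return any(dfs(n) for n in legal_moves(state))

    sys.setrecursionlimit(10**6)  # the state graph can be deep
    return dfs(start)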
Only positive integers are meant to be allowed. (Zero excluded.)
Combining is meant to work on adjacent pairs of integers.
If this is used in coding interviews, I deny any responsibility. Unless it's used in interviewing me, in which case I will totally take credit.