Ted Selker, who developed the Trackpoint at IBM Almaden Research Lab, always calls it the "Joy Button", but IBM just wouldn't go with that. But at least they made it red (thanks to industrial designer Richard Sapper)!
>Input devices should allow users to configure a velocity mapping curve (which could be negative to reverse the motion). X-Windows had a crude threshold-based mouse acceleration scheme, but it's better to have an arbitrary curve, like the TrackPoint uses, that can be optimized for the particular user, input device, screen size, and scrolling or pointing task at hand.
>One of the patented (but probably expired by now) aspects of the Trackpoint is that it has a very highly refined pressure=>cursor speed transfer function, that has a couple of plateaus in it that map a wide range of pressures to one slow or fast but constant speed. The slow speed notch is good for precise predictable fine positioning, and the fast speed notch is tuned to be just below eye tracking speed, so you don't lose sight of the cursor. But you can push even harder than the fast plateau to go above the eye tracking plateau and flick the cursor really fast if you want, or push even lighter than the slow plateau, for super fine positioning. (The TrackPoint sensor is outrageously more sensitive than it needs to be, so it can sense very soft touches, or even your breath blowing on it.)
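The plateau idea described above can be sketched as a piecewise transfer function. This is a hypothetical illustration, not IBM's actual tuning: the force breakpoints and speeds below are invented, and a real implementation would interpolate a calibrated curve rather than hard-code branches.

```python
def transfer(force: float) -> float:
    """Map normalized sensor force (0..1) to cursor speed (pixels/tick).

    Two plateaus: a slow one for precise positioning and a fast one tuned
    to stay below eye-tracking speed. Pressing harder than the fast
    plateau "flicks" the cursor; pressing lighter than the slow plateau
    gives super-fine positioning. All constants are made up for the sketch.
    """
    if force < 0.05:                       # below slow plateau: super fine
        return force * 20.0
    if force < 0.25:                       # slow plateau: constant slow speed
        return 1.0
    if force < 0.45:                       # ramp between the plateaus
        return 1.0 + (force - 0.25) * 45.0
    if force < 0.80:                       # fast plateau: just below eye tracking
        return 10.0
    return 10.0 + (force - 0.80) * 200.0   # beyond the plateau: flick
```

Note the function is continuous at every breakpoint, so the cursor speed never jumps as pressure varies smoothly.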
Ted Selker demonstrated an early version of the "pointing stick" in 1991:
> it's better to have an arbitrary curve, like the TrackPoint uses, that can be optimized for the particular user
Maybe you could even learn the curve by counting pointing error (measure the overshoot before clicking), and maybe make it two dimensional (i.e. the sensitivity and acceleration are different based on the direction).
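A minimal sketch of that learning idea, assuming overshoot before a click is the error signal. The class name, constants, and update rule are all invented for illustration; a real adaptive scheme (as Ted describes below) would need care to stay stable.

```python
class AdaptiveGain:
    """Nudge a sensitivity gain based on measured pointing overshoot."""

    def __init__(self, gain: float = 1.0, rate: float = 0.05):
        self.gain = gain
        self.rate = rate

    def update(self, overshoot_px: float, target_px: float) -> None:
        # Relative overshoot: how far past the target the cursor traveled
        # before the user clicked, as a fraction of the travel distance.
        err = overshoot_px / max(target_px, 1.0)
        if err > 0.1:
            # Consistently overshooting: lower the gain.
            self.gain *= 1.0 - self.rate * err
        else:
            # Accurate clicks: creep the gain back up.
            self.gain *= 1.0 + self.rate * 0.1
        # Clamp so the adaptation can't run away.
        self.gain = min(max(self.gain, 0.25), 4.0)
```

The two-dimensional version would keep a separate gain per movement direction and index into it with the motion vector's angle.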
I use a program called BrightML to do automatic screen brightness control. It's innovative in that it accounts not only for time of day, ___location, and ALS if you have it; but also considers which application is focused when you set the brightness. If there is an application which you always turn down the brightness for when it's focused, it'll turn it down in anticipation.
A similar thing could probably be done for the TrackPoint acceleration.
That's a great idea, especially since the ideal profile would differ between different apps (or different parts of the same app, like drawing area -vs- toolbar -vs- text editor).
Ted wrote some amazing stuff about adaptive curves and other cool ideas he tried, at the end of his whirlwind exposition to Bill Buxton:
>The ones that got away are more poignant. I designed adaptive algorithms we were exploring - we were able to raise the tracking plateau by 50% for some people, and if we had had the go-ahead we could have made that stable and increased performance tremendously. It seems some people used tendon flex to improve pointing - we found this looked like overshoot and we had trouble making the adaptive algorithms stable in the time box we gave it. We made a special application that was a game people played to improve their pointing - as their game play improved the transfer function improved too! We made a surgical tool that used a Trackpoint to allow tremor-free use of a camera from a laparoscope. We made a selector for the FAA to do ground traffic control that saved a multi-hundred-million dollar contract for IBM with the government. I would have loved the product to change cursor movement approach for form filling, text editing and graphical applications; again that could probably double performance. Yes, I still want to do it NOW. We built a gesture language into the Trackpoint that can be accessed in the firmware; the only aspect the driver exposes is Press to Magnify or Press to Select. We created probably two dozen haptic feedback Trackpoint designs. They improved novice performance and were loved by the special needs community. I did a preliminary study that showed how novices selected faster with it; the product group saw no need to spend the money for that. We made a science experimental test bed to teach physics that never shipped; we made many versions of multi-Trackpoint keyboards that never shipped. We made many other things too - a versatile pointing device for the table called Russian tea mouse allowed for full hand, thumb, finger, or in-the-palm use. We made pen like stalks that allowed selection without taking hands off the keyboard. We made devices that used one set of sensors to run two input devices.
We made an electromechanical design used by one special user. We found that brushing the top instead of pressing it could give amazing dynamic range, at the expense of having to cycle the finger for long selections. The joystick for this had no deadband, it had an exquisite sensitivity and control … we never made an in-keyboard device that shipped with this alternative set of algorithms and scenario. I designed better grippy top ideas that never made it; also better sensitivity solutions that never made it either. And I hate to say it but there are many other improvements that I made or would like to make that I could elaborate further on but will stop here…
Vis-à-vis mamelons, Ted actually built a prototype Thinkpad keyboard with TWO trackpoints, which he loved to show at his New Paradigms for Using Computers workshop at IBM Almaden Research Lab.
While I'm not sure if this video of the 1995 New Paradigms for Using Computers workshop actually shows a dual-nippled Thinkpad, it does include a great talk by Doug Engelbart, and quite a few other interesting people!
The multi-Trackpoint keyboard was extremely approachable and attractive, and everybody who saw them instantly wanted to get their hands on them and try them out! (But you had to keep them away from babies.) He made a lot of different prototypes over time, but unfortunately IBM never shipped a Thinkpad with two nipples.
That was because OS/2 (and every other contemporary operating system, window system, and application) had no idea how to handle two cursors at the same time, so it would have required rewriting all the applications and GUI toolkits and window systems from the ground up to support dual trackpoints.
The failure to inherently support multiple cursors by default was one of Doug Engelbart's major disappointments about mainstream non-collaborative user interfaces, because collaboration was the whole point of NLS/Augment, so multiple cursors weren't a feature so much as a symptom.
Bret Victor discussed it in a few words on Doug Engelbart that he wrote on the day of his death:
>Say you bring up his 1968 demo on YouTube and watch a bit. At one point, the face of a remote collaborator, Bill Paxton, appears on screen, and Engelbart and Paxton have a conversation.
>"Ah!", you say. "That's like Skype!"
>Then, Engelbart and Paxton start simultaneously working with the document on the screen.
>"Ah!", you say. "That's like screen sharing!"
>No. It is not like screen sharing at all.
>If you look closer, you'll notice that there are two individual mouse pointers. Engelbart and Paxton are each controlling their own pointer.
>"Okay," you say, "so they have separate mouse pointers, and when we screen share today, we have to fight over a single pointer. That's a trivial detail; it's still basically the same thing."
>No. It is not the same thing. At all. It misses the intent of the design, and for a research system, the intent matters most.
>Engelbart's vision, from the beginning, was collaborative. His vision was people working together in a shared intellectual space. His entire system was designed around that intent.
>From that perspective, separate pointers weren't a feature so much as a symptom. It was the only design that could have made any sense. It just fell out. The collaborators both have to point at information on the screen, in the same way that they would both point at information on a chalkboard. Obviously they need their own pointers.
>Likewise, for every aspect of Engelbart's system. The entire system was designed around a clear intent.
>Our screen sharing, on the other hand, is a bolted-on hack that doesn't alter the single-user design of our present computers. Our computers are fundamentally designed with a single-user assumption through-and-through, and simply mirroring a display remotely doesn't magically transform them into collaborative environments.
>If you attempt to make sense of Engelbart's design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart's intent. Engelbart hated our present-day systems.
For what it's worth, supporting multiple cursors on any 90s or later system, save for dragging/selection, is pretty much trivial. If they're just pointing and clicking, then you can use pointer warping and overlays/sprites to create the effect.
You can directly support multiple mice and other input devices by talking to the USB driver directly instead of using the single system cursor and mouse event input queue, and drawing your own cursors manually. (And DirectX / DirectInput lets you do stuff like that on Windows.) But you have to forsake all the high level stuff that assumes only one cursor, like the user interface toolkit. That is what Bret Victor meant by "bolted-on hack that doesn't alter the single-user design of our present computers".
That's ok for games that implement their own toolkit, or use one you can easily bolt on and hack, but not for anything that needs to depend on normal built-in system widgets like text editors, drop-down menus, etc. (And if you roll your own text fields instead of using the system ones, you end up having to reimplement all kinds of input methods, copy/paste/drag/drop, and internationalization stuff, which is super-tricky, or just do without it.)
I implemented multi-player cursors in SimCityNet for X11 (which opened multiple X11 displays at once), but I had to fix some bugs in TCL/Tk for tracking menus and buttons, and implemented my own pie menus that could handle being popped up and tracking on multiple screens at once. TCL menus and buttons originally used global variables for tracking and highlighting, which worked fine with a single X11 display but caused conflicts when more than one user was using a menu or button at the same time, so it needed to store all tracking data in per-display maps. It doesn't make sense to have a global "currently highlighted button" or a "current menu item" since two different ones might be highlighted or selected at once by different people's cursors.
And I implemented special multi-user "voting" buttons that required unanimous consent to trigger (for causing disasters, changing the tax rate, etc).
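The voting-button logic is easy to state in a few lines. This is a hypothetical reconstruction in Python (SimCityNet was TCL/Tk, and the names here are invented): the button only fires its action once every connected player has clicked it.

```python
class VotingButton:
    """A button that triggers only on unanimous consent of all players."""

    def __init__(self, players, action):
        self.players = set(players)   # everyone who must agree
        self.votes = set()            # who has clicked so far
        self.action = action          # callback to run on consensus

    def click(self, player):
        self.votes.add(player)
        if self.votes == self.players:
            self.votes.clear()        # reset for the next vote
            self.action()
```

A real version would also need to handle players joining or leaving mid-vote, and to show each user who has voted so far.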
I had to implement multi-user cursors in "software" by drawing them on the map manually, instead of using X11 cursors.
I wrote about that earlier in a discussion about Wayland and X-Windows:
And it's in the direction of multi-user collaboration that X-Windows falls woefully short. Just to take the first step, it would have to support separate multi-user cursors and multiple keyboards and other input devices, which is antithetical to its single-minded "input focus" pointer-event-driven model. Most X toolkits and applications will break or behave erratically when faced with multiple streams of input events from different users.
For the multi-player X11/TCL/Tk version of SimCity, I had to fix bugs in TCL/Tk to support multiple users, add another layer of abstraction to support multi-user tracking, and emulate the multi-user features like separate cursors in "software".
Although the feature wasn't widely used at the time, TCL/Tk supported opening connections to multiple X11 servers at once. But since it was using global variables for tracking pop-up menus and widget tracking state, it never expected two menus to be popped up at once or two people dragging a slider or scrolling a window at once, so it would glitch and crash whenever that happened. All the tracking code (and some of the colormap related code) assumed there was only one X11 server connected.
So I had to rewrite all the menu and dialog tracking code to explicitly and carefully handle the case of multiple users interacting at once, and refactor the window creation and event handling code so everything's name was parameterized by the user's screen id (that's how you fake data structures in TCL and make pointers back and forth between windows, by using clever naming schemes for global variables and strings), and implement separate multi-user cursors in "software" by drawing them over the map.
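The core of that refactor is replacing a single global tracking variable with state keyed by the user's display id. A minimal sketch in Python (the original was TCL, where the same effect was achieved with naming schemes for global variables; the names below are invented):

```python
# Per-display tracking state instead of one global "currently
# highlighted widget", so two users on different X11 displays can
# each have their own active menu and highlight at the same time.
tracking = {}   # display_id -> {"active_menu": ..., "highlighted": ...}

def get_state(display_id):
    """Fetch (or lazily create) the tracking state for one display."""
    return tracking.setdefault(
        display_id, {"active_menu": None, "highlighted": None}
    )

def highlight(display_id, widget):
    get_state(display_id)["highlighted"] = widget

def pop_up_menu(display_id, menu):
    get_state(display_id)["active_menu"] = menu
```

With a single global, the second user's menu pop-up would clobber the first user's; keyed by display, the two interactions never collide.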
That's right. Implementing multiple cursors in X-Windows, Windows, or MacOS is anything but "pretty much trivial". You basically have to re-implement parts of the window system and user interface toolkit yourself, and throw away or replace all the (extremely non-trivial) services and features of all the code you're replacing (including extremely tricky bits like input methods, internationalization, drag and drop, cut and paste, etc).
Slight topic derail: I recently handed a thinkpad to a person who had grown up using tablets and laptops with "modern" touchpad pointing devices.
It's my older thinkpad I keep around to ssh into things. They were completely and utterly surprised by the presence of the trackpoint and found it to be extremely unusual.
At least for me, TrackPoint is one of those things that once it "clicks" is extremely hard to give up. Working on a laptop without a TrackPoint always feels clumsy and inelegant, even with all the improvements trackpads have seen. There's just something about the combination of a mechanical pointer and where it's placed relative to the keyboard that feels right.
All IBM's original patents on TrackPoint must surely have expired by now, so it always surprised me that we didn't see other OEMs start offering it as an option. There are a couple that have, but only on a few models, and by all accounts non-IBM/Lenovo TrackPoint implementations aren't quite as good for some reason. Disappointing!
I would gladly accept the job of bringing back the golden age of the TrackPoint at Dell; and I don't think it would be tremendously difficult, especially now that any patents on the greatest untapped innovations would by now have expired (sorry, inventors!).
I might have stuck with my Kensington Slimblade, but I began noticing the latency and the polling jitter (since, like most trackballs, it only runs at 125Hz) after switching to a 165Hz display, so I couldn't keep it.
If I had time to add 1000Hz polling to the thing, I'd probably still be using it.
> by all accounts non-IBM/Lenovo TrackPoint implementations aren't quite as good for some reason
Yeah. After many years with old X- and T-series IBM/Lenovo Thinkpads I got a Dell Latitude recently. It's pretty decent but the trackpoint (they call it "pointing stick") sucks in comparison. Also it's not red.
I thought about using at least the IBM dome on it, but they aren't compatible – the IBM/Lenovo dome is higher and the square hole is bigger: https://i.vgy.me/u1SUKm.jpg
I used an IBM Thinkpad from 2003-2008 or so, but never got used to the trackpoint despite several attempts. I was always wildly inaccurate with it. The trackpad, though a god-awful rage-generating device, was still much more usable.