Been thinking about VR workspaces since sometime in the '80s. Broadly, they suck. It seems so cool, but in practice VR adds pointless overhead to efficient UI. Windows with affine transformations suck at their one job. Very few people use Second Life as an IDE.
The big win, as far as I can tell, would be to engage the user's spatial memory. (There is a small but non-zero niche for 3D visualization of complex systems: weather, large molecules, etc.) You're going to want to combine "memory palace" with "zooming" UI in a kind of pseudo-3D (I think of it as 2.618...D, but the exact fractal dimension isn't important, I don't think). Then infuse with Brenda Laurel's "Computers as Theatre"...
They might suck right now, but this is a relatively nascent application of the technology.
Wait until resolution improves and we break out of the "desktop" paradigm. We could have a collection of unlimited windows and tabs that exist in a continuum around us, and we could use gestures to organize and surface the contextually relevant ones.
We won't need a bulky multi-monitor setup, and we could work remotely nearly anywhere. Imagine carrying your workspace with you.
> The big win, as far as I can tell, would be to engage the user's spatial memory.
Absolutely! Physical workspaces and work benches are incredibly functional because we are spatial animals. Breaking out of the limitations of using a screen could unlock more of our senses for use in problem solving.
I'm extremely excited about this technology. It will be great for software engineers, creatives (2d and 3d artists), mechanical engineering, CAD, ... you name it.
I really hope this keeps getting pushed forward. While I'm using all of my spare cycles on a tangentially-related problem ___domain, I'd be more than happy to donate money and ideas. This technology will be a dream come true if it continues to mature.
@echelon: This is exactly along the dimension we were thinking: VR as working environment for problem solving & creativity (via killer VR office apps that have yet to be invented). If you're curious, we have mapped out our long-term ideas in writing/deck format elsewhere. If you email me, I can send it over to you: george.w.singer [at] gmail.com.
> Wait until resolution improves and we break out of the "desktop" paradigm. We could have a collection of unlimited windows and tabs that exist in a continuum around us, and we could use gestures to organize and surface the contextually relevant ones.
You can get 90% there by just using multiple desktops IMHO, at least that's my experience.
I think in the mid-term, the big win would be the ability to have 4-10 large monitors, but with a cheaper, mobile, and compact solution, with the eyes focused further away.
Headsets have roughly doubled in resolution and halved in price, and some have become wireless with better optics. A long way from 4-5 years ago, but we still need another doubling of resolution (or maybe more) and an increase in wearing comfort (lighter, more compact), plus an improvement in wireless latency and maybe a reduction from $400 to $300, and you're looking at something that would be useful just as a replacement for multiple monitors.
Plus probably improvements in automatically registering where your laptop and mouse are. In principle that could be done with a software update to the inside-out tracking software.
Additionally, some improvement is possible even at roughly the current resolution, with better subpixel rendering and an RGB pixel layout.
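To make the subpixel point concrete, here's a minimal sketch of the core idea behind ClearType-style rendering on an RGB stripe layout: rasterize text at 3x horizontal resolution, then treat each pixel's R, G, and B channels as three independent horizontal samples. (This is an illustrative toy, not any real headset's or OS's renderer; `pack_subpixels` and its dark-text-on-white convention are my own assumptions, and real implementations also filter across subpixels to reduce color fringing.)

```python
def pack_subpixels(coverage_row):
    """Pack one row of glyph coverage values (floats 0..1), sampled at
    3x horizontal resolution, into RGB pixels on an RGB stripe layout.

    Each pixel consumes three consecutive coverage samples, one per
    channel, effectively tripling horizontal text resolution.
    Convention: dark text on a white background, so higher coverage
    dims the corresponding channel.
    """
    assert len(coverage_row) % 3 == 0, "need 3 samples per output pixel"
    pixels = []
    for i in range(0, len(coverage_row), 3):
        r, g, b = coverage_row[i:i + 3]
        pixels.append(tuple(round(255 * (1 - c)) for c in (r, g, b)))
    return pixels


# A fully-covered sample triple becomes a black pixel; an empty one, white.
print(pack_subpixels([1, 1, 1, 0, 0, 0]))  # [(0, 0, 0), (255, 255, 255)]
```

The catch in VR is that the headset's lenses and reprojection resample everything anyway, so the glyph-to-subpixel alignment that makes this work on a flat desktop monitor is much harder to preserve, which is presumably why the Quest question above is non-trivial.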
Seeing what has been done with the Oculus Quest since I last checked out VR like 3 years ago has left me pretty impressed. A lot of this stuff with multiple windows in this demo could be done natively and wirelessly with the Quest (which runs a kind of Linux). The inside-out tracking is impressively good. If combined with an insert that lets you attach your tracked controllers to a Bluetooth mouse and keyboard (so the Quest can register their positions in 3D space to allow proper rendering in-headset), it could give you a high-productivity workstation experience just about anywhere with WiFi (could be through a phone). Hand-tracking (which already works) could even allow gestures, although I'm not sure how important that is. Can the Quest do subpixel text rendering like ClearType, but in 3D?
This. I can't see moving my head and making gestures with my hands ever beating tiled windows and good keyboard shortcuts. I don't want a VR enabled workspace (ie I don't want my windows to move when my head moves), but I might prefer a sufficiently high resolution headset to multiple monitors if the software support was reasonably seamless.
This actually used to be the default behavior in Simula a few months ago, and it worked really well. There are a few use cases where it imposes a tradeoff, though (specifically: it makes it hard to hold windows in a specific orientation in space when you might accidentally look at them), so we now use a key binding to do this instead.
> it makes it hard to hold windows in a specific orientation in space
I don't know if you're still watching this thread, but it would be interesting to hear an example of when one might care to lock a window's orientation.
Great ideas! The Method of Loci is a very powerful concept that takes excellent advantage of how human memory works. It combines nicely with zooming user interfaces, and it's a great way to support user-defined, editable pie menus that you can easily navigate with gestures.
I've experimented with combining the kinesthetic advantages of pie menus and gestures with the method of loci and zooming interfaces, including a desktop app called MediaGraph for arranging and navigating music, and an iPhone app called iLoci for arranging notes, links, and interactive web-based applets.
>MediaGraph Music Navigation with Pie Menus. A prototype developed for Will Wright’s Stupid Fun Club.
>This is a demo of a user interface research prototype that I developed for Will Wright at the Stupid Fun Club. It includes pie menus, an editable map of music interconnected with roads, and cellular automata.
It uses one kind of nested hierarchical pie menu to build and edit another kind of geographic networked pie menu.
>iPhone iLoci Memory Palace App, by Don Hopkins @ Mobile Dev Camp.
>A talk about iLoci, an iPhone app and server based on the Method of Loci for constructing a Memory Palace, by Don Hopkins, presented at Mobile Dev Camp in Amsterdam, on November 28, 2008.
DonHopkins, 81 days ago, on: Nototo – Build a unified mental map of notes
>Great idea, I totally get it! Your graphics are beautiful, and the layering and gridding look helpful.
It reminds me of some experimental user interfaces with pie menus I designed for creating and editing memory palaces: "iLoci" on the iPhone for notes and pictures and links and web browser integration in 2008, and "MediaGraph" on Unity3D for organizing and playing music in 2012, both of which I hope will inspire you with ideas to implement (like pie menus, and kissing!) or ways to explain what you've already created.
>A memory map editor can not only benefit from pie menus for editing and changing properties (like simultaneously picking a font with direction, and pulling out the font size with distance, for example), but it's also a great way for users to create their own custom bi-directionally gesture navigable pie menus by dragging and dropping and "kissing" islands together against each other to create and break links (like bridges between islands). (See the gesture navigation example at the end of the MediaGraph demo, and imagine that on an iPad or phone!) [...]
>I like the idea of moving away from hierarchical menu navigation, towards spatial map navigation. It elegantly addresses the problem of personalized, user-created menus, by making linking and unlinking locations as easy as dragging and dropping objects around and bumping them together to connect and disconnect them. (Compare that to the complexity of a tree or outline editor, which doesn't make the directions explicit.) And it eliminates the need for a special command to move back up in the menu hierarchy, by guaranteeing that every navigation is obviously reversible by moving in the opposite direction. I believe maps are a lot more natural and easier for people to remember than hierarchies, and the interface naturally exploits "mouse ahead" (or "swipe ahead") and is obviously self-revealing.
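The "direction picks the item, distance pulls out the parameter" mechanic described above is easy to sketch. Here's a minimal, hypothetical hit-test for a pie menu: the drag angle selects a slice, and the drag distance beyond a dead zone scales a continuous value (e.g. font size). The function name, the dead-zone radius, and the size scaling are my own illustrative choices, not how MediaGraph or iLoci actually implement it.

```python
import math

def pie_select(dx, dy, items, dead_zone=10.0):
    """Map a drag vector (dx, dy), in pixels from the menu center,
    to (selected item, parameter value).

    Direction chooses the slice; distance past the dead zone "pulls
    out" a continuous parameter, here a font size starting at 8pt.
    Returns (None, None) inside the dead zone (no selection yet).
    """
    dist = math.hypot(dx, dy)
    if dist < dead_zone:
        return None, None
    # Screen y grows downward, so negate dy for conventional angles
    # (0 = right/east, counterclockwise positive).
    angle = math.atan2(-dy, dx) % (2 * math.pi)
    slice_width = 2 * math.pi / len(items)
    # Offset by half a slice so the first item is centered on angle 0.
    index = int(((angle + slice_width / 2) % (2 * math.pi)) // slice_width)
    size = 8 + (dist - dead_zone) * 0.25
    return items[index], size


items = ["East", "North", "West", "South"]
print(pie_select(100, 0, items))    # ('East', 30.5): rightward drag, 100px out
print(pie_select(0, -100, items))   # ('North', 30.5): upward drag on screen
print(pie_select(1, 1, items))      # (None, None): still inside the dead zone
```

Because selection depends only on the drag vector and not on anything being drawn, this is exactly what makes "mouse ahead" work: an expert can gesture through slices before the menu ever appears on screen.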
https://en.wikipedia.org/wiki/Second_Life
https://en.wikipedia.org/wiki/Method_of_loci
https://en.wikipedia.org/wiki/Zooming_user_interface
https://en.wikipedia.org/wiki/Brenda_Laurel - https://www.goodreads.com/book/show/239018.Computers_as_Thea...