Dream is now available in early access on the Oculus store, so we'd really appreciate any feedback and thoughts people have. We truly believe that immersive technologies like virtual reality can make remote work and collaboration better than existing 2D form factors - especially as the new standalone VR headsets like Quest come to fruition in the coming year.
Dream has been built entirely from scratch, so we got to rethink a lot of the stack. We prioritized certain things, like networking and UI, and we're really proud of the outcome. Doing so also meant it took us a lot longer to bring a product to release, since there was a lot more to do - but it allowed us to integrate WebRTC at the native level, as well as Chromium (by way of CEF), so we can do things like bring up our keyboard when a user hits a text field.
Hope people like it, and want to say thanks to everyone that made it possible!
I'm a big believer in the potential of VR for remote collaboration, and as a fully remote VR development team we already make some use of it. Congrats on shipping, but from what I've seen so far it's not clear to me what this offers over Bigscreen or Oculus Dash desktop sharing. There's a bunch of functionality I'd like to see in this space that nobody has really nailed yet, but while this looks like an interesting project, the articles I've seen so far don't do a great job of explaining what new functionality it brings.
Would be super interested to hear what kind of functionality you're specifically looking for - or is it more that existing functionality doesn't hit the mark? Our goal with this release is to get great feedback so we can make Dream better and more useful, so please let us know what you'd like to see!
To respond to some of your other questions - one thing that we've noticed isn't super clear is that Dream is not doing any form of desktop sharing. We integrated Chromium at the native layer (by way of a forked version of CEF), and the content views are all web browsers. This allows a level of integration with the rest of our stack that is difficult or impossible to achieve with desktop sharing (we actually built desktop sharing, but disabled it in the build for now until we can solve some inherent usability problems).
We're big fans of Bigscreen, but I think they're heavily shifting their focus toward entertainment and watching movies in VR together. Also, we had been working on Dream for 1.5+ years when Dash was announced, and were excited to see some similar ideas there! We're trying to make VR a viable solution for remote work and collaboration, and this has led to many hard decisions - especially as we decided to build the entire stack ourselves. That obviously meant it took us a lot longer to get something out there, but as a result Dream is a lot more intuitive and seamless than you might expect.
For example, our keyboard was heavily inspired by an early Google VR experiment we saw, but after building out a version of it we quickly understood why it wasn't getting people to a viable text-entry solution. We built our own collision system and "interaction engine" to allow views and apps in Dream to respond at the event level of "touch start, touch end", similar to what you'd expect when building an iOS app - and underneath, the interaction engine updates the collision / position of everything in real time. As a result, we've seen people hit 30-40 WPM on our keyboard thanks to the tactile cues we've included (audio/haptics) as well as a kind of activation region, which lets you really time and feel out the key press. It's hard to describe or show this in videos since it's all happening at 90 FPS - but hey, it's a free download, so give it a shot!
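To give a flavor of how an activation region like this can work, here's an illustrative per-frame sketch - all the names are made up for the example, this isn't our actual code:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    struct Key {
        Vec3 center;            // key center on the keyboard plane
        float halfWidth;        // horizontal / vertical extent of the key
        float activationDepth;  // how far "into" the key the fingertip must travel
        bool pressed = false;
    };

    // Called every frame (~90 Hz) with the tracked fingertip position.
    // Fires touch-start / touch-end events, iOS-style.
    void UpdateKey(Key& key, const Vec3& fingertip,
                   void (*onTouchStart)(Key&), void (*onTouchEnd)(Key&)) {
        bool overKey = std::fabs(fingertip.x - key.center.x) < key.halfWidth &&
                       std::fabs(fingertip.y - key.center.y) < key.halfWidth;
        // Depth is measured along the keyboard normal (assumed -z here).
        float depth = key.center.z - fingertip.z;
        bool inRegion = overKey && depth > 0.0f && depth < key.activationDepth;

        if (inRegion && !key.pressed) {
            key.pressed = true;
            onTouchStart(key);  // audio click + controller haptic pulse fire here
        } else if (!inRegion && key.pressed) {
            key.pressed = false;
            onTouchEnd(key);    // character commits on release
        }
    }

The depth band is what lets you time and feel out the press - the cues fire the moment you cross into it, not when you bottom out.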
Dream never asks you to revert to your monitor or take off your headset - this was a strict rule. It means that everything from logging in to inviting someone new to your team had to be possible in VR. To accomplish this, we created a kind of Chromium integration with Dream so that we could run web views that manipulate our engine directly. To us, asking the end user to remove their HMD for any reason is equivalent to asking them to restart their computer - it's really not acceptable.
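For the curious, the general CEF pattern for letting a web view call into a native engine looks something like this - a simplified sketch of the standard V8-extension plumbing, not our actual fork (the dream.inviteTeammate name is invented for the example):

    #include "include/cef_render_process_handler.h"
    #include "include/cef_v8.h"

    // Runs in the render process; forwards JS calls to the browser/engine
    // process over CEF IPC.
    class EngineBridge : public CefV8Handler {
    public:
        bool Execute(const CefString& name, CefRefPtr<CefV8Value> object,
                     const CefV8ValueList& args, CefRefPtr<CefV8Value>& retval,
                     CefString& exception) override {
            if (name == "inviteTeammate" && args.size() == 1 && args[0]->IsString()) {
                CefRefPtr<CefProcessMessage> msg =
                    CefProcessMessage::Create("engine.inviteTeammate");
                msg->GetArgumentList()->SetString(0, args[0]->GetStringValue());
                CefV8Context::GetCurrentContext()->GetBrowser()
                    ->SendProcessMessage(PID_BROWSER, msg);
                retval = CefV8Value::CreateBool(true);
                return true;
            }
            return false;  // unhandled; let CEF report it
        }
        IMPLEMENT_REFCOUNTING(EngineBridge);
    };

    // Called from CefRenderProcessHandler::OnWebKitInitialized to expose
    // a dream.* object to every page.
    void RegisterBridge() {
        const char* ext =
            "var dream = {};"
            "dream.inviteTeammate = function(email) {"
            "  native function inviteTeammate(email);"
            "  return inviteTeammate(email);"
            "};";
        CefRegisterExtension("v8/dream", ext, new EngineBridge());
    }

The engine side then picks the message up in OnProcessMessageReceived and acts on it - which is how a login form in a web view can drive native state.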
Our goal is to demonstrate how immersive technologies like virtual reality can enable remote collaboration and communication use cases - especially how VR, compared to existing 2D video/voice formats, provides an improved layer of presence through nonverbal communication cues.
Ah, ok, yeah that wasn't clear to me from the articles I saw, it sounded like desktop sharing. I went to the BigScreen talk at Oculus Connect and yeah it's pretty clear that their focus at the moment is on people watching video content together in VR but we have used it with some success for code reviews (desktop sharing is key there, I can jump between Visual Studio and the Unity Editor).
Your keyboard sounds interesting and I'll have to check it out but to be honest it's not a big selling point for me as a touch typist: I don't have any trouble using my keyboard in VR. We do have some non touch typists on the team though and it's not always convenient to put your Touch controllers down to type so I can see it being useful.
My ideal VR collaboration app would support at least solid desktop sharing, well implemented VR whiteboards (including annotation on the shared desktop), and 3D sketching like Quill / Tilt Brush. We use whiteboards and 3D sketching in Rec Room but they're quite primitive. The sketching doesn't have to match a dedicated sketching app but should be better than Rec Room.
It would also be useful to be able to easily import 3D assets for review, Dash support for GLTF is looking like a good implementation of that. Custom environments would also be useful for us so we could do collaborative design of environments for our own VR app.
You should give Hubs a try, it's a WebVR based collaboration and communication tool we're working on at Mozilla. Screen sharing is behind a feature flag (and not quite working atm) but you can bring in GLTF models, images, and do simple 3d drawing. Also, by being the browser, it avoids a lot of the installation and coordination steps of native apps, and works on phone and PC.
I was actually at that session as well! It was a bit surprising for me to hear that they're taking such a strong stance with regards to shared movie watching, but generally it's always felt that Bigscreen has been tuned for gamers / entertainment type use cases.
Totally hear you on being comfortable touch typing while in VR, but I think this is a pretty big barrier for a lot of users who are less comfortable in VR. In our early experiences demoing Dream, we noticed just how overwhelming going into VR is for people who have had little or no exposure to it. We used to have computer keyboard pass-thru, and that could be something we add back as we continue to iterate and make the experience better.
In terms of desktop sharing - we used to have this capability, and it's still in the build but disabled. We pulled it back due to some inherent usability issues that we're working on as well as performance limits on low-end machines.
Annotation (whiteboarding on shared content or a white screen) is next up on our road map - we just didn't have enough time to get it into the initial release - so I'm excited to hear that's something you'd be looking for. Similarly, 3D model import / review is something we're about to tackle as well. One of the big things we're excited about exploring is using Chromium for this rather than forcing every client to download what could be a big file, or pushing performance limitations onto a varied set of machines. Instead, we'd use WebRTC to stream the content in a way that provides a 3D review experience for all clients without those performance limits.
On environments, we agree as well - right now we have one environment, with two others in the pipeline. In the future, it'd be great to allow 360 stereo videos to be used as the environment, or let teams customize their environments if they've got the in-house capabilities to do so!
Thanks for the feedback, and hope you get a chance to try Dream out and give us some hands-on impressions if you get a minute as well!
I've always thought that conferences / education / seminars were the big sell here. Productivity never seemed like a starter, simply because work is such a cultural thing with all of its own stresses, and getting people to adopt new techniques is like pulling teeth. Students and conference-goers who want to save the extremely big bucks that travel, classrooms, and conference halls can cost will see this as a huge win.
Also, the isolated / focused environment of VR could be a big plus for learning as it blocks out so many distractions. I'd love to see a study done around that.
I'm also extremely excited about education applications for VR, especially those that benefit from real-time communication. For example, learning a foreign language from a tutor who lives in the language's country of origin - and being able to interact with them naturally, with all the nonverbal cues that are crucial when learning a language alongside someone who has mastered it.
At a slightly higher level, I think VR can unlock a lot of "centralization of expertise" use cases. Basically, there's some resource that is naturally distributed but gets forced into one place by the way the expertise is consumed - think call centers or tutoring. If those people could operate from wherever they happen to be, while serving customers wherever they happen to be, it would be super useful for both the providers of that expertise and its consumers.
Definitely excited to see what kind of things applications like Dream can enable!
Great work on this. I lead a remote team, and if the vision here were fully realized and available on the Oculus Go/Quest, I’d trade some travel budget to equip the entire team with it.
You asked what features it would need. My use cases are basically what we do when we bring the team to one ___location for face-to-face work. While there are use-cases for typical distributed work, there aren’t many things we can’t do with the tools available: Slack, Google Docs, email, git, etc.
What is missing that only VR can solve?
Generally:
Space. No matter how many monitors we place on our desk, we run out of real estate quickly. Collaborative planning and problem solving often require space to visualize our thoughts and ideas.
Social presence. When human problems overshadow technical problems (i.e., all the time), the feeling of being fully present to one another is necessary.
Specifically:
A whiteboard. In VR, you could make something far better than the $10k devices out on the market today (I’m making that number up, since “request a demo” is the standard price).
Emotionally engaging avatars. Seeing a head turn isn’t enough; we need to be able to see another person’s unique facial expressions in order to connect. With goggles covering half the face, I don’t know if this is possible, but ML research in 3D facial reconstruction from 2D photos has shown a lot of promise. Perhaps in-painting combined with reconstruction would do the trick.
Beyond these two items, I see a large space for reimagining UI in VR. Unlike a monitor and mouse, VR knows generally where you are looking, and in the case of Oculus, what you are saying. Voice commands that are contextual to eye gaze (tracking would be better) could be combined with VR’s version of a right-click context menu as a HUD. It doesn’t need to be as crazy as Iron Man, but given a limited number of options, voice commands seem an ideal fit for VR.
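To make the idea concrete, here’s a rough sketch of the dispatch I have in mind - resolve the spoken command against whatever the head ray currently hits (all names invented for illustration):

    #include <string>
    #include <vector>

    struct GazeTarget {
        std::string id;
        std::vector<std::string> verbs;  // commands this object understands
        bool Handles(const std::string& verb) const {
            for (const auto& v : verbs) if (v == verb) return true;
            return false;
        }
        void Invoke(const std::string& verb) { /* run the context action */ }
    };

    // Called when the speech recognizer emits a final transcript; 'gazed'
    // is whatever the per-frame head-ray raycast last hit (may be null).
    void OnVoiceCommand(const std::string& transcript, GazeTarget* gazed) {
        if (gazed && gazed->Handles(transcript)) {
            gazed->Invoke(transcript);  // e.g. look at a panel, say "close"
        } else {
            // Fall back to global commands, or pop a HUD menu of options.
        }
    }

Gaze narrows the vocabulary to a handful of verbs per object, which is exactly the regime where speech recognition is reliable.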
A few years ago, I prototyped a HUD/Voice UI as the central feature in a VR browser, but I found resolution on my DK2 to be too limiting. Maybe the time is right to try again. I’d love to see someone try!
Thanks for the kind words! We are totally on board with getting Dream onto the upcoming Quest, and standalone VR headsets in general. We've had a bit of success in getting businesses to install VR setups in a conference room, but when this stuff is a $400 standalone device that doesn't cannibalize your work computer or phone, I think it completely changes the viability of using VR for productivity / collaboration (especially on the go).
Thanks for the feedback on what kind of stuff you'd like to see! Deeply agree on the whiteboard thing, and it's one of the next features we want to tackle on our road map. In particular, we're excited about the ability to annotate either a white screen (normal whiteboard) or content that is being shared. The sensitivity of the controllers plays a big part here, but even for something like marking up a presentation or document this could go a long long way.
I agree that VR headsets have a ways to go in terms of detecting facial gestures and emotions. I've seen some really cool tech that puts galvanic response sensors in the facemask, which could do everything from detecting overall eye direction (up, down, etc.) to emoji-level gesture detection. I think over time these features will become standard in most HMDs. In the meantime, we tried to walk the line between uncanny / creepy and useful / engaging. Perhaps it doesn't come through as well in the video, but when you're in VR, just the head motion / hand position goes a long, long way. Also, we built our entire stack to ensure we're able to send this data as quickly as we get it from the hardware and keep our latency down; this makes for a surprisingly engaging experience, and we'd love to hear what you think if you get to try it sometime!
We are keen to explore adding voice and contextual gaze-based actions. We had a bit of this built before launch, but the feature set we launched with didn't really use much of it. With regards to voice, we are planning to integrate with something like Google Voice Engine, but wanted to make sure we first had a good text entry method for the things voice has a really hard time with - URLs, passwords, and usernames - since those sit in the critical user flow for logging in / creating accounts and selecting what kind of content to share. We also added Dropbox / Google Drive integrations to make bringing in content more fluid and intuitive, so you can kind of see where we've been prioritizing - but we definitely have a long way to go and more to build!
- You don't appear to be using SDK layers for your screens, they would probably help with text readability.
- UI isn't very discoverable and I couldn't find help.
- I couldn't find a way to zoom / scale a web view, this would help with text readability on a site like HN if I could set the zoom to say 150%.
- Related but different, I couldn't find a way to move / scale a panel.
- I couldn't find a way to rotate my view and I spawned at an awkward angle so had to stand at an angle to my cameras to face the large screen. I'd normally expect to find snap turn on the thumbstick in most apps.
- The keyboard works quite well, nice job. I feel like a Swype style keyboard could work even better for VR though.
- Keyboard behavior was a bit unintuitive on sign up: I tried to touch the next input field to select it and my keyboard disappeared; I didn't immediately notice the next / previous buttons on the keyboard itself.
- There's not a lot to do initially as a lone user. Some kind of onboarding / tutorial would be good and it would give a better initial impression if there was a bit more to do. I know the focus is on collaboration but I think you would benefit from a hook for individual users trying it out.
Thanks for taking the time to leave detailed feedback like this. We need as much of it as we can get.
A few of your points come down to discoverability / help, and you are correct - we agree that before the app comes out of early access we have to drastically improve both. Once we understand that the core functionality is serving its intended purpose, we will turn our attention to onboarding, help, and discoverability.
It's also fair to point out that pixel density in the headsets isn't really great yet. We are of the opinion that legibility is a problem that will solve itself in the near future. User frustration might force us to reconsider that opinion sooner rather than later.
Lastly, with regard to the position of UI, it is something we have worked on quite a bit and there is still plenty of room for it to get better. We have shied away from infinitely adjustable UI size and positioning and instead tried to make something that positions itself automatically. Right now, the UI positions itself based on where your head and hands are and how long we think your arms are. The implementation is pretty naive and we intend to improve it over time. This might be another area where we have to get more mechanical to decrease frustration.
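To make "positions itself automatically" concrete, the core of it is conceptually something like this - an illustrative sketch with made-up names, not our actual implementation:

    #include <cmath>

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
        float Length() const { return std::sqrt(x * x + y * y + z * z); }
    };

    struct Pose { Vec3 position; Vec3 forward; };  // forward assumed normalized

    // Arm length can be estimated from the farthest hand excursion seen so far.
    float EstimateArmLength(float current, const Vec3& head, const Vec3& hand) {
        return std::fmax(current, (hand - head).Length());
    }

    // Put the panel at ~70% of arm reach, slightly below eye level, facing
    // the user, so both reading and touch interaction stay comfortable.
    Pose PlacePanel(const Pose& head, float armLength) {
        Vec3 pos = head.position + head.forward * (armLength * 0.7f);
        pos.y -= 0.15f;                     // drop a bit below the eye line
        Vec3 toHead = head.position - pos;  // orient the panel at the user
        float len = toHead.Length();
        return { pos, { toHead.x / len, toHead.y / len, toHead.z / len } };
    }

The arm-length estimate just ratchets up as we observe the hands, which is crude but avoids asking the user to calibrate anything.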
Yeah, headset resolution is limited but that's why it's important to make the most of what you have. As I understand it the SDK quad and cylinder layers bypass the lens distortion pass and effectively analytically ray trace the optimal sample point for direct display. Carmack is always telling people to use them for anything text heavy. I haven't done a side by side comparison but this is probably why Dash text is a bit more legible than yours. Future headsets will have higher resolution but we're years away from high enough for this stuff not to matter.
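For reference, submitting a panel as a quad layer with the Oculus PC SDK looks roughly like this - a minimal sketch based on the public ovrLayerQuad API, with swap-chain setup and frame timing omitted:

    #include <OVR_CAPI.h>

    // Build a world-locked quad layer for a text panel; the compositor
    // samples its texture directly at display time.
    ovrLayerQuad MakeTextPanelLayer(ovrTextureSwapChain chain,
                                    int texWidth, int texHeight) {
        ovrLayerQuad layer = {};
        layer.Header.Type  = ovrLayerType_Quad;
        layer.Header.Flags = ovrLayerFlag_HighQuality;
        layer.ColorTexture = chain;                   // panel is rendered here
        layer.Viewport     = { {0, 0}, {texWidth, texHeight} };
        layer.QuadPoseCenter.Position    = { 0.0f, 0.0f, -1.5f };  // 1.5 m ahead
        layer.QuadPoseCenter.Orientation = { 0.0f, 0.0f, 0.0f, 1.0f };
        layer.QuadSize = { 1.2f, 0.8f };              // meters in world space
        return layer;
    }

    // Submitted alongside the eye-FOV layer, e.g.:
    //   ovrLayerHeader* layers[] = { &eyeLayer.Header, &quad.Header };
    //   ovr_SubmitFrame(session, frameIndex, nullptr, layers, 2);

Because the compositor samples the panel texture directly, the text avoids the double resampling it gets when rendered into the eye buffers first.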
As an experienced VR user my expectation at this point is that I can move panels around with my hands and ideally scale them too. Doing that well and in a way that doesn't trip up novice users isn't trivial though, Dash still has some work to do there.
I don't mean to be overly critical, my first VR app (for the DK2!) was written from scratch in C++ so I know how much work goes into developing something like this without an engine. I want this type of app to succeed so I can use it though so the feedback is intended to be constructive!
I don't take this as overly critical. We need and welcome feedback.
Looking at layers for text is something we will definitely do. We are happy with SDF text in our UI, but text inside of Chromium is miles from acceptable and will continue to be a focus.
As far as scalable and user positional UI, we currently hold the opinion that it doesn't offer enough value to compensate for the complication that it introduces, especially for novice users. At the same time, we realize that is a contrarian point of view and user frustration may force us to change it. We have to remain open to changing opinions like these.
Agree with Doug's response, but wanted to quickly say thanks for pointing us at the SDK layers thing - will dig into it!
We spent a fair bit of time building out a pretty unique node-based pipeline architecture, where the HMD is effectively just a sink node. We made an attempt to make text / content more legible by way of an FXAA implementation, which has limited performance impact compared to something like MSAA - but it definitely can get a lot better. We'll dig into that stuff, and hopefully we can improve it further!
We were definitely anticipating the movie - we saw it as a team. We had hoped it would do more for general recognition and understanding of VR, but I don't think that was the effect. Obviously that was in no way the job of the filmmakers; they are there to make an entertaining movie, not advance the adoption of VR.
I will say though, the book itself was a big part of me personally deciding to get into this project after I left my last one.
Watching the video I was struck by my own experience of creating a virtual art gallery (of real world photography and 3D renders) in Second Life.
It felt really amazing when I was alerted by a sensor in the gallery that someone was visiting it and I could teleport back to the gallery to meet them. Their ___location in the virtual space (which pictures they were stood in front of) said something about the pictures that they liked. I could read something about their personas from the avatar they were using (especially in conjunction with other scanning tools). My own little gallery was just one of many and other organisations and groups created much more impressive interactive environments (admittedly a lot of them seemed to be for various forms of role-play, some very unsavoury).
The promo video shows the participants in effect in a completely conventional conference room - one screen and chairs around a table. The wider space doesn't seem to contribute functionally at all - it's a pretty backdrop but doesn't display more info that contributes to the meeting. So I'm curious - could this sort of capability be used to create more dynamic interactions - or are we limited by the tech (tethered interaction by seated people) to more constrained situations? (please don't get me wrong - I'm supportive of the concept - but I'm encountering pushback from colleagues and customers who don't see the potential)
I will say that we at Dream hold a pretty contrarian point of view on this. So let me start by saying it is an opinion, but one we hold pretty strongly. I can talk a little bit about locomotion, but this sort of philosophy applies to many of the design decisions we made, for better or worse.
Dream doesn't allow for locomotion by design. Dream is meant to be a place where people meet to be productive. The environments are intentionally pretty but not distracting. The focus is on interaction with the other participants and the shared content. We feel that removing locomotion and reducing dimensionality is how we will make the interactions with Dream simpler, especially for new users. Mechanisms like teleportation are super fun and certainly add to immersion and are the right choice for all sorts of VR experiences. However, Dream has been built from the perspective that users are here to collaborate and then go back to real life. In that context, something like teleportation is fun and novel the first time you use it, but the 10th time, we feel like most users would just prefer a menu. The overall idea being, reduce dimensionality to increase precision and simplicity.
I'm happy to hear a critique of this philosophy. Ultimately we have to create software people love to use, and we certainly understand that we might be proven wrong about this.
I work in finance and talked to somebody who implemented a Bloomberg terminal wall in VR. This would be a great application for VR since you never can have enough Bloomberg terminals open and usually never have enough screen space.
However, he told me that VR can't be used for anything productive right now because of many problems. For one, only very few people can wear a VR headset for 8 to 10 hours straight without getting serious headaches and dizziness. The resolution has to be at least an order of magnitude higher for meaningfully sized fonts to be properly readable, and the headsets have to be an order of magnitude lighter. And if you want to do more than a 2D wall displayed like a canvas in 3D, you have to solve the problem that your eyes automatically try to focus differently at different depths, which is another major source of headaches.
All in all I was convinced by him that the VR technology is at least 2 generations behind what you would need for serious work. Until then all kinds of software, SDKs and Hardware will change dramatically. Hence investing in VR productivity software development right now is a complete waste of time and money.
> Hence investing in VR productivity software development right now is a complete waste of time and money.
Do you think advances in technology just happen by themselves? We need people to invest time and money RIGHT NOW to get to better software, hardware, and mainstream adoption.
I think applying VR to finance use cases like the one you describe is definitely exciting, and it will become more realistic as the headsets improve in comfort and resolution - per your comments. We actually explored some similar use cases, for example customer service - where a VR HMD could in effect replace the 2-3 monitor desktop + headset setup that customer service reps commute to call centers to use every day.
However, these "full day" applications are definitely a few years out. We've seen Dream sessions where people have little issue going up to 90 minutes, with something like 45 minutes on average - effectively the duration of a meeting. That's where we're focusing our efforts: the intersection of productivity and collaboration.
From the perspective of a financial institution investing in these technologies to change the way people access financial data all day, it's not here yet. However, to get there, we need to start building these platforms today. That has been our intention, and we think that even in the meantime, being able to hold a remote meeting with 4-6 people from all over the world, and to bring in content like presentations or generic web pages, provides a level of utility out of reach of even some of the $300K teleconferencing solutions out there.
Regardless of the hype the industry has received, these headsets are still in their infancy. One of the big steps is about to be taken by the upcoming Quest headset, due to launch Spring 2019 from Oculus. The big step this HMD takes is cutting the umbilical cord to the computer while providing the first fully 6DOF standalone setup (both HMD and controllers). They even threw in a healthy resolution bump, which we're excited about. This could well be the 'BlackBerry' moment for VR.
I'm sorry, what's the value added here? In the demo video (on Vimeo) the team is going over some work tasks in Trello. Considering the bad HMD resolution and the general clumsiness of doing things such as typing, I'm really unsure what the added value is of holding a meeting in a Dream VR session as opposed to a regular desktop/WebEx type of thing, possibly with desktop sharing and voice/video conferencing. I work extensively with VR and I have a hard time seeing the catch here. Thanks!
It sounds like you have easy access to VR HMDs, so if you get a chance to plug into an Oculus Rift any time soon, I recommend you try it out - and we'd definitely love your hands-on feedback.
We've done a fair bit of jiggering to make sure a majority of web based content is both legible and usable - and our UI has been built to try to be as intuitive as possible, and eliminate a lot of the bumbles that we too associate with many VR experiences.
It's a free download, and you don't have to create an account if you don't want to - once you download, you'll be presented with an account sign-up / log-in form where the keyboard can be used and played with a bit. We also use Chromium for our entire login / account creation flow in VR, so you can get a taste of what that feels like as well. If you want to try something like Trello out, just create a throwaway account and never verify the email - then you can pull up a website like Trello or NYT and assess the usability and legibility.
I think that if you're coming from a place of comparing this to existing 2D based collab tools like Skype / Zoom etc you'll have a hard time seeing the benefit, but if instead you try to look at how those tools are insufficient compared to a real-life meeting you might see where we fit. Our goal is not to replace 2D based methods, but to allow for a level of presence previously only possible with in-person interactions. This shines in particular in situations where you're meeting with three or more people along with content at the core of the meeting.
Hope you get a chance to try it, and would love to hear what you think and how we can make it better!
Thanks for the reply. Incidentally I've also worked on a bunch of problems you guys must have had related to the UX in VR. For example I've also implemented a bunch of virtual keyboards ;-)
Are you using straight CEF, or have you improved the compositor to composite directly into a texture? IIRC CEF only provides the composited web page as a bitmap, so you're going to have to do repeated texture uploads, which is going to be a drag.
Awesome to hear that! Would love to check out your work if you have a link / video or anything like that!
We actually forked CEF - and had to make a few changes to allow for the integration we needed. We do use OSR mode and update a texture that way - although we need the CPU-side buffer anyway, since we're sending video frames across the peer-to-peer mesh, so even if we went straight to GPU we would still have to download the buffer from the texture.
It's a drag, but there are a number of techniques to improve the performance. Resolution is one great lever: the resolution of the HMD makes a high browser resolution kind of useless, so reducing it also reduces the pressure on the GPU. We can also limit frame rate based on the kind of content being shown, and leverage dirty rects to avoid re-uploading content that isn't changing. Since we're running multiple browser tabs, the latter technique isn't as useful for any single page, but it makes things more performant when a user is doing multiple things, like watching a video on the shared screen while scrolling through Wikipedia or a news site like NYT.
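For anyone curious, the off-screen path we're describing is the standard CEF OSR pattern - a simplified sketch (not our fork) looks roughly like this:

    #include "include/cef_render_handler.h"
    #include <GL/gl.h>

    class PanelRenderHandler : public CefRenderHandler {
    public:
        GLuint texture = 0;
        int width = 1280, height = 720;  // deliberately modest resolution

        void GetViewRect(CefRefPtr<CefBrowser> browser, CefRect& rect) override {
            rect = CefRect(0, 0, width, height);
        }

        // CEF hands us a BGRA bitmap plus the rects that changed; only those
        // regions get re-uploaded to the GL texture.
        void OnPaint(CefRefPtr<CefBrowser> browser, PaintElementType type,
                     const RectList& dirtyRects, const void* buffer,
                     int w, int h) override {
            glBindTexture(GL_TEXTURE_2D, texture);
            glPixelStorei(GL_UNPACK_ROW_LENGTH, w);  // rows span the full buffer
            for (const CefRect& r : dirtyRects) {
                glPixelStorei(GL_UNPACK_SKIP_PIXELS, r.x);
                glPixelStorei(GL_UNPACK_SKIP_ROWS, r.y);
                glTexSubImage2D(GL_TEXTURE_2D, 0, r.x, r.y, r.width, r.height,
                                GL_BGRA, GL_UNSIGNED_BYTE, buffer);
            }
            // (Reset the UNPACK state afterwards in real code.) The same
            // CPU-side buffer can also feed WebRTC video frames directly.
        }

        IMPLEMENT_REFCOUNTING(PanelRenderHandler);
    };

With a modest surface resolution and dirty-rect uploads, the per-frame cost stays manageable even with several tabs open.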
Up until we consolidated the build for Oculus release, we supported OpenVR and still do in our code - just not the Oculus build. We've gotten a lot of interest in the Vive build in this initial release, so might look to reintroduce that. Before pushing to Oculus, Dream would just launch off the desktop and detect which HMD you had plugged in and then launch the appropriate platform. Shouldn't be a ton of work to bring it to Steam!
How feasible is it for users to have haptic gloves for typing, instead of using controllers? I remain positive on VR productivity tools for the future, but I think we have to get flexible and creative on hardware. Personally, I would love to have collaborative meetings with my colleagues worldwide, even just to demo my modelling ideas on a whiteboard, which I think would be tremendously helpful (I believe the FB keynote last year voiced the same sentiment). I fully agree about the hardware limitations at the moment, but I certainly don't think investment and work now is a waste of time. To push this area forward, we also need to find compelling experiences that are unique to VR, like remote presentation rehearsal or collaborative whiteboard brainstorming sessions for 3D design. Wearing a VR headset for 8 to 10 hours of work straight is not the answer I look forward to, at least not for now. What VR is strong at, to me, to date: minimising the limitations imposed by physical distance and fading memories of past experiences, as well as its ability to create limitless imaginary worlds and boost multi-dimensional communication.
Early on we actually built out support for Leap Motion in Dream, and this was super cool because of the networking stack we built - we were able to send all 20 points per hand in real time at 90 FPS with low latency. It was really an amazing experience, but there were issues we simply couldn't overcome - like wrist occlusion, where your hands would suddenly fly off into the distance, or your hands not doing what you intended due to incorrect data from the sensors. As a product-minded company, we had to make the hard decision to hold off on this - at the end of the day, users don't care whose fault a bad experience is; they just uninstall your app and never come back.
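To give a sense of the data rates involved (an illustrative layout, not our actual wire format):

    #include <cstdint>

    #pragma pack(push, 1)
    struct HandPosePacket {
        uint32_t userId;
        uint32_t frameId;        // ordering / drop detection on an unreliable channel
        uint8_t  hand;           // 0 = left, 1 = right
        float    joints[20][3];  // 20 tracked points, xyz in meters
    };
    #pragma pack(pop)

    // 20 * 3 floats = 240 bytes of joint data, ~249 bytes per hand per frame.
    // At 90 Hz for both hands that's ~45 KB/s per user before compression -
    // small enough to send unreliably and simply drop late frames.
    static_assert(sizeof(HandPosePacket) == 4 + 4 + 1 + 240, "packed layout");
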
Really excited for new HW and capabilities to become available commercially. We built out our keyboard to be effective without anything but what's currently available (6DOF HMD with 6DOF controllers), and we'll continue to expand support for commercially available capabilities. Maybe it's an unorthodox perspective, but we really only want to ship and represent capabilities that any user can attain easily - and not tease things that are soon to (but may not ever) come.
Agreed, the name collision is unfortunate - we actually incorporated before Daydream was announced. If there are real issues / conflicts with the name, we're the smaller guys here (our team is just 4 people), so we'll obviously make the corrections that we need to make over time!
Let me know if my response to the Bigscreen question above is sufficient, or if you have other specific questions about anything. Would be happy to dig deeper into anything. Super excited to share the hard work of our team - we've basically been quietly coding for years now, so this is the first time we really get to talk about what we've been up to.
Isn't resolution the big killer for all of these applications? Even in the video, Trello looked really blurry - I can only imagine what it would look like on the Rift instead of a Vive Pro, for example.
Resolution is a big deal for sure, but because we're using Chromium instead of desktop sharing, we're able to set the rendering resolution to keep most content sources legible. Generally, we see people trying out Dream, bringing up CNN or NYT, and having little issue reading articles. Sure, if the HMD resolution were better we could do a lot more - but we set the parameters to optimize for content viewing, and also added FXAA to help without hurting performance on low-spec machines.
Dream is currently only available for Oculus Rift - and the video was actually shot with an in-engine camera that we developed and captured on a mirror pipeline at 4K, so I think the blurriness in the video may be a streaming artifact. Here's a link to the Vimeo, which might let you watch it at the 1080p resolution we scaled it down to: https://vimeo.com/291432708/4c32095226
We're excited to get Dream onto other HMDs, especially the mobile standalone ones coming soon - really great that the Quest is going up to 1600x1440 as it will make use cases like ours work even better!