In their defence (not the defence of the WYSIWYG box itself) - they quite likely have done lots of user testing that has shown clear desirability, improved value to the user, improved usability, etc.
You have to remember Slack's target persona is probably no longer the engineer (if it ever was) - it's more likely a much less tech-savvy employee who finds WYSIWYG editors very handy for creating rich text.
I guess my point is - I'd wager this wasn't railroaded through by some senior stakeholder that no one can speak up to, but was probably a decision made by a product team who have the data to back up their decisions.
Now if the above isn't true (and perhaps the opposite is true) - then agreed, those are the signs it's time to leave.
> they quite likely have done lots of user testing that has shown clear desirability, improved value to user, improved usability
I have never worked at a company that did user testing, or if they did, it was always conducted or interpreted in a way that backed up the designer's opinion. I don't think I've once in my entire 40-year career seen a designer test with users, find out something was bad, and change their design based on the test.
Has anyone else?
To be clear, I have seen designers test things themselves and redesign, but I've never seen them test with users and redesign. I've also seen them change a design, put it out in a limited release, then claim "we didn't get any/many complaints so it must be okay" without considering that the majority of users never complain (they don't know how, can't be bothered, never considered it might be useful, or clicked the feedback button but it never actually made it back to the designer - or some combination of all of the above).
I agree that there is a temptation to not do it, especially for smaller changes, especially if time is tight, especially if you are a smaller company, and all of that is certainly a problem.
But no user testing? Not changing the design? We just recently drastically changed the design of a new product we are developing because it failed initial user testing (with some of our clients, who came in to test a paper prototype).
What followed were also several rounds of design critique sessions with the revised design (bringing in internal people who had nothing to do with the design and asking them for constructive feedback, without trying to solve whatever they find during that meeting), and in December we will bring those original clients back in and sit them in front of a working prototype.
We are a small company and I do agree that we do this far too infrequently but from my viewpoint as someone who makes the design I couldn’t even imagine not reacting to people who clearly have trouble with the design. Because I did the design. I know that it’s made of thousands of little decisions, thousands of little trade-offs you have to make. I know that it’s a hard problem and that it’s easy to make mistakes or to fundamentally misunderstand something about the mental model the users have. So any information about what works and what doesn’t is extremely valuable.
> But no user testing? Not changing the design? We just recently drastically changed the design of a new product we are developing because it failed initial user testing (with clients of us who came in for user testing using a paper prototype).
Your company is the exception. Most companies do no user testing. The closest I came was working at a company that did user interviews and ran some mock-ups by them before turning the work over to engineering. Once it was in our hands there was no change based on user feedback, because there was no user feedback before release. Not because we weren't willing to, but because... they just never did it.

Most folks I've spoken to do no user testing whatsoever at their companies.
Me neither. I am currently of the belief that most of that "data driven" product development is just bullshit, and not really even data driven. It's too easy to plug in Google Analytics and proclaim you're "data driven". And if all you do is telemetry and A/B tests, it's way too easy to make them favor whatever you want them to favor, either purposefully or accidentally.
I haven't seen much evidence that would say otherwise, but there's plenty of circumstantial evidence favoring my belief. Like this case.
We develop simulations for events and often watch how delegates interact with the UX to determine whether things need amending... we never called it data driven, even though we do track scrolls and clicks within the software to determine user flow and pinch points.
The reason the Windows 95 interface was so much better than what came before it was because Microsoft did loads of user testing and changed their design heavily based on it.
I completely agree. There have been some improvements in the design (XP and Vista's Start Menu changes, XP and 7's Task Bar changes) but they're incremental ones, and Microsoft's attempts at big changes have not gone well.
I work in the UX team for a major online retailer in the UK. We spend a large part of our time testing and validating our designs with real customers - not much goes live without us being sure it's the right thing. If something doesn't work for people, even if we think it's great, then it's gone.
How many times has that actually happened though? I talk to designers all the time who say some variation of this but have literally never done any of those things.
Depends on the task, but if it's for a big project we'll go through multiple rounds of usability lab testing before a design gets built.
Then, after launch we'll do rounds of optimisation based on the data we get back. This bit doesn't happen as much as we'd like as newer projects take priority, but it does happen and we're working on ensuring the Build > Measure > Learn loop is an integral part of the process.
We're lucky as senior management understands the benefit of testing and iteration. One place I used to work, the chief exec thought they knew everything, so didn't understand why we wanted to research and test stuff all the time. So we ended up having to do what they wanted, rather than what we knew the customer needed.
> I don't think I've once in my entire 40 yr career seen a designer test with users, find out something was bad, and change their design based on the test
Too, too true. I have seen designers proceed to "educate" their test users (sometimes for weeks at a time) as to why the users' preferences are wrong and the designer's are right.
I work for a relatively large global tech company (100k+ employees), and our UX team does this all the time. First they come up with a design they _think_ will work, then do user testing, and often the user is confused or doesn't like a specific thing - it could be a button color or an entire flow - and we change it. Then repeat.
I think what many UX teams are doing is (or comes across as) the opposite:
1. Come up with a design they think will work
2. Do user testing looking for confirmation that it works.
3. Change things (based on user feedback) in their design until they find confirmation.
I've come across this, and I wouldn't say it's data driven. It's data supported, if anything, but it sounds like the initial design/idea still comes from within, and there's a chance that it's down to the team's taste, interests and convictions. Or worse, a trend. We tend to pitch our ideas along with the data to support them, so it's natural to seek confirmation as part of our normal workflows, but that's very different from making design decisions based on telemetry, user feedback, etc.
Now, it's possible that I'm completely wrong and your UX team actually does that. For instance:
1. You get support tickets from users about a missing or difficult to use feature.
2. This prompts the team to act. Design decisions are made, user testing happens, the feature or changes are released. Once released, you notice from your telemetry data that the feature is rarely used by the wider audience, though perhaps some of its UI items are used more than others.
3. UX team goes back to the drawing board to try to improve visibility, and does user testing.
Success rate is usually higher in that situation since you know users actually want that feature or change, and it's just about getting it right.
In Slack's case, I'm left wondering if any users actually wanted this change.
We did the whole bit: Hired a special company that had rooms with one-way mirrors so you could watch the users use the thing.
It was brutal.
One person went to this really cool subpage, and we were all so excited to see what they were going to do. User waved the mouse around for a few seconds and then clicked back. We were jumping up and down in our seats, just losing it. Our special thing was so cool (we thought) and this person literally couldn't begin to understand what it even was.
It's amazing to see a game design fundamentally fall apart when users get a hold of the product and then see the game designers tearing their hair out. They do usually double down and call the gamers dumb or something lol.
Where I work we do a small amount of user testing to validate our design choices. We recently planned out a release's worth of work to improve the user experience because we missed the mark with our design.
If we didn't do this we'd lose to the competition.
When I worked at EA we regularly performed user validation, we had a whole room set up for proctored user testing. Hired professionals to conduct the testing. We also invested a heap into A/B testing and had a whole team dedicated to tracking this and analytics in general. This was just for marketing/launch web sites.
I've been working in the industry for 20 years, and while the norm is as you describe, there are certainly plenty of exceptions, especially when your product lives and dies by its UX.
The company I work for has done this. We constantly A/B new designs, and if the new one loses out to the control, that's it. Back to the drawing board, it's gone.
That doesn't mean everything necessarily gets chucked out immediately. Sometimes we'll then test individual parts of a design (failed or otherwise) to see if those do better on their own. And complete redesigns with similar aims do sometimes get created (but they're usually completely different in colour scheme, layout, text, etc.).
We also do user testing for the same stuff. That too is more important than a designer's opinion would be.
I'm a designer and I do user testing for every feature in the app I'm working on – usually using Sketch, or Framer X for cases where better prototypes are needed – and in almost all cases the design changes because of it. It's very hard to get it right on the first try, and I can't believe I'm the exception here.
Funny thing too: because I usually do the implementation of my own designs (react-native app so not so big of a barrier), I usually discover limitations of the original design and have to tweak it to better fit the medium.
I'll back you up on that. I've had 20 years in the industry, doing mostly front end work across web, mobile, desktop, automotive & TV. Mostly contracts, about 40 clients - mostly small but also some big ones you've heard of. Only three of them did (minimal, small scale) user testing, none of those made any major changes as a result of user testing.
> I have never worked at a company that did user testing or if they did it was always done in a way or interpreted in a way to back up the designer's opinion
My teammates just came back from an on-site user test, and the lead designer's own report describes how some features which he had pushed for were not working out for the users.
We have always done user tests, and we have always applied the results if they were meaningful, from balance to UI tuning to, sometimes, scrapping entire features. There are plenty of companies that do this, and individual designers that really care about the user more than their own ideas.
Most of the established large tech companies run user experience research sessions while iterating on mocks, prior to any A/B testing. My current and last company both followed this practice.
I happen to know several instances of Google, Amazon, Facebook, and even Apple not doing this. But I guess "established large tech companies" is a big list and GAFA is only 4 companies. I realize that these instances are not 100% of all the work those companies do. Would be nice to read more articles from them on their failed tests and what led to their current designs.
> probably a decision made by a product team who have the data to back up their decisions
lol... I don’t know how, but this wool has been pulled over everyone’s eyes. These “data driven” decisions are often anything but. The people making them usually don’t even have sufficient background in statistics to be capable of making them. They just plug and chug in some NHST or A/B testing framework, making all kinds of test errors or even outright cheating, since their job incentives, much like failed research incentives, require a constant stream of new positive results that causally drive growth and engagement. Since they “have to” find these things, anything that can be politically argued into the product will be made to “have the data to back it up” (even if it doesn’t really).
The bigger the company, the worse this effect gets.
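The "test errors" point above is easy to demonstrate. One classic mistake is peeking: repeatedly checking an A/B test and stopping as soon as the result looks significant, which inflates the false positive rate well above the nominal 5% even when the two variants are identical. A minimal simulation sketch (the function name, conversion rate, and peek schedule are illustrative assumptions, not from any real testing framework):

```python
import math
import random

def peeking_false_positive_rate(n_experiments=2000, n_users=1000,
                                peek_every=100, z_crit=1.96):
    """Simulate A/A tests (both arms identical, so any 'win' is a false
    positive) where the analyst peeks every `peek_every` users and stops
    as soon as a two-sided z-test crosses the z_crit threshold."""
    false_positives = 0
    for _ in range(n_experiments):
        a_succ = b_succ = 0
        for i in range(1, n_users + 1):
            # Both arms have the identical 10% conversion rate.
            a_succ += random.random() < 0.10
            b_succ += random.random() < 0.10
            if i % peek_every == 0:
                p1, p2 = a_succ / i, b_succ / i
                pooled = (a_succ + b_succ) / (2 * i)
                se = math.sqrt(2 * pooled * (1 - pooled) / i)
                if se > 0 and abs(p1 - p2) / se > z_crit:
                    false_positives += 1  # declared 'significant' early
                    break
    return false_positives / n_experiments
```

With ten peeks per experiment, the observed false positive rate typically lands far above the 5% the z-threshold is supposed to guarantee, which is exactly the kind of error an analyst without a stats background will never notice.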
IMHO they missed the fact that power users would have preferred the old behavior, and they didn't provide the opportunity to switch between the two behaviors.
This. Power users (normally engineers) drove Slack usage at my last company. I'm now in an org that uses Teams and all the engineers want to use Slack. Likely won't happen but it's important to note that this feature isn't just annoying a small segment of users, it's annoying the small segment that are most vocal about adopting Slack. And who due to economic power arguably have more influence.
The problem with that is Slack no longer needs a small segment of vocal users. They used to, but now they're a huge established company with plenty of service contracts across tons of industries.
I'm not sure how Oracle really squares with that theory. From what I've seen, courting executives can work pretty well for companies widely hated by users.
They can be vocal about getting people off it all they want; it won't make them effective at it, especially if most people don't understand why power text-editor users hate WYSIWYGs. Once contracts are inked and the whole company gets hooked on something like Slack, good luck getting it back out.
"engineers" drove your company to spend money on a chat tool from a company that had leaked private chats, gave no real control over the interface, and used a closed protocol? Now those same people are complaining about it?
Sure, but at least make this a configurable option.
Right now, they just basically blew away something that worked really well for a lot of people and are demanding their captive audience adjust.
Whoever at Slack pushed this and threads has a good case of the Jony Ives going on. That's two UI/UX changes that have met with reactions ranging from lukewarm to absolute dislike.
The simple truth is that significant UI features should be on by default but possible to disable at launch.
Give me a simple text edit option (heck, even Jira lets you choose between the two because they understand users have different preferences) and I'll be fine.
This video[1] from the slack.com front page has three chat windows. One of them shows two engineers discussing a git pull request, so I guess a third of their target persona is a developer?
If usability testing was an actual, consistent driver of UX decisions, we'd still have skeuomorphic interfaces. I'm sure somebody crafted a UX test or metric that showed the WYSIWYG editor is superior, but this was probably done to back up an existing product decision, not as good-faith research.