The Stupification of Graphical User Interfaces

A long time ago, simply having a GUI was amazing to those who used one, or a toy to the few million Microsoft DOS users who made up the majority of computer users. But those who preferred a Command Line Interface (CLI) always crowed that they could do things far faster by typing than by using a mouse, and for a subset of them (those who could type 40+ wpm, spent ~30+ hours per week using a CLI, and could remember commands the way most of us remember the lyrics to our favorite songs) that was true. Scripting adds another level on top of that which GUIs cannot touch: scripts are like mini-programs. Sure, they aren’t compiled and are much slower than machine code, but they do the same job, and anyone with the desire can learn the basics of BASH scripting in a day. But, I digress…
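
To illustrate how low that bar really is, here is a minimal day-one sketch (the .txt extension and the “backups” folder are placeholders for illustration): a loop that copies every text file in the current directory into a dated backup, a chore that would take dozens of clicks in a GUI.

    #!/usr/bin/env bash
    # Copy every .txt file in the current directory into a backups folder,
    # stamping each copy with today's date (e.g., notes-2013-06-01.txt).
    mkdir -p backups
    for f in *.txt; do
        [ -e "$f" ] || continue   # nothing matched the glob; skip
        cp "$f" "backups/${f%.txt}-$(date +%F).txt"
    done
    echo "Backed up $(ls backups | wc -l) file(s)."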

As GUIs made the computer friendlier and more people adopted them into their daily lives, new UI and hardware features were added that had no analog in the symbol system of the physical world. Icon designers struggled to keep up, adopting symbols from other systems or inventing their own piecemeal iconographies, since there was no standard icon system to unify the symbol language. As GUI features grew, designers would often create an inspired symbol without really making sure people understood it. The World Wide Web’s symbol was a globe with interconnected nodes. And what the hell was this stack of arcs? The Wi-Fi symbol, which comes from radio: antenna radiation, symbolic of the invisible radio waves. While those worked as people got familiar with them, others were a crapshoot. The symbol for a connection has never solidified into one symbol. One program would show two hands shaking, another a check mark, a third a plug in a socket, and still another a green light.

The list of features kept growing within programs as each tried to capture the market by being the be-all, end-all, one-stop shop for whatever it could be used for. At first these feature lists would fit on one standard legal page; now some programs would require stacks. Of course, newer users, who didn’t grow up among the early explorers of computing, suddenly had to learn all these symbols that had no relation to their paradigm, and many would never learn them in the first place. Instead, they’d rely on the text labels or, worse, never know the feature existed.

Obsolete Awareness

Without the contextual awareness of those of us who grew up using floppy disks, the save icon is a weird square within a not-quite-square symbol. Other physical items are going by the wayside, too, and will be as unfamiliar as 5.25-inch floppies to newer users within 10 years. Postal mail is still around, and widely used by those without computers, but the postage-stamp and envelope icons for email will lose their contextual meaning. When was the last time you saw a classic AT&T handset outside of a bowling alley, a museum or that rare pay phone booth that still has a handset attached to it?

The point is not to lament these changes but to note this loss of context and symbolic meaning. That loss, combined with the growing laundry list of features, has designers throwing up their hands at how busy computer screens have become and looking for ways to simplify them. The result has been many different attempts, culminating in the design backlash we have now.

Things are so complex on an expert-level user’s screen, such as mine, that the complexity would overwhelm most people. For instance, right now I am running a light load of apps, and I can see three windows behind the one I type in, 28 menu-bar items not related to the app I am using, one mini-player, and the edges of two icons on my screen. If I mouse over to the left edge, my dock comes up with eight permanent icon residents of my most-used apps (four currently running), another nine icons of apps currently running, and, under the folder division, six directories with permanent resident status (Apps, Applications, Utilities, Documents, Home, Downloads) and a minimized Mail window. I am carrying on an iMessage conversation in Apple’s stock Messages app with a friend about a mile away, who is probably lying in bed (it is 12:24 AM currently) on her iPhone, chatting about her day. iTunes has a Mesh song playing, and I just got a new message….

On top of all of this, my iPad is patiently waiting for me to get back to this article about Google turning off a user’s account with no warning — locking him out of his entire digital life — within NetBot (an ADN client).

So, I can see the designers’ point. Although all of this is self-induced clutter, I, having used GUIs for over 28 years, am perfectly capable of filtering the noisy screens and focusing on typing this one article. But I am the exception rather than the rule.

Designing for Mere Mortals

I notice that a majority of “mere mortal” users only show one window at a time. They never auto-hide their dock, and they leave it on the bottom. A dock on the bottom wastes precious vertical real estate, so the first thing I do is move it to the left and turn on auto-hide. I also have an app called “Moom” that will arrange my windows with a few clicks. But the designers want to save everyone from clutter, even me. So, they have done the unthinkable. They have violated the first rule of design: “Form follows function.”
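
As an aside, those two dock tweaks don’t even require the GUI. Here is a minimal sketch, assuming the standard OS X Dock preference keys; restarting the Dock is what makes the changes take effect:

    # Move the Dock to the left edge and turn on auto-hide,
    # then restart the Dock process so the new settings apply.
    defaults write com.apple.dock orientation -string left
    defaults write com.apple.dock autohide -bool true
    killall Dock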

It is okay to break the rules as long as you know you are doing it and have a good reason to. However, I don’t consider breaking them to make things look nicer a good reason. Now, for many apps, functionality follows form: The look of an app is more important than getting something done with it. Why? In the name of simplifying things, designers have been removing shortcuts to features less often used by mortal users but essential for us digerati to move at the speeds we do. What they failed to consider is the 5 to 10 percent of people who do use those features regularly. For those users, they are actually making things more complicated and impairing some from advancing their GUI savviness.

As those of you who have read my previous articles know, I am completely against what I call “the stupification of UIs” because it hurts efficiency. I’m in favor of customizable layouts with optional elements and features that advanced users can find and activate. I detailed some of this concept in my blog post about meeting Jef Raskin long ago. I believe so strongly that this is the answer to our current UI paradigm that, if I could, I would hire Martha Stewart to lay down some of her endorsement (Mc)Lovin’, saying “Options are a good thing” on camera, and show it at every dev conference I could find until it became a mantra. Developers would then consider adding both approaches whenever anyone asked them to choose between non-exclusive things to use in their programs. I also believe that the UI and programs should allow for as much or as little clutter as we want. I like being able to hide buttons I never use, such as “buy” and “share” buttons. Oddly enough, the first time I saw this level of customization in an app was in Microsoft Word v5, before the code-base unification. It allowed one to completely customize the menus: which menu items appeared, and where. That was a step in the right direction.

But Now We Have This

[Image: Windows 8 Start Screen]

A UI start screen clearly meant for mobile, used out of context: a desktop that spends all its screen real estate on huge buttons in colors devoid of meaning, surrounding icons of clashing detail, on a screen lacking both color and layout balance. In short: an inefficient eyesore. And this is just one example, but it is among the most current instances of where the entire UI/UX industry is retreating. I have seen this on OS X, on Chromebooks, on Linux window managers and on various mobile devices for years.

I do not know who these designers are. I am sure they meant well. But removing functionality is like retreating and burying your head in the sand, and this after they led the way in interfaces for many years. This current crop of UI designers’ inability to integrate increased functionality clearly and with finesse tells me they come not from an actual interface background but from a raw static-design background. I can tell because that is what I was doing to put myself through school about 20 years ago. If they are, in fact, degree-carrying members of the interface-design profession, then I would judge them as not getting the whole point of a GUI, despite their credentials. They avoid improving on tried-and-true methods of interaction and the symbol system known to work. They fail to recognize the value of adapting the UX to how people work, and they fail to make things meaningful, and thus easier to remember and use. For example, how about using color coding? They are navel-gazing and seeing who can come up with the sexiest design, not the most usable one. I posit that you can tell the weaker UIs: they are the ones whose commercials never show someone interacting with them. The Windows tablet commercials come to mind; in those, the device is more a fashion accessory than a computer.

And a fashion accessory is what they are pushing, because Microsoft’s marketing people know how to push pretty things, not functional things. If you look at industrial design in computing before the Second Coming of Jobs, you can see that almost no manufacturer considered marrying form and function. It’s like trucks and hammers: The contractors and carpenters of the world only care that they can do their jobs more easily with less downtime. But marketing only knows how to push and sell attitudes, because marketers have spent the last 50 years training people to respond to evocative advertising, not rational advertising.

These UI designers are looking for the approach that makes people think, “That’s new and interesting!” And as we all know, a lifetime of advertising has conditioned people to think that newer things are instantly better. I heard the funniest thing at a club last night: “Oh, you only have the 4S…,” a woman with an iPhone 5 said with pity to a friend of mine. Was she oblivious to the fact that the “poor woman” was running the latest version of iOS as well? Probably. While my iPhone 4 is long in the tooth, and I hope the flash memory holds out another few months, I am not champing at the bit for the newer model because it is new, but because of a feature I am pretty sure might make it into iOS 7. Well, that and the fact that it will allow me to get more storage — 16GB is way too small these days.

I could handle the other platforms scrambling for the new hotness Apple brought with OS X’s “screen you want to lick.” But that new hotness was nothing without a solid UI/UX behind it. iOS would not have been the hit it is, ushering in the modern glass-faced smartphone, without the engineers nailing the basic functionality in the first version. But now it looks like most of the large companies have lost focus on what really matters to users. Apple itself has done next to nothing to revise the UI of the first iOS, and has fallen behind even dead mobile OSes.

The executives making these blunders either don’t know or forget that a person doesn’t develop allegiance to a platform because of a nice look. They develop allegiance because of how easy it is to use. That ease does not mean making a person click through three screens to get to the feature they want or, worse, making a person hunt for the feature for minutes among a mess of screens because the designer wanted a cleaner look. It comes from allowing a person to do something quickly and providing feedback that guides them but gets out of the way of those familiar with the device’s operation. It is that simple, but it looks as if this simplicity is lost on them. They have confused simplicity with simple-looking screens.

At the end of the day, a computer is only a tool. But a tool that requires you to slow down, switch mental modes or impedes you in any way from accomplishing something quickly and easily will make a person less likely to choose the same tool in the future. If allegiance to a platform based upon the pleasantness and ease of interaction is a foreign concept to some, then let me put it this way. The people you love and grew to love in your life might not be the prettiest people out there by modern standards (in fact, odds are they aren’t); they might not be the most stylish or perky, but your love of them is rooted in things that matter more than the superficial: reliability, support, respect and forgiveness. Without these things, a relationship — whether between people, corporate entities or human and machine — is not likely to last, especially when a prettier machine comes along. So please, keep that in mind.

Comments

  1. BY Johnny Hopkins says:

    I understand the point of the article. The Windows 8 interface works well with Windows Phone 8 smartphones and tablets, not as well with conventional laptops. The key may be more trained individuals in user experience who understand design decisions from all angles. Microsoft is bringing back the Start button because they have gotten so many complaints since October 2012, when Windows 8 came out. I am hoping the 8.1 upgrade will resolve some of these issues, but I am a little doubtful that it will address the UX problems.

    • BY M Noivad says:

      Exactly. What works in one paradigm doesn’t necessarily work in another. There is a push for unification, which, if approached the right way, is good. But I tend to think it is best when usability is the uniting factor: feature parity, such as being able to do everything (account management, for instance) on a mobile device instead of having to open the desktop version of the site. Twitter is a good example of a failure to integrate feature parity into any of its mobile products. Consistency is a good target. However, it is a mixed bag, and it is tougher to adapt a UI so it works across both touch and mouse-driven interfaces, or across varying resolutions.
      One thing I don’t like about touch interfaces versus mouse-driven interfaces is when the mouse interface uses touch-sized targets: A finger needs a larger target to hit reliably, but in a mouse-driven UI, that space is wasted. Microsoft is not the only guilty party in this respect, but they are a good example of a unified UX that applies the wrong parts of what a company should go for when creating a consistent user experience. Thanks for your comment.

  2. BY JELaBarre says:

    I think a lot of re-design is done not because someone has a grand scheme for re-working something (whether their idea has merit or not), but more often than not because someone is trying to justify their job. After all, if you aren’t doing a redesign, why are we employing you?

    • BY M Noivad says:

      The TL;DR answer is: yes, I agree that in some cases, it is just designers trying to keep their job.

      The longer answer:
      It certainly looks like that is what drives some redesigns — Facebook’s quarterly/semi-annual updates, for example.

      The other side of that scale is when a UI works fine but users get bored of it. Case in point: the constant harping of people dismissing iOS’s UI as old and moldy because Android’s design was changing with each release while iOS has gone six years without a facelift or significant improvements. My main concern is that things that work stay functional or are improved. A fresh coat of paint (a graphics update) is nice, but an addition to the efficiency of the UI is more welcome in my book. Honestly, I think people who decry the staleness of some GUIs miss the point: if you do it right, very little needs to be changed. In that case, simple iterative design changes and an occasional update of the graphics are all that might be needed. I think designers are very influenced by other designers in the industry, and besides protecting their positions, they like to refine their work and play with graphics. I know that when I was doing graphic design as part of my job, it was more a case of wanting to refine it. But I had many other roles in that job, so I didn’t have to redesign anything in order to keep my job.

      Thanks for your comment.

      • BY CD says:

        The solution should be more skins – you change the color and flair to look at without changing the layout or functionality. Funny how designers underestimate the usefulness of skins. They could still keep their jobs – even making things look prettier and more professional – without messing with the functionality.

  3. BY Sid M says:

    The Windows 8 GUI cluster**** is the reason I use Start8. I got that program and a Win8 Pro license for under $20, and combined they make a great OS, faster than Win 7 but with the same GUI.

    I also sold my small MSFT stock holding after this utter stupidity. Now they are bringing back the Start button but not the Start menu: more moronic ideas from a company that once defined the computing landscape.

  4. BY CD says:

    This whole snazz-over-functionality thing is something I’ve complained about at various times on my blog (example: http://wp.me/p2dM4Z-5N). No one in the creative-arts industry seems to be listening.

    You’re right about allegiance AND about people wanting the NEW thing. However, I do note that there are some people out there who prefer the new layout – mostly people who 1) work with mobile or 2) are trying to teach computers to old people. … or to put it one way: a minority assisting seniority.

    If we exclude the above, there are two markets: the business world and the regular consumer world. The business world is like you – they want something that does the job. The consumer world is different – they want something that is pretty and doesn’t break. The problem we have is that businesses are merging those two worlds of commerce. Why? – It’s cheaper! XD Surprise! Why would a company want to build two OSes? Apps are making a TON of money these days – so if you gear everything towards being an app (ignoring how almost utterly and completely useless most apps are), which is what companies are doing, you ignore your other market.

    To offset the balance, there needs to be a push from the business world towards something they find useful. This could be a niche market for some small business to get into, but building an OS alone – or even modifying one – is a huge task. Is it worth the risk? How will they market it? How can they be sure that their scheme, their UI, will be the best out there? Testing and crowd surveys can be costly. Is it worth the investment without a guaranteed return? You tell me.

  5. BY M Noivad says:

    You raise some good points; however, there are a few assumptions and things I should make clear.
    First, I think the Win8 desktop is just an example of the problem. People want to focus on it, but I am really talking about all of the latest facelifts that make things harder to use.

    Second, the UI in Win8 specifically was designed for mobile and then moved to the desktop, which is why mobile users like it and people such as the previous commenter are resisting upgrading without their useful-within-the-desktop-paradigm Start button. (Until it is added back in with W8SP1.)

    Third: While businesses drive long-term, large-volume sales, they are very slow to adopt platforms. Also, once a platform is chosen, large-business momentum makes it very difficult to switch. Every real disruption and revolution in computing _devices_ since the personal computer came out has been started by consumers and small independents looking to do things better. Businesses have never driven computing change unless they make the actual hardware or software.

    So, you’re right, consumers want the new shiny, and that’s why there is so much emphasis on it. They are more open than a large business to adopting a new platform. But this is a clue as to why businesses will not lead the charge. Why? Businesses do not want to spend millions of dollars replacing what works, because the executives do not have a firm grasp on what makes this tech worth the investment. The executives themselves are ill-equipped to define what their workers need, because they make the decisions but do not carry them out. One example of their grasp of the advantages and disadvantages of certain interfaces: most think using a web browser is just as good as an email client such as Thunderbird. (It is not.) So they cannot lead on usability, because they do not live in the world that actually does the work. They tend to be dilettante users, because their jobs do not depend on knowing how to use those tools well. To them, technology is a tool at best and a cost center at worst. (This applies more to the non-CTOs/CFOs, since those two tend to have serious tech chops and a lot of spreadsheet and formula/macro experience, respectively. And before someone chimes in that they are a very hands-on executive: this is a generalization. But great! Now push for everyone at your company to be savvy and learn their tools by showing them how enjoyable it can be.)

    Fourth: More important than merging expenses for developing computing products is the idea that learning one platform should let people apply their consumer experience to business products. People do not want to learn two fundamentally different ways to think when they are trying to accomplish the same task. But this does not require things to look the same, only to act the same. In fact, I think a company can make a lot more money by adapting things to fit the environment they will be used in. It is contextual adaptation, a sort of “Computing Darwinism.” There is room for many species in this world, but those that survive and flourish are the ones that adapt most appropriately to their niches. Which leads me to…

    Last thing: Microsoft does not need to develop two OSes, just two interfaces to the same OS. iOS and OS X are fraternal twins: They look different, but they have most of the same DNA. However, their expression of that DNA is different, with each adapted to its environment. iOS is mobile: large icons for touching and an invisible file system, with an interface that is app-centric, not document-centric as OS X’s desktop motif is. The reason for the mobile emphasis on simplicity is that speed of access is more important while mobile. Desktop OSes are made to manage larger document sets, so more organization is needed; also, the document-centric model makes data interchange among apps easier. This speed of access and ease is what makes pointing at what you want easier for people who have never been taught what the “Desktop” of a desktop OS means in terms of symbology.

    It takes me about 15 minutes to explain the desktop paradigm (the HD being a filing cabinet and RAM being the size of your desktop) to people who have used computers for decades but never really understood what they were doing, nor what certain symbols mean or represent, such as why the power button looks like it does. (It’s a combination of 1 [on] and 0 [off], in case some readers don’t know.)

    Also, I am not sure what apps you are speaking of as useless. (Games?) Most iOS apps I wrote about here over a year ago and those recommended on my blog have saved me time and money countless times.

    Thanks for your comments, and thanks for reading.

    • BY CD says:

      I see.

      As for two OSes: I think I was recalling Win 8 and Win 7 and how 8 does a lot of things directed more towards casual users – such as the behind-the-scenes handling of applications that led to the dual versions of Mozilla Firefox. Hard to explain – I don’t know all the details. But I’m seeing this behind-the-scenes stuff as scooting away from the typical business action and more towards speed for the user. Maybe these changes were a good thing – I’ve never tested them to find out. I just see it as a precursor to operating systems that are becoming less oriented towards business.

      I do see your point though: The fronts can be different while the backside remains the same. The question at that point might be: How different should each side of the coin be (by that I mean, how different is “advanced” from “simple”)? I don’t know about you, but when I see a different front, I sometimes think “It should do something different, right?” And in some cases, it might.

      With respect to these different interfaces, let’s take, for example, the Vista Control Panel. There’s a Classic View and a Normal View. Both can get you to the same popular spots, but both are very different in appearance and sometimes lead you through menus you won’t find in the other. The Normal View is supposed to be more straightforward (and it is, depending on what you are looking for), so it has a smaller menu and more generic categories to begin with.
      Is this the sort of thing you had in mind?

      Concerning buttons: Out of curiosity, would you prefer all-text buttons? (Not to say the button can’t look like a button, just that it doesn’t have a symbol on it.)

      As for the apps – I was thinking of clock apps or the n-th app for uploading images to Facebook. I seem to stumble upon a lot of those. Maybe that’s just my luck.

      Thanks for your response.

  6. BY Lucy Mao says:

    I don’t think this is a black-or-white topic to debate. Neither is the subjectivity of aestheticism. Usability is no longer just about functionality but also about the knowledge and capability of the user… in other words, user preference. This can be clearly seen in the mentioned PC vs. iOS quarrel. I see so many avid mobile users adhere to either platform depending on preliminary habits. I also see Apple fanboys switch devices not because of UI but because of arbitrary features (storage, bigger screens, and customization). I am actually one of those swingers – a UI/UX designer myself, I felt iOS to be too restricting at times. But now I understand why user restrictions can be a good thing. Users don’t know what they want. Apple’s intuitive design without a doubt makes things five times easier even for Ice Age users. But Android keeps things exciting despite design inconsistencies. And Windows? They are in an experimentation stage while trying to rebuild a product that once single-handedly transformed computers. I believe as we dive into Web 3.0 there should be a higher tolerance for interface fluidity and the user experience as a whole. Just because it’s not working for you doesn’t mean it’s all broken.

  7. BY Linlu says:

    I have supported Windows for at least 20 years now, probably more than that. I have used every version of Windows starting with version 1 or 2 (iirc). As for Windows 8, I hated the lack of a Start menu so much that I bought Stardock’s Start8 within the first half hour of usage. What is astonishing is that despite my years of daily usage and support of Windows, I still have ‘fun’ finding things. I’ve been on my shiny new Mac for about a week, and I almost never have to google where to find something (something I still do in frustration for Windows). OS X is so much more intuitive and useful than Windows has ever been. I used an original Mac 128K years ago in school and loved it. Then reality hit and I had to support PCs.

    The design in OS X (sorry, I don’t know what they dumbed down) just makes sense. I can right-click on almost anything and get to what I need within one click. OMG, Microsoft Windows is click-click-click-back-click-click-click. BTW, I did use the search feature, but that only finds you the overall program, not where in that specific control panel the option that is driving you crazy lives. For the Mac, the only real annoyance was the ‘natural’ scroll, since I am more of a scroll-south-to-go-down and scroll-north-to-go-up person. That was solved quickly with a bare outline of what to do from a friend who replied to my Facebook comment about how much I loved the Mac. You would not believe how hard it can be to find the same option in Windows, or to talk someone through finding it over the phone. Microsoft has a nasty habit of changing the words used to label things, and for most users, you have to tell them EXACTLY what sentence to look for. Microsoft is also well known for moving things into entirely different categories, so if you are using a different version, unless you have a photographic memory (or another machine to look at with the user’s version), you can waste a lot of time trying to get the user to the right place. Honestly, I dumped whatever photographic memories I had of where to find things with each new version, since Windows started to get ‘seemingly’ easier to use (really, it was all relative).

    Anyway, for comparison, I also recently received a 12GB i5 Windows 7 notebook to set up for another person. My 4GB Mac Mini can keep up with it easily. Talk about bloat, and I am actually a fan of the Windows 7 interface (at least I was until I started using my Mac).

    BTW: I used to support OS/2 back in the OS-wars days. I miss the SOM desktop, especially the project-folder feature. I wish the Mac could implement that: Put all your related work (documents, even applications) into one folder, set it as a project folder, log out and/or shut down, and when you come back, you simply open the project folder and everything in it opens. I think some apps would even return you to the place you left off. It’s been about 20 years, so my memory is quite fuzzy on that particular capability.

    Oh well, have to run; I just found out my son ate the push pop my daughter bought yesterday, and she is not at all happy….

