The Facebook App on iOS: How to ruin your user experience

Intro: A (still useful) way to describe a user interface
Nearly twenty-five years ago, a widely acclaimed book was published: “The Art of Human-Computer Interface Design”, edited by Brenda Laurel. It contained a wide range of essays and articles on all the different aspects of designing interfaces.

Among them was a little-recognized article: “Interfaces and the evolution of pidgins: Creative design for the analytically inclined”, written by Thomas D. Erickson.

This article observed a striking similarity between the syntactic structure of computer interfaces and pidgin languages: auxiliary trade languages that traders and merchants in the Caribbean have used since the Age of Discovery.

In essence, the stated similarities centered on a rather simple notion of “object” and “desire”, both in marketplaces (“sugar cane” + “buy” is a well-formed pidgin sentence expressing an intent) and in computer interfaces, defining the minimal level of syntactic complexity needed. An object, in this regard, can be anything selectable in a computer interface (a file, an icon), and a desire is any particular action that can be applied to the object (open, copy, move, delete).

For the most part, this has since become common knowledge under the coined term “contextual menu”. The beauty lies in the variability of the desires an object can be associated with; the term “object-orientation” is nothing but an approach to formalize and standardize the syntactic richness of this construct.

An object has a varying set of desires connected with it. Depending on the context in which objects appear, different groups of desires are connected with them. “File” -> “Open” and “Stroke” -> “Width” -> “Edit” denote the same syntactic structure, despite referencing different objects and different desires. The cognitive challenge for users is to grasp that every object always acts as an “object-in-context”: changes in the context create new objects that different desires can be hooked to.
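The object-desire grammar above can be sketched as a small lookup: each (object, context) pair exposes its own set of desires, and a “sentence” is well-formed only if the desire belongs to the object-in-context. All objects, contexts, and desires here are illustrative examples, not taken from Erickson's article.

```python
# Map each object-in-context to the desires that can be hooked to it.
# Illustrative names only.
CONTEXT_DESIRES = {
    ("File", "file manager"): ["Open", "Copy", "Move", "Delete"],
    ("Stroke", "vector editor"): ["Width", "Colour"],
}

def desires_for(obj, context):
    """Return the desires available for an object in a given context."""
    return CONTEXT_DESIRES.get((obj, context), [])

def well_formed(obj, context, desire):
    """A well-formed 'sentence' pairs an object with one of its desires."""
    return desire in desires_for(obj, context)

print(well_formed("File", "file manager", "Open"))   # True
print(well_formed("File", "vector editor", "Open"))  # False: new context, new object
```

Changing the context changes the object, which is exactly why the same desire (“Open”) is valid in one lookup and not the other.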

What Facebook did to piss off their users
A friend of mine found out yesterday that Facebook had removed the option to sort the timeline by “Most recent” from the News Feed column header on the iPhone client. A commenter added that it hadn’t been removed entirely, but that it had been moved (or “exiled”, as he put it) to the side bar.

What happened on the interface was nothing less than a fundamental change in the “object-desire” structure.

Earlier it worked like this:
1. Pull down the news feed to reveal the news feed column header
2. Tap the column header for the news feed
3. Tap on “Most recent”

In short: “News Feed (Object)” -> “Column Header (Object)” -> “Select sort order (Desire)”.

According to this source, this is how it now works:
1. Tap the “More” tab at the bottom right of the screen
2. Scroll down to the “Feeds” section
3. Tap the “>” icon to the right of the “Feeds” label to expand the section
4. Tap on “Most recent”

In short: “News Feed (Object)” -> “‘More’ icon (Object)” -> “Feeds section header (Object)” -> “Expand icon (Object)” -> “Select sort order (Desire)”.

“OK”, you say. “One step more. Who cares?”

If you look at it closely, you’ll see that the change is far more profound: the object references have been changed along with the lengthening of the path.

Where sequence 1 changed the object context from “News Feed” to “News Feed column header”, sequence 2 changes it from “News Feed” to “Menu icon” (labeled “More”) to “Menu section” (labeled “Feeds”) to “Expand icon” (labeled “>”).
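The two paths above can be compared directly by counting object-context switches before the desire is reached. The step labels are taken from the sequences in the text; the representation itself is just a sketch.

```python
# Each step is a (kind, label) pair; only the final step is the desire.
OLD_PATH = [
    ("object", "News Feed"),
    ("object", "Column header"),
    ("desire", "Select sort order"),
]

NEW_PATH = [
    ("object", "News Feed"),
    ("object", "'More' icon"),
    ("object", "Feeds section header"),
    ("object", "Expand icon"),
    ("desire", "Select sort order"),
]

def context_switches(path):
    """Count how often the object context changes before the desire."""
    objects = [label for kind, label in path if kind == "object"]
    return len(objects) - 1  # switches between consecutive object contexts

print(context_switches(OLD_PATH))  # 1
print(context_switches(NEW_PATH))  # 3
```

One extra tap on the surface, but three context switches instead of one underneath: that is the profound part of the change.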

In both cases, the user’s goal (getting the desired sort order for the News Feed) is decomposed into particular manipulation actions on the interface elements themselves. Earlier, the user needed to understand that tapping opens a list of selection possibilities. Now, the user needs to understand that in-feed manipulation is no longer possible at all.

The issue behind the issue
Particularly on small screens, this has been an issue for a while now: former in-screen element manipulations have had to move into menus or overlays, while content-element manipulations remained on the main screen. In fact, today we see a divide that simply didn’t exist when Erickson wrote his article: the distinction between content actions and configuration actions. While content actions bind user gestures (tap, swipe, pinch) to particular manipulations of particular on-screen elements, configuration actions influence the entire experience more generally and persistently. And when it comes to keeping the sort order persistent (“Most recent”), this is where Facebook has become infamous for failing.
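The divide can be made concrete in a few lines: content actions are ephemeral gestures bound to single elements, while configuration actions mutate state that should outlive the gesture. This is a hypothetical model, not Facebook's code; the complaint in the text is precisely that the persistent part is not honoured.

```python
class Feed:
    """A hypothetical feed distinguishing content from configuration actions."""

    def __init__(self):
        # Configuration state: meant to persist until the user changes it.
        self.settings = {"sort_order": "Top stories"}

    def tap(self, element):
        """Content action: an ephemeral gesture on one on-screen element."""
        return f"opened {element}"

    def configure(self, key, value):
        """Configuration action: changes the whole experience, persistently."""
        self.settings[key] = value

feed = Feed()
feed.tap("a friend's post")                      # leaves settings untouched
feed.configure("sort_order", "Most recent")      # should survive a restart
print(feed.settings["sort_order"])               # Most recent
```

A well-behaved app keeps `sort_order` across sessions; silently reverting it collapses the distinction the two method names are meant to express.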

Sort orders for lists, formerly conveniently toggled by clicking column headers, are moved off the screen and underneath a menu. Other sites (like this one) use the menu for organizing their contents, and for nothing else. On this site, there simply are no persistent views to configure, no login accounts to save, and no colour schemes to apply.

And yet, we are already witnessing a proliferation of menu icons that reflects this growing need to group things into different experience buckets: the “hamburger icon” (☰) for content menus, and the “gear/cog icon” (⚙) for settings menus.

The in-screen expanding and collapsing of functional selections that once gave native apps a clear advantage over HTML5 “web apps” is now losing its competitive edge. A growing number of functional options requires de-cluttering the interface, but the increasing complexity has to go somewhere else.

“Oh, I’ve got it!”, the UX designer says. “Let’s move it out of the way”. “Oh yes”, the business stakeholder says. “Let’s demote it to a hidden menu with a cryptic label, so people will forget it existed. And we can display our content stream the way we can generate the most revenue out of it.”

As we see, “moving a problem out of sight” is not the same as “solving a problem”. Or why else do you think Facebook uses the “hamburger icon” together with the label “More” to denote its one and only menu?

For users of regularly updated apps this obfuscated handling of the content/functionality distinction creates an unnecessarily steep learning curve.

A way out?
Maybe it is about time to rethink the approach to expanding functionality on small screens once more. Just as the right-click (or Ctrl/Cmd-click) opened up a set of relevant, varying contexts on desktop computers, we seem to be in desperate need of something similar on mobile devices, reflecting the increasing complexity of object manipulations.

The repertoire for this on small-screen devices is quite limited, though: pretty much all the gestures and interface actions are already taken. A tap on something opens a detail view or performs a one-tap action, a double-tap is poorly suited for practical reasons, and pressing the “square” button closes the view. We can’t add anything much more complicated than that.

Off the top of my head, my personal favourite for this long-overdue usability enhancement would be a tap-and-hold action to reveal information about an element’s context. Unfortunately, the mobile device’s OS interferes with that, as the available context options are currently hard-wired into the devices themselves.

I would be very surprised if there weren’t a gazillion blog posts about this topic out there already. If you come across something good on it, I’d appreciate it if you threw it in a comment below.