New Projects

This seems like such a cliché way to begin a blog post, but I'll do it anyway: I haven't written much here in a while. Part of the reason is that I've been focusing on some forthcoming academic writing and on getting Praxicum into a launchable state. I've also had some speaking engagements that have taken up a lot of time. But this is not me complaining; this is me celebrating.

While I was working on my deck for Interaction 14, a few people told me, "you should write a book on that." I've always wanted to write a book, so I decided to ignore any feelings that these people were "just being nice" or "not really meaning it" and explore some possibilities.

My main conclusion: publishing is complicated. Given that I'm still on the novice end of the writing experience spectrum, I want to try to establish as much interest as I can before actually pitching the topic. That's where you come in.

I set up a landing page to assess interest. If you might ever want to read a book on philosophy and design, just enter your email address to show your support. Hopefully I can use these numbers to show potential publishers how many people are interested in this admittedly esoteric topic. 

I started Praxicum partly because I think too many design publications, especially in the UX world, are too practice-focused. While this is important, I wanted a place where we can discuss, interrogate, and critique design and technology without necessarily feeling compelled to also include "practical takeaways." 

I envision this book doing something similar. I don't have a formal outline written up yet, so I'm not sure about the exact content. But if you've read the stuff I've written about in the past couple years or have seen me talk, you probably have a good idea. If you're not familiar, check out the deck to get a preview. 

These types of conversations are important to me. If they are to you as well, please consider supporting it:

Book Review: Age of Context

Earlier this week I picked up a copy (or rather downloaded a copy) of Robert Scoble and Shel Israel’s new book The Age of Context. Given my interest in context-aware computing from a design and philosophical perspective, I wanted to see what a tech pundit and business consultant had to say about this trend. I’ve been following Scoble’s talks and tweets on the subject for a while and always felt that while they are excellent for creating wider awareness, they lack analytical depth. Unfortunately, the same is true of this book. Instead of interrogating new technological capabilities or performing a deep analysis of potential effects on consumer culture, Scoble and Israel chose to rely on listing product after product while repeating the same concepts over and over.

But that’s not to say the book isn’t worth reading. One of the most interesting things about it is that, even if perhaps not intentionally, it frames the discussion of context-aware computing not around the concept of intuition, but rather around the uncanny (as I’ve written about before). That is, instead of relying on colloquial notions of what is “intuitive,” they avoid this trap and instead refer to the function of technology that “knows you” as uncanny, which is, I think, much more accurate. There is only one instance of the uncanny reference, but even the single mention frames context-aware computing as that which is so familiar it’s too familiar. Scoble and Israel later give this the name “freaky factor,” which is a bit unfortunate, but I think what they’re really getting at is the sense that context-awareness creates a relationship with technology in which the system “knows too much.” The “freaky factor” line is really the difference between familiarity and hyperfamiliarity.

I wish they would have taken this a step further and attempted a definition of context. They take for granted that readers—and perhaps themselves—have some unarticulated definition of context that needs no further explanation. This is far from the case. There are a number of ways to define context, and each definition frames further discussion in a certain way. At the very least, they could have used someone else’s definition. 

Another great point they made is on the concept of big data:

“So, there’s lots of focus on the “big” aspect of data. It sometimes gives us the image of truckloads of data being heaped upon existing truckloads somewhere up in the cloud, creating a virtual mountain so immense it makes Everest look like a molehill. In our opinion, the focus is on the wrong element: It’s not the big data mountain that matters so much to people, it’s those tiny little spoonfuls we extract whenever we search, chat, view, listen, buy— or do anything else online. The hugeness can intimidate, but the little pieces make us smarter and enable us to keep up with, and make sense of, an accelerating world. We call this the miracle of little data.”  (Scoble and Israel, Kindle Locations 337-343)

While “miracle” might be a bit hyperbolic, I appreciate that they took care to point out that the “big-ness” of big data does not necessarily equate to deeper insight. This is a point that has been missed in much of the early discourse around big data, but lately many qualitative researchers have been advocating for a more humanistic approach to data. This “little data” is key to successful interactions between users and context-aware systems.

A major downfall of the text is the constant name dropping of various context-aware products and sponsors of the book, and the unrealistic future use cases for these products:

“Perhaps you paid for your adult beverage in advance with a web-stored credit card activated by a nod, blink or gesture your digital eyewear understood.” (ibid, Kindle Locations 606-607). 

Or referencing the TV show Cheers:

“When his regulars walked through the door, Sam poured them their usual drink without asking what they wanted. As he handed them the drink, he asked questions that showed he understood what was going on in their personal and work lives. This is a lot better than telling you what people who bought the book you just selected on Amazon also liked. It takes a lot of baby steps to get from there to the Cheers retail experience.” (ibid, Kindle Locations 1050-1053)

These types of example use cases only serve to dampen the importance of studying context. While the book is supposed to have something to do with privacy concerns, the first use case above deliberately undermines any sense of security users can have with a system. And in the second example, Scoble and Israel seem to imply that context-aware computing will soon result in the “Cheers retail experience,” forgetting that greeting someone by name only has value if you actually know the person. Knowing someone’s data and knowing someone as an actual human are two completely different things, but this difference is ignored.

Finally, the greatest downfall of this book is that the authors completely fail to engage with the vast amount of research on context-aware computing. The topic has been vigorously studied by both academics and practitioners in fields ranging from computer science to anthropology to cognitive science. But this history is completely ignored in The Age of Context. The theoretical foundations of their claims, case studies of what has worked and what has not, early prototypical examples of today’s technology, philosophical examinations of meaning and context...all ignored.

I have been compiling a reading list on context if you’re interested in going deeper on the topic. It contains texts from many fields focused on what context is, why it’s important, and how context-aware computing is a paradigm with which designers should concern themselves.

Scoble and Israel had a big opportunity here. I am happy that big names in the tech industry are talking about context. However, I am disappointed that they chose surface-level description over critical analysis. To be fair, I’m sure the authors were aware of these criticisms, came across this research while writing, and decided that anything beyond surface description would not be enjoyable for a “general audience.” But if we don’t pay attention to theoretical work, the practice of context-aware computing will never reach its full potential.

Designing for Transparency and the Myth of the Modern Interface

This article was originally published over at UX Magazine. 

Over a decade after Mark Weiser’s publications on calm computing, we’re finally reaching a point where technological capability matches our desire for ubiquitous computing and so-called natural user interfaces. However, taking a lesson from artificial intelligence, just because we can create a system does not mean we are ready to design it.

The next frontier for calm computing is the idea of an “invisible interface.” Much of the interaction design community has been frantically trying to promote the idea that digital screens are becoming outdated and to establish preliminary “best practices.” Barring a few notable critiques, the discussions on invisible interfaces have thus far been mostly optimistic—perhaps too optimistic.

The arguments in favor of invisible interfaces are making a few key mistakes, namely:

  • Many are assuming that invisibility equates to seamless user experience
  • There is an assumption that an interface can be either visible or invisible
  • There is a conflation between interfaces in general and digital, screen-based interfaces
  • They are not taking into account the vast amount of theory available about how humans interact with technology

In what follows, I elucidate these points through a discussion of Martin Heidegger’s analysis of technology and objects in the world, arriving at a new solution: transparent interface design.

Theory, Practice, Experience

The discourse around “invisible interfaces” has been mostly a binary discussion: either visible or invisible. But interfaces are not simply visible or invisible; like all other technological objects, they exist on a spectrum of functionality ranging from conspicuous to hidden. “Visible = cumbersome” and “Invisible = seamless” is a problematic distinction to make, as it implies that any piece of technology can exist fully in one end of the spectrum or the other. Interfaces are necessary modes of interacting with the world and its objects. To render an interface invisible is to hinder meaningful interaction.

Heidegger’s work represents a fundamental shift from previous models of self-world interaction based on a hard split between mind and body, to an embodied approach that articulates how knowledge and understanding are products of active engagement with the world. Interaction design and user experience have much to learn from Heidegger’s thinking. Perhaps one of his most influential, albeit understated, contributions to our current topic is the idea of goal-orientation. The idea that people interact with objects in order to accomplish certain goals comes directly from Heidegger, who also held that this tendency to fixate on future goals results in a state of being ahead of ourselves.

We might be tempted to take this kind of idea for granted, but its importance cannot be stressed enough. One implication of this is that active interaction with objects is a necessary component of achieving goals.

“[W]e ordinarily manipulate tools that already have a meaning in a world that is organized in terms of purposes. To see this, we must first overcome the traditional interpretation that theory is prior to practice. [...] To understand a hammer, for example, does not mean to know that hammers have such and such properties and that they are used for certain purposes—or that in order to hammer one follows a certain procedure, i.e., understanding a hammer at its most primordial means knowing how to hammer.” (Dreyfus, 1991)

Hubert Dreyfus, one of the most vocal proponents of Heidegger’s philosophy of technology, is using Heidegger as a lens to point out a fundamentally different approach to examining technological products. Instead of relying on theoretical modes of analysis, which have been the norm in philosophy since Plato’s time, Heidegger called for an approach focused on embodiment, praxis, and engaged interaction. We only come to understand the world through active manipulation of objects; the image of philosophers sitting around in circles discussing the mysteries of Being is a thing of the past.

As Dreyfus puts it, knowing that a hammer is made of wood and metal is not nearly as meaningful as knowing how to use a hammer. In other words, meaningful knowledge comes from first-hand use, not theoretical exploration. Or as Paul Dourish explains it, “Embodied interaction is the creation, manipulation, and sharing of meaning through engaged interaction with artifacts.” (Dourish, 2001)

Systems of Objects

Can an invisible interface fit within the framework of embodied interaction? Taken to its logical conclusion, we can think of the myth of invisible interfaces in terms of a desire for radical immateriality, the urge to do away with interfaces all together. What is missed, however, is that active manipulation is necessary to establish a meaningful relationship with the object of engagement.

The natural user interface community has been working on articulating what it means to reduce the effects of an interface, as opposed to doing away with it completely. But it is easy to slip back into an argument that centers on the idea that there is a direct interaction with content that somehow moves through an invisible interface. If we can accept Heidegger’s claim that knowing how is more useful than knowing that, then we might say that even if users are able to directly manipulate content through an invisible interface, it might not be desirable to do so.

Could it be that Heidegger’s theory is simply outdated? Certainly. Heidegger’s work on technology was completed in the time of industrial machines, not mobile devices. There are definitely parts of his work that need to be rethought, as thinkers like Don Ihde have done. But most of Heidegger’s core thinking is still quite applicable, even if it needs modification.

Take the idea of “equipment,” for example. Heidegger posited that we experience objects either as present-at-hand or ready-to-hand (please excuse the awkward translation from the original German). Present-at-hand suggests that we experience an object from a detached, objective standpoint. This is the idea of knowing that, or understanding that comes from pure thinking without action. Ready-to-hand is the sense of embodied interaction, that knowledge and understanding come from active manipulation, or knowing how. Traditionally, these two types of experience are viewed as either/or, with Heidegger arguing that the vast majority of experience is ready-to-hand, but in hopes of updating his thinking for a more modern discussion, I’d like to propose we think about these dimensions as a single spectrum rather than two categories.

A technological product is not simply usable or unusable—visible or invisible. There are degrees to which something is usable; extents to which an object of technology is conspicuous.


Although different from invisibility, the concept of an intuitive interface is worth examining here. In this context, it seems that the desire to create an intuitive interface is the myth of invisibility taken to the extreme. As I have argued—and as have others before me—calling an interface “intuitive” is problematic for a number of reasons. First, it implies that there is no learned behavior involved, that using the interface is somehow instinctual. Second, it assumes that there is an inherent meaning that can be known by a user, and therefore a designer must simply determine that meaning and embed it into a system. If semiotics has taught us anything, it is that the meaning of a concept or object occurs only in context with other concepts and objects. So the idea that an interface can be invisible or intuitive seems idealistic. On a more concrete level, we should be designing systems that support active learning through interaction, not simply trying to do away with their more difficult parts.

Heidegger calls the act of dealing with an object’s shortcomings “coping.” Humans are constantly forced to compensate for an object’s poor design, broken parts, unintended uses, etc. When a user encounters a mobile app, for example, that does not perform as expected, he or she faces a choice: adapt to its poor design or abandon use. As Dreyfus explains, the act of coping applies characteristics to the act rather than the object:

“When the hammer I am using fails to work and I cannot immediately get another, I have to deal with it as too heavy, unbalanced, broken, etc. These characteristics belong to the hammer only as used by me in a specific situation. Being too heavy is certainly not a property of the hammer.” (Dreyfus, 1991)

The common technological mode is one of concealment: the object’s true nature remains concealed behind its use. There is a difference between a mobile app that is caught up in active engagement with a user and one that isn’t. For an object to be usable, it needs to maintain some level of concealment. This concept of concealment lends itself nicely to the argument for invisible interfaces: if concealment is necessary for a user to act through an object to attain an eventual goal, then the interface between user and goal ought to be invisible. But this view flows into a binary opposition that implies usable interfaces are invisible and unusable interfaces are visible, or as Heidegger might say, conspicuous:

“We discover [an object’s] unusability, however, not by looking at it and establishing its properties, but rather by the circumspection of the dealings in which we use it. When its unusability is thus discovered, equipment becomes conspicuous. This conspicuousness presents the ready-to-hand equipment as in a certain un-readiness-to-hand.” (Heidegger, 2008)

Notice that Heidegger is careful to explain conspicuous objects not as present-at-hand, the supposed opposite of ready-to-hand, but purposefully labels them un-ready-to-hand. This category implies that there is a middle ground between present-at-hand and ready-to-hand, a continuum between the two poles in which objects are not necessarily unusable but display some level of usability. Depending on this level, users experience the ability or inability to cope with flaws. Usability, then, has little to do with visibility or invisibility and more to do with the potential for creative coping.

Neither Visible nor Invisible: Transparent

It is clear that the dichotomy of visibility and invisibility is inadequate. When we look at the nature of the word “invisible,” it literally means that which cannot be seen. There is a strong connection to a human’s perceptive abilities but this says nothing of intention: the invisible object might be so either by design or nature. Bacteria are invisible to the naked eye by nature, while a book purposefully hidden under a blanket is invisible by design.

A related but significantly different word is “transparent,” which literally means “through sight” or “through appearance.” So the transparent object is something the observer knows to be present even though he or she cannot see it. Active manipulation toward a goal is still possible with a transparent object in a way that it is not with an invisible object. Take the example of a mobile device’s physical screen. We can see through the glass to the colored pixels that represent different types of information that allow for different tasks. But we interact through the glass; it is transparent but not necessarily invisible.

The transparent interface is one that allows both fluid interaction and active manipulation. It is neither intuitive nor invisible; it exists but is not entirely conspicuous. The user is aware of it and manipulates it to learn from it, acts through it to accomplish goals, creatively misuses it when necessary, and copes with its flaws.

The desire to create invisible interfaces or to describe current natural user interfaces (voice, gestural, etc.) as invisible is a mistake. A change in vocabulary from “invisible” to “transparent” is not simply a semantic quibble; it is necessary to frame the discourse and mindset around better interface design. Invisibility is an impossible and undesirable goal. Transparency allows for movement, flexibility, and adaptation between different modes of interaction, which is necessary for modern systems design.


Startups and Knowledge Work, Theory and Practice

In his Being-in-the-world: A Commentary on Heidegger's Being and Time, Division I, Hubert Dreyfus attempts to summarize Heidegger's position on theory and practice as follows:

“Heidegger seeks to demonstrate that what is thus revealed is exactly the opposite of what Descartes and Husserl claim. Rather than first perceiving perspectives, then synthesizing the perspectives into objects, and finally assigning these objects a function on the basis of their physical properties, we ordinarily manipulate tools that already have a meaning in a world that is organized in terms of purposes. To see this, we must first overcome the traditional interpretation that theory is prior to practice.” 

Heidegger was vehemently arguing against the ancient Greek (notably Platonic) view of theory over practice, the fetishization of the theoretical ideal, which was carried over into much of Descartes's thinking. Heidegger held that we very rarely experience the world as detached, objective, disembodied observers; and even when we do, these experiences provide very little understanding and knowledge of the world. Our experience of the world is embodied. We are in an intimate relationship with the world we inhabit. Thus, knowledge of the world comes from active manipulation of its objects. Dreyfus continues:

“To understand a hammer, for example, does not mean to know that hammers have such and such properties and that they are used for certain purposes—or that in order to hammer one follows a certain procedure, i.e., understanding a hammer at its most primordial means knowing how to hammer.” 

Understanding, then, is not necessarily a question of knowledge but rather of experience. Knowing what a hammer is (indeed, the question of what "is" means was the focus of Heidegger's work) says nothing about the essence of the hammer. But knowing how to use a hammer to drive a nail (or some other creative misuse) begins to get at real understanding of the hammer.

We can see how Heidegger's call for a movement away from the theoretical and toward the practical makes sense as a reaction against pure theory-driven philosophy. He wanted to show how our relationship to the world cannot be reduced to passive observation and modeling. 

At the same time Heidegger was advocating practical engagement, the West was experiencing the rapid evolution of the perversion of practicality: capitalism. This is not meant to make a moral judgement against capitalism but simply to point out that much of the productivity fetishism we see in capitalist societies is a result of the pendulum of popular opinion swinging from theory to practice. In other words, it is not simply that theory is wrong but rather that it is unproductive in a capitalist system in which all time must be maximized toward tangible outputs (unless you're of the upper class, in which case productivity is not necessary). 

Enter modern startup culture. The pride we see in working 16-hour days, attending 3-day hackathons without sleeping, and launching as quickly as possible is not terribly surprising. In a certain sense, startup culture is the desire of capitalism realized: having an idea, raising capital, building it, profiting, and selling it to a bigger company. Everyone wins, right? Founders make money and the bigger company swallows another competitor, and sometimes enacts the ultimate form of capitalist osmosis: the acqui-hire.

Nor is it surprising to see how Lean methods were adopted into Lean Startup, and the all-too-common misconception that following this process will help teams move quicker and launch earlier. This view is certainly more of a desire than a reality—it is the desire for a tangible process to result in more productivity, faster. In reality, Lean and Lean Startup are more about continuous learning than rapidity. Quicker releases are simply a byproduct of the ability to make decisions based on actual evidence. 

I think it's time to re-evaluate the nature of the practical. It’s all too common to see small companies latch onto the tangibility of outputs rather than the value they bring. And all too often people think about tactical work as productive or practical, and knowledge work as theoretical and undesirable. This binary is problematic on many levels, the most obvious being that extreme fetishization of the practical is just as nonsensical as extreme fetishization of the theoretical. 



Information Cartilage

The following is an excerpt from a paper I submitted to the Journal of Information Architecture. I am interested in examining how to design flexible, adaptive information spaces that can account for new waves in technological evolution such as context-aware computing and artificial intelligence. 

Information spaces are changing more rapidly than we can design for them. As context-aware computing and artificial intelligence are advancing into consumer markets, the need to structure information and meaning around adaptive systems has never been more important. The past century has seen significant variation in the ways that information is ordered, disordered, constructed, and broken down. Information architecture is about creating order, but these new technological systems are calling for a form of order that borders on paradox: a flexible structure, an adaptive constant, an information cartilage. Context awareness and artificial intelligence are making us think about information in new ways; the big question, the one this paper will hopefully help answer or at least usefully frame, is how we design for such systems. 

A large part of the modern era was spent amassing things. The Industrial Revolution gave us the means of producing consumer products in large quantities, and capitalism gave us the motivation and means of self-justification for such excess. And this was not limited to consumer culture: art and literature also changed in the modernist era, especially high modernism, from relatively ordered realism to chaotic abstraction and purposeful transgression. What is unique about all these objects we collected is that they all existed within the bounds of physical space, at least until the mid-20th century. As information technology came into being, we saw a shift from physical objects to digital objects at the same time as we embraced disorder in art on a larger scale than ever before. It seems cliché to refer to an ‘information explosion,’ as it has become wrapped up in everyday life to the extent that we don’t notice it anymore, but we must note that the tendency toward production, coupled with the ability to transgress the bounds of physical space, has resulted in an abundance of information analogous to our abundance of physical objects. As “informational objects” became a reality, we were already conditioned to think in terms of disorder. So we collected them, cherished them, fetishized them, but we didn’t do a great job of organizing them.

The problem of too many physical objects, as compared to that of too much information, is a question of organization rather than space. It seems unlikely that we will run out of space for our digital objects because we have the capacity to create more space. The challenge, however, is to organize these objects in such a way that we can maintain volume and create meaningful associations. Without organization, information becomes a burden.

Information architecture was born out of the need to organize web-based information into a network of meaningful interactions. Mobile computing allowed digital information to slip off the desktop and into the pockets of users worldwide, resulting in a staggering amount of new sources, types, and potential categories of information. It created new information spaces that are not only digital but also transitory, fickle, and unpredictable. In the coming years, as artificial intelligence and contextual computing refine themselves, we are looking at yet another source of information that could prove even more transitory, fickle, and unpredictable. Information architects play a crucial role—if not the crucial role—in ensuring these informational objects will remain meaningful.

The success of this organizational project depends on our understanding of the interplay between physical and digital spaces, concentrating on two of the most interesting movements in computing, which have been gaining momentum for decades: artificial intelligence and context-aware computing. The new information spaces these movements create—i.e., adaptive spaces—call for a re-examination of how digital information relates to physical space. Using Jean Baudrillard and Martin Heidegger’s work as a basis, I argue that the ways we understand contextual and intelligent systems, and subsequently their ultimate success, depends on how their information is organized and its ability to adapt.