Pervasive 2007: Approaching Everywhere, Everyone and Everything


I just attended my first International Conference on Pervasive Computing (Pervasive 2007), and I have to agree with many of my friends who have been to both Pervasive and UbiComp that the conferences are very similar, differing primarily in the time of year they are convened. Both focus on computing technology as it migrates beyond the desktop and is increasingly on, with or near us as we navigate the physical world. Among the minor differences between Pervasive and UbiComp that I noticed are a slightly stronger emphasis on sensors and systems (vs. people and their experiences), a larger proportion of PWJs (people wearing [dress] jackets), and a tendency for the people asking questions after presentations to be rather senior members of the community (er, including me, though after raising questions about raising questions in my last blog post, I was a bit more self-conscious about raising questions at this conference). I also sensed a stronger European representation, which may account for one or more of the above-mentioned observations.

Everywhere
Adam Greenfield, author of Everywhere: The Dawning Age of Ubiquitous Computing, gave a great opening keynote on the social and ethical implications of ubiquitous computing. After surveying a number of examples highlighting the increasing pervasiveness of surveillance technologies (including a toilet that can survey its, er, “input”), he invoked Bentham and Foucault, noting “the power of the surveilling gaze in society to produce docile bodies” (reminiscent of the notions of infantilism he warned against during his ETech 2007 presentation). One of his most interesting observations was that “the same presentation I illustrated with prototypes at this time last year I now illustrate with commercial product shots” … suggesting the dawn may be closer than we think.

To me, the mark of a good keynote is that its elements are referenced, implicitly or explicitly, throughout the rest of the conference. Adam shared 5 principles that I and others thought about, talked about – and even asked questions about – throughout the conference:

  • default to harmlessness
  • be self-disclosing
  • be conservative of face (avoid unnecessarily embarrassing, humiliating or shaming people)
  • be conservative of time (be efficient, do not introduce unnecessary complication)
  • be deniable (as in plausible deniability or avoidable)

Adam noted how risk and safety are constructed differently from one culture to another, and suggested that addressing these principles effectively will require attention to the culture and context into which technology is introduced. During his talk, I noticed that many people were taking photos of him (and likely posting them to the Flickr pervasive07 photostream). I was reminded of David Brin’s Transparent Society, and got to thinking about (and questioning [this time captured by Aras Bilgen]) how well these principles can be applied to a world filled with personal mobile surveillance technologies (cameraphones and videophones), in which snippets of activities can be captured – and shared – by anyone, not just anything. That is, surveillance technologies will not simply be embedded in places and things (as has been the traditional focus of ubiquitous computing), but carried and/or worn by people. Perhaps Adam will address this in a future book, Everyone: The Rise of Radical Transparency.

There were a number of interesting talks following Adam’s keynote. Unfortunately, with my increasingly sore elbow (for which I’ll likely soon be seeking surgical and/or injectable intervention), my notes grew increasingly sparse over the course of the conference, and I spent a fair amount of time during sessions in hallway and email discussions with other members of the UbiComp steering committee about plans for UbiComp 2008 (about which I’ll post more later). Thus, I was not [fully] present for some talks by some friends, as well as the best paper (and runner-up) talks – essentially, not bringing all of who I am to attending the conference. So, although far from comprehensive, I’ll try to at least render comprehensible some of my notes on some of the talks that I enjoyed at the conference.

Augmenting, Looking, Pointing and Reaching Gestures to Enhance the Searching and Browsing of Physical Objects: David Merrill (MIT Media Lab, US) presented a system for augmenting natural gestures for physical world search. In one scenario, a user could wear a ring with an infrared transmitter and point at a bar-coded product label on a store shelf to get information about that product; lights on the shelf sensor would indicate whether the product was on a blacklist (e.g., a food containing known allergens) or whitelist. In another scenario, a user wore a modified Bluetooth headset with an infrared transmitter that could be used to infer the direction in which the user was looking (e.g., at a specific part on an automobile engine), which could then be communicated to a remote expert to help guide the user to a part to be inspected.

Reach out and touch: using NFC and 2D barcodes for service discovery and interaction with mobile devices: Stavros Garzonis (University of Bath, UK) presented a comparative evaluation of different mechanisms of using physical tags as a gateway to online services. Noting a prediction that 50% of mobile phones will have Near-Field Communications (NFC) capabilities by 2009, and the growing use of 2D barcodes, he and his colleagues conducted a comparative evaluation of how easy it was for users to connect with online services using mobile cameraphones equipped with NFC readers. They performed an evaluation aimed at Adam’s principle of “be conservative of time”, and found that connecting to online services by taking photos of 2D barcodes with a mobile phone is initially easier / faster than using a mobile phone to read NFC tags, but after training, users can use either mechanism with similar ease. However, NFC did not satisfy the “be conservative of face” or “default to harmlessness” principles very effectively: in a field experiment, users complained about embarrassing interactions, privacy concerns and security concerns when using NFC tags (echoing observations Adam had made about his experience with the joint Nokia / Mastercard / Citibank trial of NFC phone-enabled PayPass in New York City: “it is very, very hard to use, gives confounding feedback, and is more irritating than the transaction it was intended to replace”). The study participants developed techniques to adapt to these challenges, such as setting the NFC reader to “always on” and activating the keypad lock.
They also offered suggestions, such as using the NFC phones for access control, for accessing mobile services (e.g., electronic payments, ordering a taxi, finding out how long until the next bus), and for data storage, retrieval and transfer (e.g., exchanging business cards and bookmarking physical objects or locations). The experiments did not include the use of Bluetooth as a mechanism for connecting with online services; it would be interesting to include Bluetooth in future experiments, especially as the longer range may offer more opportunities for conserving face.

Combining Web, Mobile Phones and Public Displays in Large-Scale: Manhattan Story Mashup [slides]: Ville Tuulos (University of Helsinki, Finland) shared some experiences from Manhattan Story Mashup, a Nokia-sponsored event, which blended the web, mobile phones and large public displays during a 90-minute pervasive game in downtown Manhattan during the Come Out and Play Festival last September. The game attracted 165 web participants, who contributed 271 provocative sentences, from which nouns were selected and sent to the 184 mobile phone users, who were given 60 seconds to snap a photo related to the word; word and photo were then sent around to other players, who selected one of four possible words (including the target word) that the photo represented. If the guess was correct, the photographer and guesser both scored a point. 3142 photos were taken, and a total of 54,657 game events (including sentence submission, photo taking, guesses) were recorded during the period. In followup interviews, participants expressed appreciation for the immersiveness, fast tempo, creativity, teamwork, competition, thought provocation and freedom (through intentional ambiguity). I’m hoping the game will go on tour sometime in the future.
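The scoring rule described above can be sketched in a few lines; this is my own reconstruction (with hypothetical player names), not the authors’ code:

```python
def score_round(target_word, distractors, guess, scores, photographer, guesser):
    """Score one round: the guesser sees the photo plus four candidate
    words (the target plus three distractors); a correct guess earns
    both the photographer and the guesser a point."""
    options = [target_word] + list(distractors)
    assert guess in options, "guess must be one of the four presented words"
    if guess == target_word:
        scores[photographer] = scores.get(photographer, 0) + 1
        scores[guesser] = scores.get(guesser, 0) + 1
        return True
    return False

scores = {}
score_round("taxi", ["bridge", "pretzel", "siren"], "taxi", scores, "alice", "bob")
score_round("bridge", ["taxi", "deli", "pigeon"], "deli", scores, "alice", "carol")
print(scores)  # {'alice': 1, 'bob': 1}
```

Only the first round is guessed correctly, so only the photographer and guesser of that round score.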

PersonisAD: Distributed, Active, Scrutable Model Framework for Context-Aware Services: Bob Kummerfield (University of Sydney, Australia) introduced a framework for linking people, places, sensors and services in a scrutable way, i.e., a way that makes the what and why of context visible. Places are represented with variable resolution (“at work” vs. “Room 313” vs. lat/long coordinates), and people are represented (in part) through their preferences. Bob and his colleagues developed a music mix application that generated group playlists based on music preferences, reminding me (and others) of the MusicFX system that I worked on almost 10 years ago; the PersonisAD framework enabled them to code the application in 200 lines of Python (far less code than we wrote for MusicFX). I was also reminded of Quentin Jones’ P3 framework for connecting “people-to-people-to-geographical-places”, and wonder whether there might be synergies there.

An Exploration into Activity-Informed Physical Advertising Using PEST: Bo Begole and Kurt Partridge (PARC, US) noted that advertising is understudied in the HII (Human-Information Interaction) literature, predicted that advertising would be among the killer apps of ubicomp, invoked an economist’s perspective of value in exchanging marketing information (high or low value derived by senders and/or receivers), and highlighted the notion of “advertising as flirtation” recently articulated by James Morris during a PARC Forum presentation. Riffing on the model of advertising based on the context of users typing words into a search engine, Bo, Kurt and their colleagues explored the use of activities to establish a context for advertising. A mobile phone-based Proactive Experience Sampling Tool (PEST) was deployed to 6 people, which periodically asked them to label whatever activity they were engaged in. In one experimental condition, random advertisements were later shown to the participants, and they were asked how relevant and useful the ads would have been when they were engaged in that activity; in another, the self-reported activities were fed to a search engine, and advertisements from the results page were shown. In the most interesting and innovative condition, offering a twist on Wizard of Oz studies, the activities were fed to Amazon’s Mechanical Turk “artificial artificial intelligence” service — through which people are paid small sums of money to answer questions (providing, in effect, an inexpensive army of 100,000 prospective wizards) — along with the question of what product or service they would propose for that activity. The answers provided by the respondents (who were paid 10 cents per answer) were then fed into a search engine, and advertisements from those results pages were shown to participants.
In analyzing the relevance and usefulness of these approaches, the second condition (keyword) resulted in more relevant ads than the first condition (random), but the third (Mechanical Turk) approach did not show any improvement. None of the conditions seemed to generate significantly useful ads. Most interesting to me was their analysis of why Mechanical Turk didn’t prove to be a more effective approach. The army of low-paid wizards turned out to be somewhat unreliable, often sending the same response (e.g., “drinking coke”) or [more] random responses (such as “ZZ” or “a”) to any query, regardless of the activity labeled. Although the team came up with a two-tiered system for collecting 5 responses and offering those up to 10 other MT workers for votes (which, of course, turned out to be considerably more expensive, as it involved significantly more queries), it does seem that they generally got what they paid for, and demonstrated that any web 2.0 service is vulnerable to “gaming” behaviors. Still, I thought this was a really cool idea, and believe the general idea of utilizing web 2.0 services for HII user studies is vastly understudied (and may well be the basis for killer apps in ubicomp).
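The two-tiered scheme can be sketched roughly as follows; this is my reconstruction with made-up responses (the 10-cents-per-answer pricing is from the talk):

```python
from collections import Counter

PRICE_PER_ANSWER = 0.10  # 10 cents per Mechanical Turk response

def two_tier_filter(votes):
    """Second tier of the scheme described above: the candidate responses
    collected in the first round are put to a vote by other workers, and
    the most-voted response wins, filtering out junk like 'ZZ' or 'a'."""
    winner, _ = Counter(votes).most_common(1)[0]
    return winner

responses = ["drinking coke", "ZZ", "a", "engine repair manual", "drinking coke"]  # tier 1: 5 answers
votes = ["engine repair manual"] * 6 + ["drinking coke"] * 3 + ["ZZ"]              # tier 2: 10 votes
best = two_tier_filter(votes)
cost = (len(responses) + len(votes)) * PRICE_PER_ANSWER  # 15 paid queries per activity
print(best, round(cost, 2))  # engine repair manual 1.5
```

The cost arithmetic shows why the second tier is “considerably more expensive”: 15 paid queries instead of 5 for each labeled activity.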

Evaluating a Wearable Display Jersey for Augmenting Team Sports Awareness:
Andrew Vande Moere (University of Sydney, Australia) talked about the design and use of basketball jerseys augmented with electroluminescent wires and surfaces that signify the points scored and fouls committed by the wearers, along with an indication of time remaining – wearable public displays of effectiveness (in effect). Among the interesting findings was that sports players, while public figures, did not want some data exposed (e.g., physiological data), and that the jerseys turned out to be perceived as far more useful to the non-players (coach, referee, spectators) than the players … and with spectator sports, as with other professions involving public performances, it’s often difficult (for me) to determine who the players are playing for – themselves or their audience.

Inference Attacks on Location Tracks:
John Krumm (Microsoft Research, US) gave a characteristically engaging presentation of the results of an investigation he and his colleagues had done into whether / how anonymized location tracks reveal identity, and how much location data corruption can veil identity. One of the most interesting aspects of his presentation was a survey of studies demonstrating the low cost of privacy with respect to location data (e.g., the median price of revealing 28 days of location data collected about 74 student participants was only US$18, and participants in other studies reported being “unconcerned”) … which is good, because another interesting aspect of their work demonstrated that an algorithm for inferring home coordinates based on 2 weeks’ worth of GPS data, fed to a reverse lookup service, enabled them to identify between 5% and 13% of the participants’ homes (depending on what kind of service was used). They also found that a variety of obfuscation algorithms could reduce the accuracy of such home-finding “attacks” … but with a concomitant reduction in utility … and, as is so often the case with location-based applications, the utility vs. risk tradeoff will likely have to be determined on a case-by-case basis (for both specific applications and specific users). One of the questions asked after the presentation raised the issue of cultural perceptions of risk (relating to some cultural issues Adam had raised earlier); although the question focused on transnational cultural differences, I was thinking that significant intranational cultural differences exist, and the irony in the USA is that I suspect many people who “flee” to the suburbs are far more vulnerable to GPS-based home-finding “attacks” than city-dwellers (whose higher-density multi-family dwellings offer some cover) … and yet suburban and rural users may be one of the groups most likely to benefit from GPS-enabled location-based services.
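As a concrete illustration of the attack, here is a crude heuristic in its spirit (my own sketch, not Krumm’s actual algorithm): estimate home as the median of overnight GPS fixes, which would then be handed to a reverse lookup service.

```python
from statistics import median

def estimate_home(points):
    """Estimate home as the median latitude/longitude of fixes recorded
    overnight (midnight to 6 a.m.), when most people are at home; the
    result would then be fed to a reverse lookup service to recover an
    address. points is a list of (hour_of_day, lat, lon) tuples."""
    night = [(lat, lon) for hour, lat, lon in points if 0 <= hour < 6]
    if not night:
        return None
    return (median(lat for lat, _ in night), median(lon for _, lon in night))

track = [
    (2, 47.651, -122.143), (3, 47.652, -122.144), (5, 47.651, -122.143),  # overnight, at home
    (9, 47.620, -122.350), (13, 47.621, -122.349),                        # daytime, at work
]
print(estimate_home(track))  # (47.651, -122.143)
```

Using the median (rather than the mean) makes the estimate robust to the occasional stray overnight fix, which is one reason simple obfuscation by adding noise degrades utility faster than it defeats the attack.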

Weight-sensitive Foam to Monitor Product Availability on Retail Shelves: 
Christian Metzger (ETH Zurich, CH) presented a low-cost approach using weight-sensitive foam to enable a retail store shelf to take its own inventory. Among the interesting tidbits he shared were the high cost of out-of-stock conditions, which are estimated to apply to 5-10% of retail merchandise at any given time; the loss of sales due to out-of-stock products is estimated to be up to 4%, resulting in losses of between $7B and $12B in the US alone. Although some out-of-stock conditions are due to ordering problems and other factors, 38% are due to poor shelf replenishment policies and practices, with periods of out-of-stockness often lasting for many days. The work showed an unusually keen appreciation for the economic factors affecting the adoption of technologies, highlighting the costs of the pain (noted above) and the costs of the proposed solution ($6-12 per running meter of shelf space).

Assessing and optimizing the range of UHF RFID to enable real-world pervasive computing applications:
Steve Hodges (Microsoft Research, UK) presented an innovative approach to determining the range of RFID readings (a problem with which we wrestled mightily before – and during – our proactive display deployment at UbiComp 2003). Steve and his colleagues mounted a 14×14 array of tags on a robotic platform which could then be moved around a space. They were able to come up with a graph that only loosely conformed to the theoretical teardrop range boundaries typically drawn for RFID readings, but much more closely represented the kinds of noise that we typically found. Applying an attenuation thresholding method – progressively reducing the power to the antenna until no readings were detected – they were able to smooth out some of this noise to arrive at a pattern that seemed to strike a nice balance between the true readings (with noise) and the simpler theoretical graph typically drawn for RFID. While the system worked well for optimizing RFID in applications involving tagged objects, I suspect the method would be less applicable to applications involving tagging people … as a 14×14 array of people (or a 14×14 array of tags attached to a person) may introduce additional dimensions of noise (and subject impatience must be factored in).
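The attenuation thresholding step might be sketched like this (my reconstruction; the tag’s read behavior and the power levels are hypothetical):

```python
def attenuation_threshold(reads_at, power_levels):
    """Step the antenna power down from the highest level and return the
    lowest power at which the tag still reads. Tags close to the antenna
    keep reading at low power; marginal, noisy tags at the fringe of the
    read zone drop out early, smoothing the measured range boundary."""
    threshold = None
    for power in sorted(power_levels, reverse=True):
        if reads_at(power):
            threshold = power
        else:
            break
    return threshold

tag = lambda power_dbm: power_dbm >= 17  # hypothetical tag readable at >= 17 dBm
print(attenuation_threshold(tag, range(10, 31)))  # 17
```

Repeating this per grid position yields a per-tag threshold map, which is presumably how the noisy read boundary gets smoothed toward the theoretical one.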

There were lots of interesting demos and late breaking results, but I’m only going to mention two here (my elbow is getting sore again).

How much to bid in Digital Signage Advertising Auctions?: Jörg Müller (University of Muenster Institute for Geoinformatics, DE) shared some early explorations he and his colleagues are doing into a representation and set of algorithms for determining what kinds of advertisements to show on digital signs. The representation includes the number of people in front of the display, the time before an advertised event occurs, the distance to the event, and the user’s likely interest in the event. The algorithm is a second-price auction in which the bids are based on a pay-for-impression model. They have 8 screens they are testing on campus, and are hoping to test their representation and algorithms on a network of digital signs in a nearby urban center.
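A minimal sketch of the auction mechanism described above; the advertisers, rates and the particular context weighting are my own hypothetical examples:

```python
def second_price_auction(bids):
    """Second-price auction: the highest bidder wins the advertising slot
    but pays the second-highest bid (bids maps advertiser -> bid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

def impression_bid(rate_per_impression, people_in_front):
    """Pay-for-impression bid: scales with the number of people currently
    in front of the display (one context factor in the representation)."""
    return rate_per_impression * people_in_front

rates = {"concert": 0.05, "cinema": 0.03, "museum": 0.04}
bids = {ad: impression_bid(rate, people_in_front=4) for ad, rate in rates.items()}
winner, price = second_price_auction(bids)
print(winner, round(price, 2))  # concert 0.16
```

The second-price rule is the standard incentive for truthful bidding: the winner’s payment depends only on the runner-up’s bid, not their own.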

Using the Mobile Experience Engine (MEE) to Create Locative Audio Experiences: Geoffrey Shea, Tom Donaldson, Paula Gardner (Ontario College of Art & Design, Canada) presented MEE, a software development kit for creating mobile, media-rich applications. They demonstrated one project, Alter Audio, that was developed as part of the Mobile Digital Commons Network, in which students used MEE to produce a mobile application in which each device has an associated voiced note in the key of C. When another device running the application comes within Bluetooth range, its note is added to the mix to produce a random chord, so the chord each device plays depends on the other devices in the vicinity. Seeing (and hearing) the demo created a very zen-like experience. Geoffrey and Paula told me about another deployment they’ll be doing in Toronto in the near future, CitySpeak, that also looks / sounds very interesting.

The conference concluded with an amazing array of tutorials that, I estimate, collectively compressed hundreds of years of expertise (or perhaps thousands of years, when one considers the experts’ students, colleagues and [other] sources of inspiration) into 8 hours. I can’t possibly do justice to all the knowledge shared in those sessions, so I’ll simply focus on a collection of words I added to my vocabulary, books I added to my Amazon wish list, and videos I added to my YouTube favorites: odometry, trilateration, potentiometer, synaesthesia, piezoresistivity, transimpedance, peaky, unremarkability, emic and etic, hermeneutic ethnography, search for signification, intrinsically incomplete and essentially contestable but plausible stories, taxonomic vs. generative, ethnotechnical, poly-vocal or multiperspectival, transculturalism, "The whole purpose for ubiquitous computing, of course, are the applications" (Mark Weiser), formative vs. summative evaluation, generative vs. discriminative classifiers, maximal margins, middleware as "software sold to people who don’t know how to program by people who know how to program", semantic multiplicity, Herbert Clark’s Arenas of Language Use, and videos of automatic doors from Star Trek and Japan. A book will likely be published in the near future that will contain a more erudite elaboration on the wisdom from those sessions, but that’s about all I can manage at the moment.

[Update: Mark Medley has published a great article summarizing some of the highlights from the conference in the National Post: "Sensors to track your kids and more: The third wave of computing"]


Comments

3 responses to “Pervasive 2007: Approaching Everywhere, Everyone and Everything”

  1. Anne

    Thanks so much for posting these notes Joe! I’ve read through them twice and checked out a handful of links and have to admit that I’m more than a bit disappointed to see so much technological determinism after all these years.
    Now, I could have said technocentric which is also true, but it would miss an important point: this all seems to assume a steady march of technological progress with little resistance or perversion by actual people. With this also comes an assumption that if/when this tech reaches market it’s either “game-on!” or “game-over!” as if the market decides such things separately from us. The advertising papers really needed more of a critical perspective on this.
    I mean of course culture plays a part (duh!) but we need to give it more than lip service – just like we need to expand our understanding of context and power relations. And we really really need more empirical research.
    But that’s quite enough ranting 😉
    I quite like the NFC study because it openly acknowledges the limits of the technology as well as how study participants routed around it. The challenge, of course, will be to see what other researchers and designers learn from this… As for me, well I’m still waiting for the conference where critical theory and relentless empiricism meet ubicomp!

  2. Joe

    Anne: thank you for your characteristically insightful observations! Your name (and work) came up a few times in discussions during the conference, and I’m not at all surprised by your reaction. I do believe you are spot on with respect to the determinism underlying much of the work presented at the conference. However, I do think there were other counterexamples, e.g., the PEST paper by PARC highlighted the perversions or unanticipated consequences that can arise in opening up to a Web 2.0-style participatory system, and believe that the Manhattan Story Mashup was also a rather indeterminate undertaking.
    Having just finished the UbiComp PC meeting, I do think there will be at least one paper at UbiComp 2007 that you will enjoy … and perhaps a workshop on the topic of ubicomp technologies, cultures and power relations would be a good candidate for some future UbiComp or Pervasive conference … if we could only find the right organizer(s) :-).

  3. nicolas

    Joe, I’d be happy to discuss your perspective on Herbert Clark’s work sometime (given that I employed it in my own work).
    I’ll try to be at UbiComp 2007.