Timo Honkela, Chief Scientist of the Adaptive Informatics Research Centre at the Helsinki University of Technology, and head of the Computational Cognitive Systems group there, visited Nokia Research Center, Palo Alto, today, and presented a talk on Social Artificial Intelligence and Mobile Devices. Timo stressed the importance of viewing intelligence in a social context (vs. an individual context), and drew upon an emergentist perspective, with a linguistic bias, recognizing that every word we use reflects each person’s different experience, different models and different language. The biggest takeaway for me, having not paid much attention to artificial intelligence for the past 10 years or so, was Timo’s emphasis on the notion of a subjective vs. objective representation of reality, where meaning is negotiated through co-constructing shared models of the world.
Timo proposed an approach in which self-organizing maps are created through [the simulation of] a community of interacting agents, each with different perceptions, models and linguistic constructs regarding the world. He alluded to imitation and flocking behaviors, and how simple agents (e.g., birds) use strategies of separation, alignment and cohesion in order to maintain some semblance of order and make forward progress, and provided examples of projects in which camera[phone]s were used to collect images and labels to create or negotiate a shared understanding of some portion of the world. Examples include Pockets Full of Memories (in which museum visitors took photos of items from their pockets and associated labels with them), Manhattan Story Mashup (in which players received labels via mobile phone and had to take photos to correspond to those labels) and the Human Speechome Project (in which a bird’s-eye view of large portions of the audio and visual stimuli in the first three years of an infant’s life will be captured and analyzed).
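The three steering rules Timo alluded to are simple enough to sketch in a few lines. Here is a minimal, illustrative version of Reynolds-style flocking (the weights and radii below are my own placeholder values, not anything from the talk): each agent steers away from neighbors that are too close (separation), toward the average heading of nearby agents (alignment), and toward their center of mass (cohesion).

```python
import math

def step(boids, sep_r=1.0, neigh_r=3.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One synchronous flocking update.

    boids: list of (position, velocity) pairs, each an (x, y) tuple.
    Each boid adjusts its velocity by three local rules:
      separation -- push away from neighbors closer than sep_r,
      alignment  -- nudge toward the average velocity of neighbors within neigh_r,
      cohesion   -- drift toward the centroid of those neighbors.
    """
    updated = []
    for i, ((x, y), (vx, vy)) in enumerate(boids):
        sep = [0.0, 0.0]
        avg_v = [0.0, 0.0]
        center = [0.0, 0.0]
        n = 0
        for j, ((ox, oy), (ovx, ovy)) in enumerate(boids):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            if d < neigh_r:
                n += 1
                avg_v[0] += ovx; avg_v[1] += ovy
                center[0] += ox; center[1] += oy
                if 0 < d < sep_r:  # too close: steer directly away
                    sep[0] += (x - ox) / d
                    sep[1] += (y - oy) / d
        if n:
            vx += w_sep * sep[0] + w_ali * (avg_v[0] / n - vx) + w_coh * (center[0] / n - x)
            vy += w_sep * sep[1] + w_ali * (avg_v[1] / n - vy) + w_coh * (center[1] / n - y)
        updated.append(((x + vx, y + vy), (vx, vy)))
    return updated
```

Iterating `step` over a population of randomly placed agents produces the familiar emergent flock, which is the point Timo was making: globally coherent behavior arising from purely local interactions, with no agent holding an objective model of the whole.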
Timo noted that the key to [understanding] intelligence is action — and interaction. Pertti Huuskonen suggested that the key is in the repairs — detecting misunderstandings in interactions, and retrying (or trying new communicative actions). As a refuge (if not refuse) from artificial intelligence, this seems like a[nother] collection of AI-complete problems, but I have to say that the way Timo framed the problem was more appealing than some of the more objective formulations I typically associate[d] with the field when I was more immersed in such issues, and I suspect that pursuing this approach will yield useful results for social computing, if not artificial intelligence.
[Speaking of which, I see there is a Special Track on Social Semantic Collaboration (SEMSOC2007) at the upcoming 20th International Florida Artificial Intelligence Research Society Conference (FLAIRS 2007), with a submission deadline of Monday, November 20.]

Comments
One response to “Social Artificial Intelligence: Objective vs. Subjective Reality (and Representation)”
Fuzzy thinking in Berkeley