InterestMap: Harvesting Social Network Profiles for Recommendations
Hugo Liu
MIT Media Laboratory
20 Ames St., Cambridge, MA, USA
hugo@media.mit.edu
Pattie Maes
MIT Media Laboratory
20 Ames St., Cambridge, MA, USA
pattie@media.mit.edu
ABSTRACT
While most recommender systems continue to gather detailed models of their “users” within their particular application domain, they are, for the most part, oblivious to the larger context of the lives of their users outside of the application. What are they passionate about as individuals, and how do they identify themselves culturally? As recommender systems become more central to people’s lives, we must start modeling the person, rather than the user.
In this paper, we explore how we can build models of people outside of narrow application domains, by capturing the traces they leave on the Web, and inferring their everyday interests from this. In particular, for this work, we harvested 100,000 social network profiles, in which people describe themselves using a rich vocabulary of their passions and interests. By automatically analyzing patterns of correlation between various interests and cultural identities (e.g. “Raver,” “Dog Lover,” “Intellectual”), we built InterestMap, a network-style view of the space of interconnecting interests and identities. Through evaluation and discussion, we suggest that recommendations made in this network space are not only accurate, but also highly visually intelligible – each lone interest contextualized by the larger cultural milieu of the network in which it rests.
Keywords
User modeling, person modeling, recommender systems, item-item recommendation, social networks, collaborative filtering, cultural visualization.
Copyright is held by the author/owner(s).
Workshop: Beyond Personalization 2005
IUI'05, January 9, 2005, San Diego, California, USA
INTRODUCTION
Recommender systems (cf. Resnick & Varian, 1997) have thus far enjoyed remarkable practical and commercial success. They have become a mainstay of e-commerce sites such as Amazon and eBay for product recommendation; and recommenders have also been deployed to cater to subject domains such as books, music, tutoring, movies, research papers, and web pages. Most recommenders operate within a single application domain, and are powered by domain-specific data – either through explicitly given user profiles, or through implicitly gathered models of user behavior within the application framework. But why should recommenders be restricted to data gathered within the context of the application?
Enter the Web. Web-based communities are quite social and dynamic places – there are online chat forums, blogging and journaling sites, “rings” of personal web pages, and social network communities. In all of these communities, recommendations are happening “in the wild,” all of the time. With natural language processing and a bit of finesse, we might hope to harvest information from these sources and use them to construct richer models of people, of communities and their cultures, and to power new kinds of recommender systems whose recommendations are sourced from online trends and word-of-mouth. The idea that recommendations could be sourced from traces of social activity follows from Terveen & Hill (2001), who refer to their approach as social data mining. They have looked at mining web page recommendations from Usenet messages, and through the structural analysis of web pages.
In this work, we turn to web-based social networks such as Friendster[1], Orkut[2], and MySpace[3] as a source of recommendations for a broad range of interests, e.g. books, music, television shows, movies, sports, foods, and more. On web-based social networks, people not only specify their friends and acquaintances, but they also maintain an explicit, self-crafted run-down of their interests and passions, inputted through free-form natural language. Having harvested 100,000 of these social network profiles, we apply natural language processing to ground interests into vast ontologies of books, music, movies, etc. We also mine out and map a category of special interests, called “passions,” into the space of social and cultural identities (e.g. “Book Lover,” “Raver,” “Rock Musician”). By analyzing patterns of how these interests and identities co-occur, we automatically generated a network-style “map” of the affinities between different interests and identities, which we call an InterestMap. By spreading activation over the network (Collins & Loftus, 1975), InterestMap can be applied directly to make interest recommendations; due to InterestMap’s unique network topology, we show that recommendations produced by this method incorporate factors of identity and taste. Outside of recommendation, we are also exploring other applications for InterestMap, such as marketing and matchmaking.
This paper is structured as follows. First, we discuss the source and nature of the corpus of social network profiles used to build InterestMap. Second, we outline the approach and implementation of InterestMap. Third, we describe how InterestMap may be used for recommendations, and we present an evaluation of the system’s performance under this task. Fourth, we give further discussion for some lingering issues – the tradeoffs involved in using social network profiles to drive recommendations; and the implications of InterestMap’s network-style representation for explainability and trust. We conclude with a greater vision for our work.
SOCIAL NETWORK PROFILES
The recent emergence and popularity of web-based social network software (cf. boyd, 2004; Donath & boyd, 2004) such as Friendster, Orkut, and MySpace can be seen as a tremendous new source of subject domain-independent user models, which might be more appropriately termed, person models to reflect their generality. To be sure, well over a million self-descriptive personal profiles are available across different web-based social networks.
While each social network’s profile has an idiosyncratic representation, the common denominator across all the major web-based social networks we have examined is a category-based representation of a person’s broad interests, with the most common categories being music, books, movies, television shows, sports, and foods. Within each interest category, users are generally unrestricted in their input, but typically enumerate lists of items, given as fragments of natural language. Even within a particular category, these items may refer to different things; for example, under “books,” items may be an author’s last name, a book’s title, or some genre of books like “mystery novels,” so there may be some inference necessary to map these natural language fragments into a normalized ontology of items. Figure 1 shows the structure and contents of a typical profile on the Orkut social network.
Figure 1. A screenshot of a typical profile taken from the Orkut social network. The interest categories shown here are typical to most web-based social network profile templates.
Note also in Figure 1 that there is a special category of interests called “passions.” Among the social network profile templates we have examined, all of them have this special category, variously called “general interests,” “hobbies & interests,” or “passions.” Furthermore, this special category always appears above the more specific interest categories, as it does in Figure 1, perhaps to encourage the thinking that these passions are more general to a person than other sorts of interests, and are more central to one’s own self-concept and self-identification.
When mining social network profiles, we distinguish passions from other categories of more specific interests. With the hypothesis that passions speak directly to a person’s social and cultural identity, we map the natural language items which appear under this category into an ontology of identity descriptors. For example, “dogs” maps into “Dog Lover,” “reading” maps into “Book Lover,” and “deconstruction” maps into “Intellectual.” Items in the other categories are mapped into their respective ontologies of interest descriptors.
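As a concrete illustration of this mapping, a harvested profile can be thought of as a dictionary of free-text category lists, with the “passions” entries classified into identity descriptors via bags of trigger keywords (described further in the next section). The Python sketch below is minimal and illustrative; the keyword bags and descriptor names are stand-ins, not the actual ontology.

# Minimal sketch of mapping a profile's "passions" onto identity descriptors
# (hypothetical keyword bags; the real ontology holds 1,000 identity descriptors).

raw_profile = {
    "passions": ["dogs", "reading", "deconstruction"],
    "books": ["Tolstoy", "mystery novels"],
    "music": ["Miles", "hip-hop"],
}

identity_keywords = {
    "Dog Lover": {"dogs", "dog", "puppies"},
    "Book Lover": {"books", "reading", "novels", "literature"},
    "Intellectual": {"deconstruction", "philosophy", "theory"},
}

def map_passions(items):
    """Map free-text 'passions' items onto identity descriptors."""
    found = []
    for item in items:
        for identity, keywords in identity_keywords.items():
            if item.lower() in keywords:
                found.append(identity)
    return found

print(map_passions(raw_profile["passions"]))
# -> ['Dog Lover', 'Book Lover', 'Intellectual']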
In the following section, we describe how these profiles were harvested, normalized and correlated to build InterestMap.
THE INTERESTMAP APPROACH
The general approach we took to build InterestMap consists of four steps: 1) mine social network profiles; 2) extract out a normalized representation by mapping casually-stated keywords and phrases into a formal ontology of interest descriptors and identity descriptors; 3) augment the normalized profile with metadata to facilitate connection-making (e.g. “War and Peace” also causes “Leo Tolstoy,” “Classical Literature,” and other metadata to be included in the profile, at a discounted value of 0.5, for example); and 4) apply a machine learning technique to learn the semantic relatedness weights between every pair of descriptors. What results is a gigantic semantic network whose nodes are identity and interest descriptors, and whose numerically weighted edges represent strengths of semantic relatedness. Below, we give an implementation-level account of this process.
1 Building a Normalized Representation
Between January and July of 2004, we mined 100,000 personal profiles from two web-based social network sites, recording only the contents of the “passions” category and common interest categories, as only these are relevant to InterestMap. We chose two social networks rather than one, to attempt to compensate for the demographic and usability biases of each. One social network has its membership primarily in the United States, while the other has a fairly international membership. One cost to mining multiple social networks is that there is bound to be some overlap in their memberships (by our estimates, this is about 15%), so these twice-profiled members may have disproportionately greater influence on the produced InterestMap.
To normalize the representation of each profile, we implemented 2,000 lines of natural language processing code in Python. First, for each informally-stated list of interests, the particular style of delimitation had to be heuristically recognized. Common delimiters were commas, semicolons, character sequences (e.g. “ \…/”), new lines, commas in conjunction with the word “and,” and so on. A very small percentage of these “lists” of interests were not lists at all, so these were discarded.
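For concreteness, the delimiter-recognition step can be sketched as follows. This is a minimal sketch; the actual normalizer uses a larger and more forgiving set of heuristics, and the regular expressions here are illustrative assumptions.

import re

# Candidate delimiter patterns, tried in priority order (illustrative, not the full heuristic set).
DELIMITER_PATTERNS = [
    r"\s*;\s*",             # semicolons
    r"\s*,\s*(?:and\s+)?",  # commas, optionally followed by "and"
    r"\s*[\\/|.]{2,}\s*",   # decorative character runs such as " \.../ "
    r"\s*\n\s*",            # new lines
]

def segment_interest_list(text):
    """Heuristically split a free-form interest field into individual items."""
    for pattern in DELIMITER_PATTERNS:
        items = [i.strip() for i in re.split(pattern, text) if i.strip()]
        if len(items) > 1:   # accept the first delimiter that yields a real list
            return items
    return []                # probably not a list at all; the caller discards it

print(segment_interest_list("jazz, hiking, french films, and cooking"))
# -> ['jazz', 'hiking', 'french films', 'cooking']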
The newly segmented lists contained casually-stated keyphrases referring to a variety of things. They refer variously to authorship like a book author, a musical artist, or a movie director; to genre like “romance novels,” “hip-hop,” “comedies,” “French cuisine”; to titles like a book’s name, an album or song, a television show, the name of a sport, a type of food; or to any combination thereof, e.g. “Lynch’s Twin Peaks,” or “Romance like Danielle Steele.” To further complicate matters, sometimes only part of an author’s name or a title is given, e.g. “Bach,” “James,” “Miles,” “LOTR,” “The Matrix trilogy.” Then of course, the items appearing under “passions,” can be quite literally anything.
For a useful InterestMap, it is not necessary to be able to recognize every item, although the greater the recognition capability, the more useful will be the resulting InterestMap. To recognize the maximal number and variety of items, we created a vast formal ontology of 21,000 interest descriptors and 1,000 identity descriptors compiled from various comprehensive ontologies on the web for music, sports, movies, television shows, and cuisines, including The Open Directory Project[4], the Internet Movie Database[5], TV Tome[6], Wikipedia[7], All Music Guide[8], and AllRecipes[9]. The ontology of 1,000 identity descriptors required the most intensive effort to assemble together, as we wanted them to reflect the types of passions talked about in our corpus of profiles; this ontology was taken mostly from The Open Directory Project’s hierarchy of subcultures and hobbies, and finished off with some hand editing. To facilitate the classification of a “passions” item into the appropriate identity descriptor, each identity descriptor is annotated with a bag of keywords which were also mined out, so for example, the “Book Lover” identity descriptor is associated with, inter alia, “books,” “reading,” “novels,” and “literature.” To assist in the normalization of interest descriptors, we gathered aliases for each interest descriptor, and statistics on the popularity of certain items (most readily available in The Open Directory Project) which could be used for disambiguation (e.g. does “Bach” refer to “JS Bach” or to “CPE Bach”?).
Using this crafted ontology of 21,000 interest descriptors and 1,000 identity descriptors, the heuristic normalization process successfully recognized 68% of all tokens across the 100,000 personal profiles, committing 8% false positives across a randomly checked sample of 1,000 mappings. We suggest that this is a good result considering the difficulties of working with free text input and the enormous space of potential interests and passions. Once a profile has been normalized into the vocabulary of descriptors, it is expanded using metadata assembled along with the formal ontology. For example, a book implies its author, and a band implies its musical genre. Descriptors generated through metadata-association are included in the profile, but at a discount of 0.5 (read: they only count half as much). The purpose of doing this is to increase the chances that the learning algorithm will discover latent semantic connections.
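The alias lookup and discounted metadata expansion just described can be sketched as below; the alias table, the metadata table, and all weights other than the 0.5 discount are hypothetical stand-ins for the real ontology.

# Sketch of descriptor normalization and metadata expansion (hypothetical ontology fragments).

ALIASES = {                    # casual token -> canonical interest descriptor
    "lotr": "The Lord of the Rings",
    "war and peace": "War and Peace",
}

METADATA = {                   # descriptor -> associated metadata descriptors
    "War and Peace": ["Leo Tolstoy", "Classical Literature"],
}

METADATA_DISCOUNT = 0.5        # metadata descriptors count half as much

def normalize_and_expand(tokens):
    """Return a weighted descriptor profile from raw profile tokens."""
    profile = {}
    for token in tokens:
        descriptor = ALIASES.get(token.lower())
        if descriptor is None:
            continue                                   # unrecognized token
        profile[descriptor] = max(profile.get(descriptor, 0.0), 1.0)
        for meta in METADATA.get(descriptor, []):      # expand with discounted metadata
            profile[meta] = max(profile.get(meta, 0.0), METADATA_DISCOUNT)
    return profile

print(normalize_and_expand(["LOTR", "War and Peace"]))
# -> {'The Lord of the Rings': 1.0, 'War and Peace': 1.0,
#     'Leo Tolstoy': 0.5, 'Classical Literature': 0.5}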
2 Learning the Map of Interests and Identities
From these normalized profiles, we wish to learn the overall strength of the semantic relatedness of every pair of descriptors, across all profiles, and use this data to build InterestMap’s network graph. Our choice to focus on the similarities between descriptors rather than user profiles reflects an item-based recommendation approach such as that taken by Sarwar et al. (2001).
Technique-wise, the idea of analyzing a corpus of profiles to discover a stable network topology for the interrelatedness of interests is similar to how latent semantic analysis (Landauer, Foltz & Laham, 1998) is used to discover the interrelationships between words in the document classification problem. For our task domain though, we chose to apply an information-theoretic machine learning technique called pointwise mutual information (Church et al., 1991), or PMI, over the corpus of normalized profiles. For any two descriptors f1 and f2, their PMI is given in equation (1).
PMI(f1, f2) = log2 [ P(f1, f2) / (P(f1) P(f2)) ]     (1)
Looking at each normalized profile, the learning program judges each possible pair of descriptors in the profile as having a correlation, and updates that pair’s PMI.
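A minimal sketch of this learning pass is given below. It estimates descriptor probabilities by counting profile occurrences; smoothing, the metadata discounts, and the exact thresholds used in the real system are omitted, and the toy profiles are invented.

import math
from collections import Counter
from itertools import combinations

def learn_pmi(profiles):
    """Estimate pairwise PMI over a list of profiles (each a set of descriptors)."""
    n = len(profiles)
    single = Counter()
    pair = Counter()
    for descriptors in profiles:
        single.update(descriptors)
        pair.update(frozenset(p) for p in combinations(sorted(descriptors), 2))

    pmi = {}
    for p, joint_count in pair.items():
        f1, f2 = tuple(p)
        p_joint = joint_count / n
        pmi[(f1, f2)] = math.log(p_joint / ((single[f1] / n) * (single[f2] / n)), 2)
    return pmi

profiles = [
    {"Sonny Rollins", "Jazz", "Book Lover"},
    {"Sonny Rollins", "Jazz"},
    {"Book Lover", "Samba Music"},
]
weights = learn_pmi(profiles)
strong_edges = {k: v for k, v in weights.items() if v > 0.0}  # crude minimum-strength threshold
print(strong_edges)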
What results is a 22,000 x 22,000 matrix of PMIs. After filtering out descriptors which have a completely zeroed column of PMIs, and applying thresholds for minimum connection strength, we arrive at a 12,000 x 12,000 matrix (of the 12,000 descriptors, 600 are identity descriptors), and this is the complete form of the InterestMap. Of course, this is too dense to be visualized as a semantic network, but we have built less dense semantic networks from the complete form of the InterestMap by applying higher thresholds for minimum connection strength. Figure 2 is a visualization of a simplified InterestMap.
Figure 2. A screenshot of an interactive visualization program, running over a simplified version of InterestMap (weak edges are discarded, and edge strengths are omitted). The “who am i?” node is an indexical node around which a person is “constructed.” As interests are attached to the indexical, correlated interests and identity descriptors are pulled into the visual neighborhood.
3 Network Topology
Far from being uniform, the resultant InterestMap has a particular topology, characterized by two confluence features: identity hubs, and taste cliques.
Identity hubs are identity descriptor nodes which behave as “hubs” in the network, being more strongly related to more nodes than the typical interest descriptor node. They exist because the ontology of identity descriptors is smaller and less sparse than the ontology of interest descriptors; each identity descriptor occurs in the corpus on average 18 times more frequently than the typical interest descriptor. In InterestMap, identity hubs serve an indexical function. They give organization to the forest of interests, allowing interests to cluster around identities. What kinds of interests do “Dog Lovers” have? This type of information is represented explicitly by identity hubs.
Another confluence feature is a taste clique. Visible in Figure 2, for example, we can see that “Sonny Rollins” is straddling two cliques with strong internal cohesion. While the identity descriptors are easy to articulate and can be expected to be given in the special interests category of the profile, tastes are often a fuzzy matter of aesthetics and may be harder to articulate using words. For example, a person in a Western European taste-echelon may fancy the band “Stereolab” and the philosopher “Jacques Derrida,” yet there may be no convenient keyword articulation to express this. However, when the InterestMap is learned, cliques of interests seemingly governed by nothing other than taste clearly emerge on the network. One clique, for example, seems to demonstrate a Latin aesthetic: “Manu Chao,” “Jorge Luis Borges,” “Tapas,” “Soccer,” “Bebel Gilberto,” “Samba Music.” Because the cohesion of a clique is strong, taste cliques tend to behave much like singular identity hubs in their impact on network flow.
In the following section, we discuss how InterestMap may be used for recommendations, and evaluate the impact that identity hubs and taste cliques have on the recommendation process.
USING INTERESTMAP FOR RECOMMENDATIONS
InterestMap can be applied in a simple manner to accomplish several tasks, such as identity classification, interest recommendation, and interest-based matchmaking. A unique feature of InterestMap recommendations over straight interest-item to interest-item recommendations is the way in which identity and tastes are allowed to exert influence over the recommendation process. The tail end of this section describes an evaluation which demonstrates that identity and taste factors can improve performance in an interest recommendation task.
1 Finding Recommendations by Spreading Activation
Given a seed profile which represents a new user, the profile is normalized into the ontology of interest descriptors and identity descriptors, as described in Section 3.1. The normalized profile is then mapped onto the nodes of the InterestMap, leading to a certain activation pattern of the network.
With InterestMap, we view interest recommendation as a semantic context problem. By spreading activation (Collins & Loftus, 1975) outward from these seed nodes, a surrounding neighborhood of nodes which are connected strongly to the seed nodes emerges. As the distance away from the seed nodes increases (in the number of hops away), activation potential decays according to some discount (we have been using a discount of 0.75). The semantic neighborhood defined by the top N most related interest descriptor nodes corresponds with the top N interest recommendations produced by the InterestMap recommender.
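A minimal sketch of this spreading activation step follows; the graph encoding, decay handling, and tie-breaking are simplified relative to the actual system, and the toy graph is purely illustrative.

from collections import defaultdict

def spread_activation(graph, seeds, discount=0.75, hops=2):
    """Spread activation outward from seed nodes over a weighted descriptor graph.

    graph: node -> {neighbor: edge strength}, seeds: node -> initial activation.
    """
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(hops):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            for neighbor, strength in graph.get(node, {}).items():
                next_frontier[neighbor] += energy * strength * discount  # decay per hop
        for node, energy in next_frontier.items():
            activation[node] += energy
        frontier = next_frontier
    return activation

def recommend(graph, seeds, top_n=5):
    """Top-N interest recommendations: the most activated non-seed nodes."""
    activation = spread_activation(graph, seeds)
    candidates = [(node, a) for node, a in activation.items() if node not in seeds]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)[:top_n]

toy_graph = {
    "Jazz": {"Sonny Rollins": 0.8, "Miles Davis": 0.7},
    "Sonny Rollins": {"Jazz": 0.8, "Saxophone": 0.6},
}
print(recommend(toy_graph, {"Jazz": 1.0}, top_n=3))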
Another straightforward application of InterestMap is identity classification. A subset of the semantic neighborhood of nodes resulting from spreading activation will be identity descriptor nodes, so the most proximal and strongly activated of these can be thought of as recognized identities. Identity classification with InterestMap can be useful in marketing applications because it allows a distributed interest-based representation of a person to be summarized into a more concise demographic or psychographic grouping.
Finally, we are experimenting with InterestMap for interest-based matchmaking, which may be useful for making social introductions. To calculate the affinity between two people, two seed profiles lead to two sets of network activations, and the strength of the contextual overlap between these two activations can be used as a coarse measure of how much two people have in common.
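The overlap between two activation patterns can be measured in several ways; the sketch below uses a simple cosine-style overlap over the activation dictionaries, which is one plausible choice rather than the system's exact measure.

import math

def activation_overlap(act_a, act_b):
    """Coarse affinity between two spreading-activation patterns (node -> activation)."""
    shared = set(act_a) & set(act_b)
    dot = sum(act_a[node] * act_b[node] for node in shared)
    norm_a = math.sqrt(sum(v * v for v in act_a.values()))
    norm_b = math.sqrt(sum(v * v for v in act_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0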
2 Evaluation
We evaluated the performance of spreading activation over InterestMap in the interest recommendation task. In this evaluation, we introduced three controls to assess two particular features: 1) the impact that identity hubs and taste cliques have on the quality of recommendations; and 2) the effect of using spreading activation rather than a simple tally of PMI scores. In the first control, identity descriptor nodes are simply removed from the network, and spreading activation proceeds as usual. In the second control, identity descriptor nodes are removed, and n-cliques[10] where n>3 are weakened[11]. The third control does not do any spreading activation, but rather, computes a simple tally of the PMI scores generated by each seed profile descriptor for each of the 11,000 or so interest descriptors. We believe that this successfully emulates the mechanism of a typical non-spreading activation item-item recommender because it works as a pure information-theoretic measure.
We performed five-fold cross validation to determine the accuracy of InterestMap in recommending interests, versus each of the three control systems. The corpus of 100,000 normalized and metadata-expanded profiles was randomly divided into five segments. One-by-one, each segment was held out as a test corpus and the other four were used to train an InterestMap using PMI.
Within each normalized profile in the test corpus, a random half of the descriptors were used as the “situation set” and the remaining half as the “target set.” Each of the four test systems uses the situation set to compute a complete recommendation— a rank-ordered list of all interest descriptors; to test the success of this recommendation, we calculate, for each interest descriptor in the target set, its percentile ranking within the complete recommendation list. The overall accuracy of recommendation is the arithmetic mean of the percentile scores generated for each interest descriptor of the target set.
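The percentile-ranking score can be computed roughly as below; tie handling and the exact averaging over profiles and folds are simplified, and the example ranking is invented.

def recommendation_accuracy(ranked_descriptors, target_set):
    """Mean percentile ranking of held-out target descriptors in a rank-ordered recommendation list."""
    n = len(ranked_descriptors)
    position = {d: i for i, d in enumerate(ranked_descriptors)}
    scores = []
    for target in target_set:
        if target in position:
            # 1.0 means the target was ranked at the very top, 0.0 at the very bottom.
            scores.append(1.0 - position[target] / max(n - 1, 1))
    return sum(scores) / len(scores) if scores else 0.0

ranking = ["Jazz", "Sonny Rollins", "Samba Music", "Knitting"]
print(recommendation_accuracy(ranking, {"Sonny Rollins", "Knitting"}))  # roughly 0.33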
We opted to score the accuracy of a recommendation on a sliding scale, rather than requiring that descriptors of the target set be guessed exactly within n tries, because the size of the target set is so small with respect to the space of possible guesses that accuracies would be too low and standard errors too high for a good performance assessment. For the InterestMap test system and control test systems #1 (Identity OFF) and #2 (Identity OFF and Taste WEAKENED), the spreading activation discount was set to 0.75. The results of five-fold cross validation are reported in Figure 3.
Figure 3. Results of five-fold cross-validation of InterestMap and three control systems on a graded interest recommendation task.
The results demonstrate that on average, the original InterestMap recommended with an accuracy of 0.86. In control #1, removing identity descriptors from the network not only reduced the accuracy to 0.81, but also increased the standard error by 38%. In control #2, removing identity descriptors and weakening cliques further deteriorated accuracy slightly, though insignificantly, to 0.79. When spreading activation was turned off, neither identity hubs nor taste cliques could have had any effect, and we believe that is reflected in the lower accuracy of 0.73. However, we point out that since control #3’s standard error has not worsened, its lower accuracy should be due to overall weaker performance across all cases rather than being brought down by exceptionally weak performance in a small number of cases.
We believe that the results demonstrate the advantage of spreading activation over simple one-step PMI tallies, and the improvements to recommendation yielded by identity and taste influences. Because activation flows more easily and frequently through identity hubs and taste cliques than through the typical interest descriptor node, the organizational properties of identity and taste yield proportionally greater influence on the recommendation process; this, of course, is only possible when spreading activation is employed.
DISCUSSION
In this section, we discuss some of the advantages and disadvantages of using social network profiles to drive recommendations, and the implications of InterestMap’s network-style view of the interest space for explainability and trust.
1 Tradeoffs in Using Social Network Profiles to Drive Recommendations
The harvesting of social network profiles for recommendations involves several important tradeoffs to be considered.
Fixed Ontology versus Open-ended Input. While domain-specific behavior-based recommenders model user behavior over a predetermined ontology of items (e.g. a purchase history over an e-commerce site’s ontology of products; a rating history over an application’s ontology of movies), items specified in a social network profile are open-ended. Granted, in the normalization of a profile, items do eventually get mapped into an ontology; still, there remain the psychological priming effects of a user working over the artifacts of a fixed ontology as he/she is composing ratings. For example, in a movie domain, a user may choose to rate a movie because of the way the movies are browsed or organized, and may find movies to rate which the user has long forgotten and is surprised to see in the ontology. In filling out the movies category in a social network profile, there is no explicit display of a movie ontology to influence user input, and a user cannot input movies which he/she has long since forgotten.
This generates the following tradeoff: Recommenders based on domain-specific behaviors will be able to recommend a greater variety of items than open-ended-input-based recommenders, including the more obscure or not entirely memorable items, because the application’s explicit display of those items will remind a user to rate them. On the other hand, open-ended input may tend to recommend items which are more memorable, more significant, or possess greater communicative value. This is especially true for social network profiles, where users have an explicit intention to communicate who they are through each of the interest descriptors they specify. We suggest that high communicative value adds a measure of fail-softness to recommendations. For example, it might be easier to rationalize or forgive the erroneous recommendation of a more prominent item like “L.v. Beethoven’s Symphony No. 5” to a non-classical-music-lover than an equally erroneous recommendation of a more obscure or arbitrary item like “Max Bruch’s Op. 28.”
Socially Costly Recommendation. The social cost paid by a user in producing a “rating” can greatly affect the quality and nature of recommendations. To begin with, in some domain-specific behavior-based recommender systems, the profile of user behavior is gathered implicitly and this profile is kept completely private. Here there is no cost paid by a user in producing a “rating.” In a second case, some domain-specific recommender systems make their users’ ratings publicly viewable. The introduction of the publicity dimension is likely to make a user more conscious about the audience for the ratings, and more careful about the ratings that he/she produces; thus, the user is paying some social cost, and as a result, we might expect the ratings to be less arbitrary than otherwise. Employing this same logic for monetary cost, we might expect Amazon recommendations based on purchases to be less arbitrary than recommendations based on products viewed.
Thirdly, in the case of social network profiles, the greatest cost is paid by a user in listing an item in his/her profile. Not only is the profile public, but it is viewed by exactly the people whose opinions the user is likely to care about most – his/her social circle; Donath & boyd (2004) report that a person’s presentation of self in profiles is in fact a strategic communication and social signalling game. Items chosen for display are not just any subset of possessed interests, but rather, are non-arbitrary items meant to be representative of the self; furthermore, users may consciously intend to socially communicate those items to their social circle.
The social cost dimension to recommendation produces another interesting tradeoff. The higher the social cost paid by the user in producing a rating, the more deliberate the ratings. So we can anticipate recommendations made using this data to be consistent in their social appropriateness. On the other hand, social stigma will tend to suppress the rating and recommendation of malapropos items; for instance, perhaps the cost of listing the publicly-derided but oft privately-appreciated “Britney Spears” in one’s profile is prohibitively high. Of course, these social pressures also manifest in real-life social recommendations, and the thought of recommending “Britney Spears” to someone you are not very comfortable with may be just as dissuasive.
2 Impact of Network-Style Views on Explainability and Trust
That a user trusts the recommendations served to him by a recommender system is important if the recommender is to be useful and adopted. Among the different facilitators of trust, Wheeless & Grotz (1977) identify transparency as a prominent desirable property. When a human or system agent discloses its assumptions and reasoning process, the recipient of the recommendation is likely to feel less apprehensive toward the agent and recommendation. Also in the spirit of transparency, Herlocker et al. (2000) report experimental evidence to suggest that recommenders which provide explanations of their workings experience a greater user acceptance rate than otherwise.
Unlike opaque statistical mechanisms like collaborative filtering (Shardanand & Maes, 1995), InterestMap’s mechanism for recommendation can be communicated visually as a large network of interests and identities. The cliques and idiosyncratic topology of this fabric of interests visually represents the common tendencies of a large group of people. For example, in Figure 2, it is plain to see that “Sonny Rollins” and “Brian Eno” are each straddling two different cliques of different musical genres. The rationale for each recommendation, visually represented as the spreading of flow across the network, is easily intelligible. Thus it may be easier for a user to visually contextualize the reasons for an erroneous recommendation, e.g. “I guess my off-handed taste for Metallica situated me in a group of metal heads who like all this other stuff I hate.”
The ability to interact with the InterestMap network space may also afford the system an opportunity to learn more intelligently from user feedback about erroneous recommendations. Rather than a user simply stating that she did not like a particular recommendation, she can black out or deprecate particular clusters of the network which she has diagnosed as the cause of the bad recommendation, e.g. “I’ll black out all these taste cliques of heavy metal and this identity hub of “Metal Heads” so the system will not make that mistake again.” Although we have not yet implemented such a capability in InterestMap, we hope to do so shortly.
Finally, the influence of identity hubs and taste cliques on recommendation adds a hierarchical organization to items, much as genre and other editorial categories do in feature-based recommenders. The difference is that identity hubs and taste cliques are more granular, taste cliques generally lack easily articulated names, and both are machine-discovered clusterings rather than products of editorial invention.
CONCLUSION
As recommender systems play ever-larger roles in people’s lives, providing serendipitous suggestions of things to do and people to meet, recommendation technology will have to be based on something other than domain-specific knowledge, which is facing a semantic interoperability crisis. To some degree, we will have to abandon user modeling in favor of person modeling, and cultural modeling. We hope that the work presented in this paper begins to illustrate a path in this direction. By harvesting the traces of how people behave in the wild on the Web and on their computers, we can build a more general model of their person. By looking at interests within the context of emergent cultural patterns, we find new bases for recommendation, driven by cultural identities, and modes of taste. And the best part of this new paradigm for recommendation is that it would be more intelligible and transparent to people, for we, as persons, are already well-equipped to understand interests in the context of cultural milieus.
ACKNOWLEDGMENTS
The authors would like to thank the blind reviewers for their helpful feedback, and Judith Donath for inspiring our social perspective on this work.
REFERENCES
[1] danah boyd: 2004, Friendster and publicly articulated social networks. Conference on Human Factors and Computing Systems (CHI 2004). ACM Press.
[2] K.W. Church, W. Gale, P. Hanks, and D. Hindle: 1991, Using statistics in lexical analysis. In Uri Zernik (ed.), Lexical Acquisition: Exploiting On-Line Resources to Build a Lexicon, pp. 115-164. New Jersey: Lawrence Erlbaum.
[3] A.M. Collins and E.F. Loftus: 1975, A spreading-activation theory of semantic processing. Psychological Review, 82, pp. 407-428.
[4] Judith Donath and danah boyd: 2004, Public displays of connection. BT Technology Journal 22(4).
[5] Mihaly Csikszentmihalyi: 1997, Finding Flow: The Psychology of Engagement with Everyday Life. New York: Basic Books, pp. 71-82.
[6] J. Herlocker, J. Konstan and J. Riedl: 2000, Explaining Collaborative Filtering Recommendations. Conference on Computer Supported Cooperative Work, pp. 241-250.
[7] T.K. Landauer, P.W. Foltz and D. Laham: 1998, An introduction to Latent Semantic Analysis. Discourse Processes, 25, pp. 259-284.
[8] P. Resnick and H.R. Varian: 1997, Recommender Systems. Communications of the ACM, 40(3), pp. 56-58.
[9] David A. Kolb: 1985, Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall.
[10] B.M. Sarwar et al.: 2001, Item-Based Collaborative Filtering Recommendation Algorithms. 10th Int'l World Wide Web Conference, ACM Press, pp. 285-295.
[11] U. Shardanand and P. Maes: 1995, Social information filtering: Algorithms for automating `word of mouth'. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 210-217.
[12] L.G. Terveen and W.C. Hill: 2001, Beyond Recommender Systems: Helping People Help Each Other. In J. Carroll (ed.), HCI in the New Millennium. Addison-Wesley.
[13] L. Wheeless and J. Grotz: 1977, The Measurement of Trust and Its Relationship to Self-disclosure. Communication Research 3(3), pp. 250-257.
What Would They Think? A Computational Model of Personal Attitudes

ABSTRACT
Understanding the personalities and dynamics of an online community empowers the community’s potential and existing members. This task has typically required a considerable investment of a user’s time combing through the community’s interaction logs. This paper introduces a novel method for automatically modeling and visualizing the personalities of community members in terms of their individual attitudes and opinions.
“What Would They Think?” is an intelligent user interface which houses a collection of virtual representations of real people reacting to what a user writes or talks about (e.g. a virtual Marvin Minsky may show a highly aroused and disagreeing face when you write “formal logic is the solution to commonsense reasoning in A.I.”). These “digital personas” are constructed automatically by analyzing personal texts (weblogs, instant messages, interviews, etc. posted by the person being modeled) using natural language processing techniques and commonsense-based textual-affect sensing.
Evaluations of the automatically generated attitude models are very promising. They support the thesis that the whole application can help a person form a deep understanding of a community that is new to them by constantly showing them the attitudes and disagreements of strong personalities of that community.
Categories and Subject Descriptors
H.5.2 [Information Interfaces and Presentation]: User Interfaces – interaction styles, natural language, theory and methods, graphical user interfaces (GUI); I.2.7 [Artificial Intelligence]: Natural Language Processing – language models, language parsing and understanding, text analysis.
General Terms
Algorithms, Design, Human Factors, Languages, Theory.
Keywords
Affective interfaces, memory, online communities, natural language processing, commonsense reasoning.
INTRODUCTION
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
IUI’04, January 13-16, 2004, Island of Madeira, Portugal.
Copyright 2004 ACM.
Entering an online community for the first time can be intimidating if a person does not understand the dynamics of the community and the attitudes and opinions espoused by its members. Right now, there seems to only be one option for these first-time entrants – to comb through the interaction logs of the community for clues about people’s personalities, attitudes, and how they would likely react to various situations. Picking up on social and personal cues, and overgeneralizing these cues into personality traits, we begin to paint a picture of a person so lucid that we seem to be able to converse with that person in our heads. Gaining understanding of the community in this manner is time consuming and difficult, especially when the community is complex. For the less dedicated, more casual community entrant, this approach would be undesirable.
Figure 1. Virtual personas representing members of the AI community react to typed text. Each virtual persona’s affective reactions are visualized by modulating graphical elements of the icon.
In our research, we are interested in giving people at-a-glance impressions of the attitudes of people in an online community so that they can more quickly and deeply understand the personalities and dynamics of the community.
We have built a system that can automatically generate a model of a person’s attitudes and opinions from an automated analysis of a corpus of personal texts, consisting of, inter alia, weblogs, emails, webpages, instant messages, and interviews. “What Would They Think?” (Fig. 1) displays a handful of these digital personas together, each reacting to inputted text differently. The user can see visually the attitudes and disagreements of strong personalities in a community. Personas are also capable of explaining why they react as they do, by displaying some text quoted from that person when the face is clicked.
To build a digital persona, the attitudes that a person exhibits in his/her personal texts are recorded into an affective memory system. Newly presented text triggers memories from this system and forms the basis for an affective reaction. Mining attitudes from text is achieved through natural language processing and commonsense-based textual affect sensing (Liu et al., 2003). This approach to person modeling is quite novel when compared to previous work on the topic (cf. behavior modeling, e.g. (Sison & Shimura, 1998), and demographic profiling, e.g. questionnaire-derived user profiles).
A related paper on this work (Liu, 2003b) gives a more thorough technical treatment of the system for modeling human affective memory from personal texts. This paper does not dwell on the implementation-level details of the system, but rather, describes the computational model of attitudes in a more practical light, and discusses how these models are incorporated to build the intelligent user interface “What Would They Think?”.
This paper is structured as follows. First, we introduce a computational model of a person’s attitudes, a system for automatically acquiring this model from personal texts, and methods for applying this model to predict a person’s attitudes. Second, we present how a collection of digital personas can portray a community in “What Would They Think?” and an evaluation of our approach. Third, we situate our work in the literature. The paper concludes with further discussion and presents directions for future work.
COMPUTING A PERSON’S ATTITUDES
Our approach to modeling attitudes is based on the analysis of personal texts using natural language parsing and the commonsense-based textual affect sensing work described in (Liu et al., 2003). Personal texts are broken down into units of affective memory, consisting of concepts, situations, and “episodes”, coupled with their emotional value in the text. The whole attitudes model can be seen as an affective memory system that valuates the affect of newly presented concepts, situations, and episodes by the affective memories they trigger.
In this section, we first present a bipartite model of the affective memory system. Second, we describe how such a model is acquired automatically from personal texts. Third, we discuss methods for applying the model to predict a user’s affective reaction to new texts. Fourth, we describe how some advanced features enrich our basic person modeling approach.
A Bipartite Affective Memory System
A person’s affective reaction to a concept, topic, or situation can be thought of as either instinctive, due to attitudes and opinions conditioned over time, or reasoned, due to the effect of a particularly vivid recalled memory. Borrowing from cognitive models of human memory function, attitudes that are conditioned over time can be best seen as a reflexive memory, while attitudes resulting from the recall of a past event can be represented as a long-term episodic memory (LTEM). Memory psychologist Endel Tulving equates LTEM with “remembering” and reflexive memory with “knowing” and describes their functions as complementary (Tulving, 1983). We combine the strengths of these two types of memories to form a bipartite, episode-reflex model of the affective memory system.
Affective long-term episodic memory
Long-term episodic memory (LTEM) is a relatively stable memory capturing significant experiences and events. The basic unit of memory captures a coherent series of sequential events, and is known as an episode. Episodes are content-addressable, meaning that they can be retrieved through a variety of cues encoded in the episode, such as a person, location, or action. LTEM can be powerful because even events that happen only once can become salient memories and serve to recurrently influence a person’s future thinking. In modeling attitudes, we must account for the influence of these particularly powerful one-time events.
In our affective memory system, we compute an affective LTEM as an episode frame, coupled with an affect valence score that best characterizes that episode. In Fig. 2, we show an episode frame for the following example episode: “John and I were at the park. John was eating an ice cream. I asked him for a taste but he refused. I thought he was selfish for doing that.”
Figure 2. An episode frame in affective LTEM.
As illustrated in Fig. 2, an episode frame decomposes the text of an identified episode into simple verb-subject-argument propositions like (eat John “ice cream”). Together, these constitute the subevents of the episode. The moral of an episode is important because the episode-affect can be most directly attributed to it. Extraction of the moral, or root cause, is done through heuristics which are discussed elsewhere (Liu, 2003b). Tulving’s encoding specificity hypothesis (1983) suggests that contexts such as date, location, and topic are useful to record because an episode is more likely to be triggered when current conditions match the encoding conditions. The affect valence score is a numeric triple representing (pleasure, arousal, dominance). This will be covered in more detail later in the paper.
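Concretely, an episode frame might be represented roughly as the structure below; the field names and the PAD values are illustrative assumptions based on the ice-cream example above, not the system's exact schema.

# Illustrative episode frame for the ice-cream episode above (field names and values are assumptions).
episode_frame = {
    "subevents": [                          # simple verb-subject-argument propositions
        ("be_at", "John and I", "park"),
        ("eat", "John", "ice cream"),
        ("ask_for", "I", "a taste"),
        ("refuse", "John", "a taste"),
    ],
    "moral": ("be_selfish", "John", None),  # root cause to which the episode-affect is attributed
    "context": {"location": "park", "topic": "John", "date": None},
    "affect_pad": (-0.4, 0.3, 0.2),         # (pleasure, arousal, dominance), illustrative values
}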
Affective reflexive memory
While long-term episodic memory deals in salient, one-time events and must generally be consciously recalled, reflexive memory is full of automatic, instant, almost instinctive associations. Whereas LTEM is content-addressable and requires pattern-matching the current situation with that of the episode, reflexive memory is like a simple lookup-table that directly associates a cue with a reaction, thereby abstracting away the content. In humans, reflexive memories are generally formed through repeated exposures rather than one-time events, though subsequent exposures may simply be recalls of a particularly strong primary exposure (Locke, 1689). In addition to frequency of exposures, the strength of an experience is also considered. Complementing the event-specific affective LTEM with an event-independent affective reflexive memory makes sense because there may not always be an appropriate distinct episode which shapes our appraisal of a situation; often, we react reflexively – our present attitudes deriving from an amalgamation of our past experiences now collapsed into something instinctive.
Because humans undergo forgetting, belief revision, and theory change, update policies for human reflexive memory may actually be quite complex. In our computational model, we adopt a simpler representation and update policy that is not cognitively motivated, but instead, exploits the ability of a computer system to compute an affect valence at runtime.
The affective reflexive memory is represented by a lookup-table. The lookup-keys are simple concepts which can be semantically recognized as a person, action, object, activity, or named event. These keys act as the simple linguistic cues that can trigger the recall of some affect. Associated with each key is a list of exposures, where each exposure represents a distinct instance of that concept appearing in the personal texts. An exposure, E, is represented by the triple: (date, affect valence score V, saliency S). At runtime, the affect valence score associated with a given conceptual cue can be computed using the formula given in Eq. (1).
[pic] (1)
where n = the number of exposures of the concept
This formula returns the valence of a conceptual cue averaged over a particular time period. The term, [pic], rewards frequency of exposures, while the term, [pic], rewards the saliency of an exposure. In this simple model of an affective reflexive memory, we do not consider phenomena such as belief revision, reflexes conditioned over contexts, or forgetting.
To give an example of how affective reflexive memories are acquired from personal texts, consider Fig. 3, which shows two excerpts of text from a weblog and a snapshot sketch of a portion of the resulting reflexive memory.
Figure 3. How reflexive memories get recorded from excerpts.
In the above example, two text excerpts are processed with textual affect sensing, and concepts, both simple (e.g. telemarketer, dinner, phone) and compound (e.g. telemarketer::call, interrupt::dinner, phone::ring), are extracted. The saliency of each exposure is determined by heuristics such as the degree to which a particular concept is topicalized in a paragraph. The resulting reflexive memory can be queried using Eq. (1). Note that while a query on 3 Oct 01 for “telemarketer” returns an affect valence score of (-.15, .25, .1), a query on 5 Oct 01 for the same concept returns a score of (-.24, .29, .11). Recalling that the valence scores correspond to (pleasure, arousal, dominance), we can interpret the second annoying intrusion of a telemarketer’s call as having conditioned a further displeasure and a further arousal to the word “telemarketer”.
Of course, concepts like “phone” and “dinner” also unintentionally inherit some negative affect, though with dinner, that negative affect is not as substantial because the saliency of the exposure is lower than with “telemarketer.” (“dinner” is not so much the topic of that episode as “telemarketer”).
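Since Equation (1) does not reproduce legibly above, the sketch below substitutes a plain saliency-weighted average of the exposures that fall within the query window. It illustrates the lookup-table mechanics and reproduces the two telemarketer valences quoted above, but the second exposure's values are back-solved for this example, and the real formula additionally rewards frequency of exposure.

from datetime import date

# Reflexive memory: concept -> list of exposures (date, PAD valence triple, saliency in [0, 1]).
# The telemarketer exposures are illustrative, with the second one back-solved from the text above.
reflexive_memory = {
    "telemarketer": [
        (date(2001, 10, 3), (-0.15, 0.25, 0.10), 0.9),
        (date(2001, 10, 5), (-0.33, 0.33, 0.12), 0.9),
    ],
    "dinner": [
        (date(2001, 10, 5), (-0.10, 0.10, 0.05), 0.3),
    ],
}

def query_valence(concept, on_or_before, memory=reflexive_memory):
    """Saliency-weighted average PAD valence of exposures up to the query date (a stand-in for Eq. 1)."""
    exposures = [(v, s) for d, v, s in memory.get(concept, []) if d <= on_or_before]
    if not exposures:
        return (0.0, 0.0, 0.0)
    total_saliency = sum(s for _, s in exposures)
    return tuple(
        sum(v[i] * s for v, s in exposures) / total_saliency
        for i in range(3)
    )

print(query_valence("telemarketer", date(2001, 10, 3)))  # only the first exposure contributes
print(query_valence("telemarketer", date(2001, 10, 5)))  # both exposures now contribute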
In summary, we have motivated and characterized the two components of the affective memory system: an episodic component emphasizing the affect of one-time salient memories, and a reflexive component, emphasizing instinctive reactions to conceptual cues that are conditioned over time. In the following subsection, we propose how this bipartite affective memory system can be acquired automatically from personal texts.
Model Acquisition from Personal Texts
The bipartite model of the affective memory system presented above can be acquired automatically from an analysis of a corpus of personal texts. Fig. 4 illustrates the model acquisition architecture.
Figure 4. An architecture for acquiring the affective memory system from personal texts.
Though there are some challenging tasks in the natural language extraction of episodes and concepts, such as the heuristic extraction of episode frames, these details are discussed elsewhere (Liu, 2003b). In this subsection, we focus on three aspects of model acquisition, namely, establishing the suitability criteria for personal texts, choosing an affective representation of attitudes, and assessing the affective valence of episodes and concepts.
What Personal Texts are Suitable?
In deciding the suitability of personal texts, it is important to keep in mind that we want texts that are both a rich source of opinion and amenable to natural language processing by the computer. First, texts should be first-person, opinion narratives. It is still rather difficult to extract a person's attitudes from a non-autobiographical text, because the natural language processing system would have to robustly decide which opinions belong to which persons (we save this for future work). It is also important that the text be of a personal nature, relating personal experiences or opinions; attitudes and opinions are not easily accessible in third-person texts or objective writing, especially for a rather naïve computer reading program. Second, texts should explore a sufficient breadth of topics to be interesting. An insufficiently broad model gives a poor and disproportionate sampling of a person and would hardly justify embodying such a model in a digital persona. It should be noted, however, that there can be plausible reasons to intentionally partition a person's text corpus into two or more digital personas. Perhaps it would be interesting to contrast an old Marvin Minsky versus a young one, or a Marvin who is passionate about music versus a Marvin who is passionate about A.I. Third, texts should cover everyday events, situations, and topics, because that is the discourse domain best recognized by the mechanism with which we will judge the affect of the text. Fourth, texts should ideally be organized into episodes occurring over a substantial period of time relative to the length of a person's life. This is a softer requirement, because it is still possible to build a reflexive memory without episode partitioning. Weblogs are an ideal input source because of their episodic organization, although instant messages, newsgroups, and interview transcripts are also good input sources because they are so often rich in opinion.
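As a rough illustration of how these suitability criteria might be screened automatically, the sketch below checks a corpus of dated entries for first-person voice, breadth of vocabulary, and episodic spread. The thresholds and proxies are illustrative assumptions, not part of the described system.

    def looks_suitable(entries):
        """entries: a list of (date, text) journal entries."""
        text = " ".join(t for _, t in entries).lower()
        words = text.split()
        if not words:
            return False
        first_person = sum(w in ("i", "i'm", "me", "my", "mine") for w in words) / len(words)
        distinct_days = len({d for d, _ in entries})
        type_token = len(set(words)) / len(words)   # crude proxy for breadth of topics
        return (first_person > 0.02                 # first-person, opinionated narrative
                and type_token > 0.3                # sufficient breadth of topics
                and distinct_days > 30)             # episodic, over a substantial period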
Representing Affect using the PAD Model
Affect valence pervading the proposed models could take one of two potential representations. It could take an atomistic view, in which emotions exist as part of some finite repertoire, as exemplified by Manfred Clynes's "sentics" schema (1977). Or it could take the form of a dimensional model, represented prominently by Albert Mehrabian's Pleasure-Arousal-Dominance (PAD) model (1995). In this model, the three nearly independent dimensions are Pleasure-Displeasure (i.e., feeling happy or unhappy), Arousal-Nonarousal (i.e., the degree to which one's attention is aroused), and Dominance-Submissiveness (i.e., the amount of confidence or lack of confidence felt). Each dimension can assume values from –100% to +100%, and a PAD valence score is a 3-tuple of these values (e.g. [-.51, .59, .25] might represent anger).
We chose a dimensional model, namely Mehrabian's PAD model, over the discrete canonical emotion model because PAD gives a sub-symbolic, continuous account of affect, in which different symbolic affects can be unified along one of the three dimensions. This has robustness implications for the affective classification of text. For example, in the affective reflexive memory, a conceptual cue may be variously associated with anger, fear, and surprise; these can be unified along the Arousal dimension of the PAD model, enabling the affect association to remain coherent and focused.
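For concreteness, here is a minimal sketch of the PAD valence representation used throughout this section, with each dimension constrained to [-1.0, +1.0]; the anger example is the one given in the text.

    from dataclasses import dataclass

    @dataclass
    class PAD:
        pleasure: float     # happy (+) vs. unhappy (-)
        arousal: float      # attention aroused (+) vs. calm (-)
        dominance: float    # confident (+) vs. submissive (-)

        def __post_init__(self):
            for v in (self.pleasure, self.arousal, self.dominance):
                assert -1.0 <= v <= 1.0, "PAD values range over [-1, +1]"

    anger = PAD(-0.51, 0.59, 0.25)   # example valence from the text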
Affective Appraisal of Personal Text
Judging the affect of a personal text involves three chief considerations. First, the mechanism for judging affect should be robust and comprehensive enough to correctly appraise the affect of a breadth of concepts. Second, to aid in the determination of saliency, the mechanism must be able to appraise the affect of very little text, such as a single sentence. Third, the mechanism should recognize specific emotions rather than convolving affect onto any single dimension.
Several common approaches fail to meet these criteria. The naïve keyword-spotting approach looks for surface language features such as mood keywords; however, it is not acceptably robust on its own because affect is often conveyed without mood keywords. Statistical affect classification using statistical learning models such as latent semantic analysis (Deerwester et al., 1990) generally requires large inputs for acceptable accuracy because it is a semantically weak method. Hand-crafted models and rules are not broad enough to analyze the desired breadth of phenomena.
To analyze personal text with the desired robustness, granularity, and specificity, we employ a model of textual affect sensing using real-world knowledge, proposed by Liu et al. (2003). In this model, defeasible knowledge about everyday people, things, places, events, and situations is leveraged to sense the affect of a text by evaluating the affective implications of each event or situation. For example, to evaluate the affect of "I got fired today," the model evaluates the consequences of this situation and characterizes it using negative emotions such as fear, sadness, and anger. This model, coupled with a naïve keyword-spotting approach, provides rather comprehensive and robust affective classification. Because the model uses knowledge rather than word statistics, it is semantically strong enough to evaluate text at the sentence level, classifying each sentence into a six-tuple of valences (each ranging from 0.0 to 1.0) for the six basic Ekman emotions of happy, sad, angry, surprised, fearful, and disgusted (an atomistic view of emotions) (Ekman, 1993). These emotions are then mapped to the PAD model.
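The text does not give the emotion-to-PAD mapping itself, so the sketch below shows one plausible scheme: each Ekman emotion gets an assumed PAD anchor point, and a sentence's six-tuple of emotion scores is folded into PAD as an intensity-weighted average. The anchor values and the averaging rule are assumptions made for illustration only.

    # Assumed PAD anchor points for the six Ekman emotions (illustrative values).
    EKMAN_TO_PAD = {
        "happy":     ( 0.8,  0.5,  0.4),
        "sad":       (-0.6, -0.3, -0.3),
        "angry":     (-0.5,  0.6,  0.3),
        "surprised": ( 0.1,  0.7,  0.0),
        "fearful":   (-0.6,  0.6, -0.4),
        "disgusted": (-0.5,  0.3,  0.1),
    }

    def ekman_to_pad(scores):
        """scores: dict of emotion -> intensity in [0.0, 1.0] for one sentence.
        Returns a PAD 3-tuple as an intensity-weighted average of the anchors."""
        total = sum(scores.values()) or 1.0
        return tuple(sum(scores[e] * EKMAN_TO_PAD[e][d] for e in scores) / total
                     for d in range(3))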
One point of potential paradox should be addressed. The real-world knowledge-based model of affect sensing is based on defeasible commonsense knowledge from the Open Mind Commonsense corpus (Singh et al., 2002), which is, in turn, gathered from a web community of some 11,000 teachers. Therefore, the affective assessment of text made by such a model represents the judgment of a typical person. However, sometimes a personal judgment of affect is contradicted by the typical judgment. Thus, it would seem paradoxical to attempt to learn that a situation has a personally negative affect when the typical person judges the situation as positive. To overcome this difficulty, we implement, in parallel, a mood keyword-spotting affect sensing mechanism to confirm or contradict the assessment of the primary model. In addition, we make the assumption that although a personal affect judgment may deviate from that of a typical person on small particulars, it will not deviate on average, when examining a large text. The implication is that at a slightly larger granularity than a sentence, the affective appraisal is more likely to be accurate. In fact, accuracy should increase in proportion to the size of the textual context being considered. The evaluation of Liu et al.'s affective navigation system (2003b) lends some indirect support to the idea that accuracy increases with the size of the textual context. In that user study, users found affective categorizations of textual units on the order of chapters to be more accurate and useful for information navigation than affective categorizations of small textual units such as paragraphs.
To assess the affect of a sentence, we factor in the affective assessment not only of the sentence itself, but also of its paragraph, section, and the whole journal entry or episode. Because so much context is factored into the affect judgment, only a modest amount of affective information can be learned from any given sentence. We therefore rely on the confirming effect of encountering an attitude multiple times. In exchange for learning only a modest amount from each sentence, we also minimize the impact of erroneous judgments.
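A minimal sketch of this multi-granularity blending, assuming a fixed weighting over the sentence, paragraph, section, and whole-entry appraisals; the weights are illustrative, not those of the implemented system.

    def sentence_affect(sentence_pad, paragraph_pad, section_pad, entry_pad,
                        weights=(0.4, 0.3, 0.2, 0.1)):
        """Blend a sentence's PAD appraisal with that of its enclosing contexts,
        so that no single sentence contributes too much on its own."""
        contexts = (sentence_pad, paragraph_pad, section_pad, entry_pad)
        return tuple(sum(w * pad[d] for w, pad in zip(weights, contexts))
                     for d in range(3))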
In summary, digital personas can be automatically acquired from personal texts. These texts should feature the explicit expression of the opinions of the person to be modeled, and should be of a form suited to the natural language processing. Processed texts are analyzed for their affective content at varying textual granularities (e.g. sentence-, paragraph-, and section-level) so as to minimize the possibility of error. This is necessary because our textual affect sensing tool evaluates a typical person's affective reaction to a text, not any particular person's. Affect valence is represented using the PAD dimensional model of affect, whose continuity allows affect valences to be summed together more easily. The resulting affect valence is recorded with a concept in the reflexive memory, and with an episode in the episodic memory.
Predicting Attitudes using the Model
Once the model has been acquired, the digital persona attempts to predict the attitudes of the person being modeled by offering an affective reaction when it is fed some new text. This reaction is based on how the new text triggers the reflex concepts and the recall of episodes in the affective memory system. When a reflex memory or episode is triggered, the affect valence score associated with that memory is attached to the affective context of the new text. The gestalt reaction to the new text is a weighted summation of the affect valence scores of the triggered memories.
The triggering process is somewhat involved. The triggering of episodes requires the detection of an episode in the new text, and the heuristic matching of this new episode frame against the library of episode frames. The range of concepts that can trigger a reflex memory is increased by the addition of conceptual analogy using OMCSNet, a semantic network of commonsense knowledge. The details of the triggering process are omitted here, but are discussed elsewhere (Liu, 2003b).
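The sketch below illustrates the gestalt reaction as a weighted summation over triggered memories. The episodic-memory interface (a matching() method and a per-episode valence attribute) and the relative weights are assumptions made for illustration; the actual triggering heuristics are those described in Liu (2003b).

    def react(new_text_cues, reflexive, episodic, period,
              reflex_weight=1.0, episode_weight=1.2):
        """Weighted summation of the valences of the triggered memories."""
        triggered = []
        for cue in new_text_cues:                         # conceptual cues extracted from the new text
            v = reflexive.valence_of(cue, *period)
            if v is not None:
                triggered.append((reflex_weight, v))
        for episode in episodic.matching(new_text_cues):  # heuristic episode-frame matching (assumed interface)
            triggered.append((episode_weight, episode.valence))
        if not triggered:
            return (0.0, 0.0, 0.0)                        # no memory was triggered
        total = sum(w for w, _ in triggered)
        return tuple(sum(w * v[d] for w, v in triggered) / total for d in range(3))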
This process of valuating some new text by triggering memories outside the context in which they were encoded, and inheriting their affect valences, is error prone. We rely on the observation that if many memories are triggered, their contextual intersection is more likely to be accurate. Ultimately, the performance of the digital persona in reproducing the attitudes of the person being modeled is determined by the breadth and quality of the corpus of personal texts gathered for that person. The digital persona cannot predict attitudes that are not explicitly exhibited in the personal texts.
Enriching the Basic Model
The basic model of a person's attitudes focuses on applying a person's self-described memories to valuate new textual episodes. While this basic model is sufficient to produce reactions to text for which some relevant personal memories exist, the generated digital personas are often quite "sparse" in what they can react to. We have proposed and evaluated some enhancements to the basic model. In particular, we have looked at how a person's attitude model can be enriched by the attitude models of people after whom the modeled person fashions himself or herself – perhaps a good friend or mentor. More technically, we mean an imprimer.
Marvin Minsky describes an imprimer as someone to whom one becomes attached (Minsky, forthcoming). He introduces the concept in the context of attachment-learning of goals, and suggests that imprimers help to shape a child's values. Imprimers can be a parent, a mentor, a cartoon character, a cult, or a person-type. The two most important criteria for an imprimer are that 1) the imprimer embodies some image, filled with goals, ideas, or intentions, and 2) one feels attachment to the imprimer.
We extend this idea into the affect realm and make the further claim that internal imprimers can do more than critique our goals; our attachment to them leads us to willfully emulate a portion of their values and attitudes. We keep a collection of these internal imprimers, and they help to support our identity. From the supposition that we conform to many of the attitudes of our internal imprimers, we hypothesize that affective memory models of these imprimers, if known, can complement a person's own affective memory model in helping to predict the person's attitudes. This hypothesis is supported by much work in psychoanalysis. Sigmund Freud (1991) wrote of a process he called introjection, in which children unconsciously emulate aspects of their parents, such as assuming their parents' personalities and values. Other psychologists have referred to introjection by terms like identification, internalization, and incorporation.
We propose the following model of internal imprimers to support attitude prediction. First, it is necessary to identify people, groups, and images that may possibly be a person's imprimers. We can do so by analyzing the affective memory. From a list of all conceptual cues in both the episodic and reflexive memories, we use semantic recognizers to identify all people, groups (e.g. "my company"), and images (e.g. "dog" => "dog-person") that, on average, elicit high Arousal and high Submissiveness, show a high frequency of exposure in the reflexive memory, and collocate in past episodes with self-conscious emotion keywords like "proud", "embarrassed", and "ashamed".
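A sketch of this identification heuristic, reusing the reflexive-memory structure from earlier; the thresholds, the per-episode interface (cues and words), and the semantic recognizer passed in are illustrative assumptions.

    SELF_CONSCIOUS_WORDS = {"proud", "embarrassed", "ashamed"}

    def candidate_imprimers(reflexive, episodic, period, is_person_group_or_image):
        """Return cues that elicit high Arousal and high Submissiveness, are
        frequently exposed, and collocate with self-conscious emotion words."""
        candidates = []
        for cue, exposures in reflexive.table.items():
            if not is_person_group_or_image(cue):         # semantic recognizer (assumed)
                continue
            v = reflexive.valence_of(cue, *period)
            if v is None:
                continue
            high_arousal = v[1] > 0.3                     # threshold is illustrative
            high_submissiveness = v[2] < -0.3             # low dominance = submissiveness
            frequent = len(exposures) >= 5
            collocates = any(cue in ep.cues and SELF_CONSCIOUS_WORDS & set(ep.words)
                             for ep in episodic.all_episodes())
            if high_arousal and high_submissiveness and frequent and collocates:
                candidates.append(cue)
        return candidates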
Figure 5. Affective models of internal imprimers, organized into personas, complement one's own affective model.
Once imprimers are identified, we also wish to identify the contexts in which an imprimer's attitudes show influence. As shown in Fig. 5, we propose organizing the internal imprimer space into personas representing different contextual realms. There is good reason to believe that humans organize imprimers by persona, because we are different people for different reasons. One might like Warren Buffett's ideas about business but probably not about cooking. Personas can also prevent internal conflicts by allowing a person to maintain separate systems of attitudes in different contexts. To identify an imprimer's context, we must first agree on an ontology of personas, which can be person-general (as the personas in Fig. 5 are) or person-specific. Once imprimers are associated with personas, we gather as much "personal" text from each imprimer as desired and acquire only the reflexive memory model, thus relaxing the constraint that texts have an episodic organization. In this augmented attitude prediction strategy (depicted in Fig. 6), when conceptual cues are unfamiliar to the self, we identify internal imprimers whose persona matches the genre of the new episode, and give them an opportunity to react to the cue. These affective reactions are multiplied by a coefficient representing the ability of the self to be influenced, and the resulting valence score is added to the episode. Rather than maintaining all attitudes in the self, internal imprimers enable judgments about certain things to be mentally outsourced to persona-appropriate imprimers.
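A sketch of the persona-matched fallback described above; the influence coefficient and the names are illustrative assumptions.

    def augmented_cue_valence(cue, period, self_memory, imprimers_by_persona,
                              episode_persona, influence=0.5):
        """If the self has no attitude toward a cue, let persona-matched
        imprimers react, scaled by an 'influenceability' coefficient."""
        own = self_memory.valence_of(cue, *period)
        if own is not None:
            return own                                    # the self already knows this cue
        for imprimer_memory in imprimers_by_persona.get(episode_persona, []):
            v = imprimer_memory.valence_of(cue, *period)
            if v is not None:
                return tuple(influence * x for x in v)    # outsourced, discounted judgment
        return None                                       # no one has an attitude about this cue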
We have implemented and evaluated the automated identification and model acquisition of imprimer personas in cases where the imprimers are people. Our implemented system is not yet able to use abstract non-person imprimers, e.g. "dog-person".
Figure 6. The imprimer-augmented attitude prediction strategy. Edges represent memory triggers.
In summary, we have presented a reflex-episode model of affective memory as a memory-based representation of a person's attitudes. The model can be acquired automatically from personal text using natural language processing and textual affect analysis. The model can be applied to new textual episodes to produce affective reactions that aim to emulate the actual reactions of the person being modeled (Fig. 6). We have also discussed how the basic attitudes model can be enriched with added information about the attitudes of the mentors of the person being modeled.
In the following section, we abstract away from the details of the attitudes model presented in this section to examine how digital personas can be portrayed graphically and how a collection of digital personas can portray the personalities of a community.
WHAT WOULD THEY THINK?
While modeling a person's attitudes is fun in the abstract, on its own it lacks the motivation and the verifiability of a real application of the theory and technology. What Would They Think? (Fig. 1) is a graphical realization of the modeling theory discussed in the previous section. What Would They Think? has been implemented and is currently being evaluated through user studies, though the underlying attitude models have already been evaluated in a separate study. In this section, we discuss the design of our interface, present some scenarios for its use, and report how this work has been evaluated.
Interface Design
Digital personas, acquired from an automatic analysis of personal text, are represented visually with pictures of faces, which occupy a matrix. Given some new text typed or spoken into the "fodder" box, each persona expresses an affective reaction through modulations in the graphical elements of its face icon. Each digital persona is also capable of some introspection. When clicked, a face can explain what motivated its reaction by displaying a salient quote from its personal text.
Why a static face? Visualizing a digital persona's attitudes and reactions with the face of the person being represented is better than doing so with something textual or abstract. There are several reasons why a face is a superior representation. People are already wired with a cognitive faculty for quickly recognizing and remembering faces, and a face acts as a unique cognitive container for a person's individual identity and personality. In the user task of understanding a person's personality, it is easier to attribute personality traits and attitudes to a face than to text or an abstract graphic. For example, people-watching is a pastime in which we imagine the personality and identity behind a stranger's face (Whyte, 1988). A community of faces is more socially evocative than either a community of textual labels or of abstract representations, for those representations are not designed as convenient containers of identity and personality.
Having decided on a face representation, should the face be abstract or real, static or animated? While verisimilitude is the goal of many facial interfaces, we must be careful not to portray more detail in the face than our attitude model is capable of elucidating, for the face is fraught with social cues, and unjustified cues could do more harm than good. By conveying attitudes through modulations in the graphical elements of a static face image, rather than through modulations of expression and gaze in an animated face, we emphasize the representational aspect of the face over the real. Scott McCloud (1993) has explored extensively the representational-vs.-real tradeoff of face drawing in comics.
Modulating the Face. In the expression of an affective reaction, we would like to preserve the detail of the continuous, dimensional output of the digital persona. The information should also be conveyed as intuitively as possible. An intuitive mapping may thus be best achieved through the use of visual metaphors for the affective states of the person (Lakoff & Johnson, 1980). We often describe a happy person as being "colorful," while a "face turning colorless" usually represents negative emotions like fear and melancholy. A person whose attention or passion is aroused has a face that "lights up." And someone who is not sure or confident about a topic feels "fuzzy" toward it. Taking these metaphors into consideration, a rather straightforward scheme is used to map the three affect dimensions of pleasure, arousal, and dominance onto the three graphical dimensions of color saturation, brightness, and focus, respectively. A pleasurable reaction is manifested by a face with high color saturation, while a displeasurable reaction maps to an unsaturated, colorless face. This mapping creates an implicit constraint that the face icon be in color. An aroused reaction results in a brightly lit icon, while a non-aroused reaction results in a dimly lit icon. A dominant (confident) reaction maps to a sharp, crisp image, while a submissive (unconfident) reaction maps to a blurry, unfocused image. While better mapping schemes may exist, our experience with users who have worked with this interface suggests that the current scheme conveys the affective reaction quite intuitively. This assumes that the original face icons are all of good quality – in color, bright enough, and in focus.
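A minimal sketch of this mapping, assuming a simple linear rescaling of each PAD dimension onto a unit-range rendering parameter.

    def face_rendering_params(pad):
        """Map a PAD 3-tuple onto the three graphical dimensions of the face icon."""
        def to_unit(v):
            return (v + 1.0) / 2.0                 # rescale [-1, +1] onto [0, 1]
        return {
            "saturation": to_unit(pad[0]),         # displeasure -> colorless face
            "brightness": to_unit(pad[1]),         # non-arousal -> dimly lit face
            "sharpness":  to_unit(pad[2]),         # submissiveness -> blurry, unfocused face
        }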
Populating a Community. An n x n matrix can hold a small collection of digital personas. The matrix can be configured either automatically or manually. Each matrix cell can be manually configured to house a digital persona by specifying a persona .MIND file and a face icon. A user can build and later augment a digital persona by specifying a weblog URL, a homepage URL, or some personal text pasted into the window. The matrix can also be configured automatically to represent a community. Plug-in scripts have been created to automatically populate the matrix with certain types of communities, including a niche community of weblogs known as a "blog ring," a circle of friends in the online networking community Friendster, a group of potential mates on an online dating website, and a usenet community.
Currently, only a blog ring community can generate fully specified digital personas. The Friendster and online dating communities provide only rather small profile texts; as a result, only a fairly shallow reflexive memory can be built, and the episodic memory is not meaningful for these texts. The personal texts for usenet communities are rather inconsistent in quality. For example, a usenet community based on questions and answers will not be as good a source of explicit opinions as a community based on the discussion of issues. Usenet communities also pose the problem of not providing a face icon for each user. In this case, the text of each person's name labels each matrix cell, accompanied by a default face icon in the background, which is necessary to convey the affective reaction.
Introspection. A digital persona is capable of some limited introspection. To inquire what motivated a persona to express a certain reaction to some text, the face icon can be clicked. An explanation will be offered in the form of a quote or a series of quotes from the personal text. These quotes are generated by backpointers to the text associated with each affective memory. For episodic memory, a single particularly salient episode can justify a reaction, while many quotes may be needed to justify a triggered reflex memory. With the capability for some introspection and explanation, a user can verify whether or not an affective reaction is indeed justified. This lends the interface some fail-softness, as a user will not be completely misled when a person's attitude is erroneously represented by the system.
Use Cases
How can a person use the What Would They Think? interface to understand the personalities and attitudes of people in a community? The system supports several use cases.
In the basic use case, the user, a new entrant to a community, is presented with an automatically generated matrix of some people in the community. The user can employ a hypothesis-testing approach to understanding personalities. The user types some very opinionated statements into the "fodder" box as a litmus test of the attitudes of the different people toward those statements. Faces lighting up in color versus fading to black and white provide an illustrative contrast of the strong disagreements in the community. A user can inquire into the source of strong opinions by clicking on a face and viewing a motivating quote. A user can also reorganize the matrix so as to cluster personalities perceived to be similar. Assuming that the personal texts for each persona in the community are of comparable length, depth, and quality, the user may notice over a series of interactions that certain personas are negative more often than not, or that certain personas are aroused more intensely and more often than others. These observations may lead a user to conclude that certain personalities are more cynical, and others more easily excitable.
Another use case is gauging the interests and expertise of people in a community. Because people generally talk more about things that interest them and have more to say on topics they are familiar with, a digital persona modeled on such texts will necessarily exhibit more reaction to texts that interest the person being modeled or fall within their area of expertise. In this use case, a user can, for example, copy-and-paste a news article into the fodder box and assess which personas are interested in, or have expertise toward, a particular topic.
A third use case involves community-assisted reading. The matrix fodder box can be linked to a cursor position in a text file browser. As a user reads through a webpage, story, or news article, he or she can get a sense of how the community might read and react to the text currently being read.
Evaluation
The quality of the attitude prediction in What Would They Think? has been formally evaluated through user studies. We are also currently conducting user studies to evaluate the effectiveness of the matrix interface in assisting a person to learn about and understand a community. These results will be available by press time.
The quality of attitude prediction was evaluated experimentally, working with four subjects. Subjects were between the ages of 18 and 28, and had kept diary-style weblogs for at least two years, with an average entry interval of three to four days. Subjects submitted their weblog URLs for the generation of affective memory models. An imprimer identification routine was run, and the examiner hand-picked the top imprimer for each of the three persona domains implemented: social, business, and domestic. A personal text corpus was built, and imprimer reflexive memory models were generated. The subjects were then engaged in an interview-style experiment with the examiner.
In the interview, subjects and their corresponding persona models were asked to evaluate 12 short texts representative of three genres: social, business, and domestic (corresponding to the ontology of personas in the tested implementation). The same set of texts was presented to each participant, and the examiner chose texts that were generally evocative. Subjects were asked to summarize their reaction to each text by rating three factors on Likert-5 scales:
Feel negative about it (1) … Feel positive about it (5)
Feel indifferent about it (1) … Feel intensely about it (5)
Don't feel control over it (1) … Feel control over it (5)
These factors are mapped onto the PAD valence format, assuming the following correspondence: 1 → –1.0, 2 → –0.5, 3 → 0.0, 4 → +0.5, and 5 → +1.0. Subjects' responses were not normalized. To assess the quality of attitude prediction, we record the spread between the human-assessed and computer-assessed valences:

spread_d = | v_d(human) – v_d(persona) |,  for each PAD dimension d        (2)

We computed the mean spread and standard deviation across all episodes along each PAD dimension. On the –1.0 to +1.0 valence scale, the maximum spread is 2.0. Table 1 summarizes the results.
Table 1. Performance of attitude prediction, measured as the spread between human and computer judged values.

|           | Pleasure    |           | Arousal     |           | Dominance   |           |
|           | mean spread | std. dev. | mean spread | std. dev. | mean spread | std. dev. |
| SUBJECT 1 | 0.39        | 0.38      | 0.27        | 0.24      | 0.44        | 0.35      |
| SUBJECT 2 | 0.42        | 0.47      | 0.21        | 0.23      | 0.48        | 0.31      |
| SUBJECT 3 | 0.22        | 0.21      | 0.16        | 0.14      | 0.38        | 0.38      |
| SUBJECT 4 | 0.38        | 0.33      | 0.22        | 0.20      | 0.41        | 0.32      |
Assuming that human reactions obey a uniform distribution over the Likert-5 scale, we give two baselines, each simulated over 100,000 trials. In BASELINE 1, the predicted valence is fixed at 0.0 (a neutral reaction to all text). In BASELINE 2, the predicted valence is given a random value over the interval [-1.0, 1.0] with a uniform distribution (an arbitrary reaction to all text). It should be pointed out, however, that in the context of an interactive sociable computer, BASELINE 1 is not a fair comparison, because it would never produce any behavior.
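The sketch below reproduces the Likert-to-valence correspondence, the spread metric of Eq. (2), and the two simulated baselines under the stated uniform-response assumption; the function names are illustrative.

    import random

    LIKERT_TO_VALENCE = {1: -1.0, 2: -0.5, 3: 0.0, 4: 0.5, 5: 1.0}

    def spread(human, predicted):
        """Eq. (2): per-dimension absolute difference between valences."""
        return abs(human - predicted)

    def simulate_baseline(predict, trials=100_000):
        """Mean spread when human responses are uniform over the Likert-5 scale."""
        total = 0.0
        for _ in range(trials):
            human = LIKERT_TO_VALENCE[random.randint(1, 5)]
            total += spread(human, predict())
        return total / trials

    baseline1 = simulate_baseline(lambda: 0.0)                        # neutral reaction to all text
    baseline2 = simulate_baseline(lambda: random.uniform(-1.0, 1.0))  # arbitrary reaction to all text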
On average, our approach performed noticeably better than both baselines, excelling particularly at predicting arousal, and having the most difficulty predicting dominance. The standard deviations were quite high, reflecting the observation that predictions were often either very close to the actual valence or very far from it. This can be attributed to one of several causes. First, multiple episodes described in the same journal entry may have caused wrong associations to be learned. Second, the reflexive memory model does not account for conflicting word senses. Third, the personal texts inputted for the imprimers often generated models skewed toward the positive or the negative because the text did not always have an episodic organization. While results along the pleasure and dominance dimensions are weaker, the arousal dimension recorded a mean spread of 0.22, suggesting that it alone may have immediate applicability.
Table 2. Performance of attitude prediction that can be attributed to imprimers and episodic memory.
In the experiment, we also analyzed how often the episodic memory, reflexive memory, and imprimers were triggered. Episodes were, on average, 4 sentences long. For each episode, reflexive memory was triggered an average of 21.5 times, episodic memory 0.8 times, and imprimer reflexive memory 4.2 times. To measure the effect of imprimers and episodic memories, we re-ran the experiment turning off imprimers only, episodic memory only, and then both. Table 2 summarizes the results.
These results suggest that the positive effect of episodic memory on the results was negligible. This certainly has to do with its low rate of triggering, and with the fact that episodic memories were weighted only slightly more heavily than reflexive memories. The low trigger rate of episodic memory can also be attributed to the strict criterion that three conceptual cues in an episode frame must trigger in order for the whole episode to trigger. These results also suggest that imprimers played a measurable role in improving performance, which is a very promising result.
Overall, the evaluation demonstrates that the proposed attitude prediction approach is promising, but needs further refinement. The randomized BASELINE 2 is a good comparison point when considering possible entertainment applications, whose interaction is more fail-soft. The approach does quite well against the active BASELINE 2, and is within the performance range of such applications. Taking into account possible erroneous reactions, we were careful to pose What Would They Think? as a fail-soft interface. The reacting faces are evocative, and encourage the user to click on a face for further explanation. Used in this manner, the application is fail-soft because users can decide, on the basis of the explanations, whether a reaction is justified or mistaken. We expect that ongoing studies of the usefulness of the What Would They Think? interface will show that its use is fail-soft: the generated reactions are evocative and encourage the user to further verify and investigate a purported attitude. We do not suggest that the approach is yet ready for fail-hard applications, such as deployment as a sociable software agent, because fallout (bad predictions) can be very costly in the realm of affective communication (Nass et al., 1994).
RELATED WORK
The community-of-personalities metaphor has been previously explored with Guides (Oren et al., 1990), a multi-character interface that assisted users in browsing a hypermedia database. Each guide embodied a specific character (e.g. preacher, miner, settler) with a unique "life story." Presented with the current document that a user is browsing, each guide suggested a recommended follow-up document, motivated by the guide's own point of view. Each guide's recommendations are based on a manually constructed bag of "interest" keywords.
Our affective-memory-based approach to modeling a person's attitudes appears to be unique in the literature. Existing approaches to person modeling are of two kinds: behavior modeling and demographic profiling. The former approach models the actions that users take within the context of an application domain. For example, intelligent tutoring systems track a person's test performance (Sison & Shimura, 1998), while online bookstores track user purchasing and browsing habits and combine this with collaborative filtering to group similar users (Shardanand & Maes, 1995). The latter approach uses gathered demographic information about a user, such as a "user profile," to draw generalized conclusions about user preferences and behavior.
Neither of the existing approaches is appropriate to the modeling of "digital personas." In behavior modeling, knowledge of user action sequences is generally only meaningful in the context of a particular application and does not significantly contribute to a picture of a person's attitudes and opinions. Demographic profiling tends to overgeneralize people by the categories they fit into, is not motivated by personal experience, and often requires additional user action such as filling out a user profile.
Memory-based modeling approaches have also been tried in related work on assistive agents. Brad Rhodes's Remembrance Agent (Rhodes & Starner, 1996) uses an associative memory to proactively suggest relevant information. Sunil Vemuri's project "What Was I Thinking?" (2004) is a memory prosthesis that records audio from a wearable device and intelligently segments the audio into episodes, allowing the "audio memory" to be browsed more easily.
CONCLUSION
Learning about the personalities and dynamics of online communities has, until now, been a difficult problem with no good technological solutions. In this paper, we propose What Would They Think?, an interactive visual representation of the personalities in a community. A matrix of digital personas reacts visually to what a user types or says to the interface, based on predictions of attitudes actually held by the persons being modeled. Each digital persona's model of attitudes is generated automatically from an analysis of some personal text (e.g. a weblog), using natural language processing and textual affect sensing to populate an associative affective memory system. The whole application enables a person to understand the personalities in a community through interaction rather than by reading narratives. Patterns of reactions observed over a history of interactions can illustrate qualities of a person's personality (e.g. negativity, excitability), their interests and expertise, and also qualities of the social dynamics in a community, such as the consensus and disagreements held by a group of individuals.
The automated, memory-based personality modeling approach introduced in this paper represents a new direction in person modeling. Whereas behavior modeling only yields information about a person within some narrow application context, and whereas demographic profiling paints an overly generalized picture of a person and often requires a profile to be filled out, our modeling of a person's attitudes from a "memory" of personal experiences paints a richer, better-motivated picture of a person, with a wider range of potential applications than application-specific user models. User studies concerning the quality of the attitude prediction technology are promising and suggest that the currently implemented approach is strong enough to be used in fail-soft applications. In What Would They Think?, the interface is designed to be fail-soft. The reactions given by the digital personas are meant to be evocative. The user is encouraged to further verify and investigate a purported attitude by clicking on a persona and viewing a textual explanation of the reaction.
In future work, we intend to further develop the modeling of attitudes by investigating how particularly strong beliefs such as "I love dogs" can help to create a model of a person's identity. We also intend to investigate other applications of our person modeling approach, such as virtual mentors and guides, marketing, and document recommendation.
ACKNOWLEDGMENTS
The authors would like to thank Deb Roy, Barbara Barry, Push Singh, Andrea Lockerd, Marvin Minsky, Henry Lieberman, and Ted Selker for their comments on this work.
REFERENCES
Deerwester, S. et al. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), pp. 391-407.
Ekman, P. (1993). Facial expression of emotion. American Psychologist, 48, 384-392.
Freud, S. (1991). The essentials of psycho-analysis: the definitive collection of Sigmund Freud's writing, selected, with an introduction and commentaries, by Anna Freud. London: Penguin.
Lakoff, G. & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press.
Liu, H. (2003b). A Computational Model of Human Affective Memory and Its Application to Mindreading. Submitted to FLAIRS 2004. Draft available at: /~hugo/publications/drafts/Affective-Mindreading-Liu.doc
Liu, H., Lieberman, H., Selker, T. (2003). A Model of Textual Affect Sensing using Real-World Knowledge. Proceedings of IUI 2003, pp. 125-132.
Liu, H., Selker, T., Lieberman, H. (2003b). Visualizing the Affective Structure of a Text Document. Proceedings of CHI 2003, pp. 740-741.
Locke, J. (1689). Essay Concerning Human Understanding. Hypertext by ITL at Columbia University, 1995. Print version ed. P.H. Nidditch, Oxford, 1975.
McCloud, S. (1993). Understanding Comics. Kitchen Sink Press, Northampton, Maine.
Mehrabian, A. (1995). Framework for a comprehensive system of measures of emotional states: The PAD Model. (Available from Albert Mehrabian, 1130 Alta Mesa Road, Monterey, CA, USA 93940).
Minsky, M. (forthcoming). The Emotion Machine. Pantheon, New York. Several chapters are available online.
Nass, C., Steuer, J., and Tauber, E. (1994). Computers are social actors. Proceedings of CHI '94 (Boston, MA), pp. 72-78, April 1994.
Oren, T., Salomon, G., Kreitman, K. and Don, A. (1990). Guides: characterizing the interface. In Laurel, B. (Ed.), The Art of Human-Computer Interface Design. Addison-Wesley.
Rhodes, B. and Starner, T. (1996). The Remembrance Agent: A continuously running automated information retrieval system. Proceedings of PAAM '96, pp. 487-495.
Shardanand, U. and Maes, P. (1995). Social information filtering: Algorithms for automating "word of mouth". Proceedings of CHI '95, pp. 210-217.
Singh, P. (2002). The public acquisition of commonsense knowledge. Proceedings of the AAAI Spring Symposium. Palo Alto, CA: AAAI.
Sison, R. and Shimura, M. (1998). Student modeling and machine learning. International Journal of Artificial Intelligence in Education, 9:128-158.
Tulving, E. (1983). Elements of Episodic Memory. New York: Oxford University Press.
Vemuri, S. (2004). What Was I Thinking? Memory prosthesis project.
Whyte, W. (1988). City. Doubleday, New York.
Example episode frame (figure content):
::: EPISODE FRAME :::
SUBEVENTS: (eat John "ice cream"), (ask I John "for taste"), (refuse John)
MORAL: (selfish John)
CONTEXTS: (date), (park), ()
EPISODE-IMPORTANCE: 0.8
EPISODE-AFFECT: (-0.8, 0.7, 0)