To provide personalized recommendations, a recommender system needs a rating of each item for each user. The common solution, "explicit ratings", in which users tell the system what they think of a piece of information, is well understood and fairly precise. However, stopping to enter explicit ratings can alter normal patterns of browsing and reading. "Implicit ratings", obtained by means other than asking the user directly, have clear advantages: they remove the rating cost from the user, and every interaction with the system can contribute to a rating.
Current filtering systems make little use of implicit ratings, and how well implicit ratings predict actual user interest is not well understood. The Curious Browsers project studied the correlation between various implicit ratings and the explicit rating of a single Web page. We developed a Web browser that records the user's actions (implicit ratings) along with the explicit rating of each page; the recorded actions included mouse clicks, mouse movement, scrolling, and elapsed time. The browser was used by over 80 people who browsed more than 2500 Web pages.
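The per-page record an instrumented browser might log can be sketched as follows. This is an illustrative data structure only; the field names (`time_on_page`, `mouse_clicks`, and so on) are assumptions, not the actual schema used by the Curious Browsers study.

```python
from dataclasses import dataclass

@dataclass
class PageRecord:
    """One logged page visit: implicit indicators plus the explicit rating.

    Hypothetical schema for illustration; the study's actual log format
    is not specified here.
    """
    url: str
    time_on_page: float    # elapsed time on the page, in seconds
    mouse_clicks: int      # number of mouse clicks recorded
    mouse_distance: float  # total mouse movement, in pixels
    scroll_events: int     # number of scrolling actions
    explicit_rating: int   # interest rating the user entered for the page


# Example: a single visit with its implicit indicators and explicit rating
record = PageRecord(
    url="http://example.com/article",
    time_on_page=42.5,
    mouse_clicks=3,
    mouse_distance=1870.0,
    scroll_events=7,
    explicit_rating=4,
)
```

Keeping each visit as one record makes it straightforward to later compare any implicit indicator column against the explicit rating column.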
Using the data collected by the browser, we analyzed the individual implicit ratings and several combinations of implicit ratings against the explicit rating. We found that the time spent on a page, the amount of scrolling on a page, and the combination of time and scrolling correlated strongly with explicit interest, while individual scrolling methods and mouse clicks were ineffective predictors of explicit interest.
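The kind of comparison described above can be sketched with a standard Pearson correlation between an implicit indicator and the explicit ratings. The data below is fabricated for illustration and is not from the study; the function is a generic correlation, not the paper's analysis code.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: time on page (seconds) vs. explicit rating (1-5)
times = [10, 25, 60, 90, 120, 30]
ratings = [1, 2, 4, 4, 5, 3]
r = pearson_r(times, ratings)  # close to 1.0 indicates a strong positive correlation
```

A value of `r` near 1.0 would correspond to the strong time-to-interest relationship the study reports; an indicator like raw click counts would show a much weaker `r`.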
Mark Claypool, Phong Le, Makoto Waseda, and David Brown. Implicit Interest Indicators. In Proceedings of the ACM Intelligent User Interfaces Conference (IUI), Santa Fe, New Mexico, January 14-17, 2001.
Dave Brown and Mark Claypool. Curious Browsers: Automated Gathering of Implicit Interest Indicators by an Instrumented Browser.
Flyer (PS) for user study
Instructions (GIF) for user study