The 24th International World Wide Web Conference (WWW 2015) in Florence, Italy, May 18th, 2015

Call For Papers

Effective and reliable evaluation lies at the heart of all empirical research and engineering disciplines. Some of the most important services on the Web, such as advertising, recommendation, search, and social networks, are no exception. The most popular ways to measure performance have been standard offline metrics such as area under the curve (AUC), root mean squared error (RMSE), and normalized discounted cumulative gain (NDCG). But when these systems are deployed, the ultimate performance of interest can be very different; examples include click-through rate, revenue, and user satisfaction. Such a discrepancy makes online experiments critical for evaluating a Web-based system.
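For readers less familiar with the offline metrics named above, here is a minimal self-contained sketch of AUC, RMSE, and NDCG; all data values are invented for illustration and are not drawn from this call:

```python
import math

def auc(scores, labels):
    # Area under the ROC curve, computed as the fraction of
    # (positive, negative) pairs ranked correctly; ties count half.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return pairs / (len(pos) * len(neg))

def rmse(preds, labels):
    # Root mean squared error between predictions and true labels.
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(preds))

def ndcg(relevances, k):
    # Normalized discounted cumulative gain at rank k; `relevances`
    # holds the graded relevance of items in the system's ranked order.
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))   # 0.75
print(rmse([0.9, 0.2, 0.4], [1.0, 0.0, 0.5]))
print(ndcg([3, 2, 0, 1], k=4))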

Recently, in order to reduce the cost of running online experiments, reliable offline evaluation has attracted increasing interest in industry, with exciting empirical success. We believe it is now the right time to bring together researchers from both academia and industry to review existing findings, identify open problems, and brainstorm about future directions. Given the fundamental role evaluation plays in most Web-based services, such a workshop is expected to attract a wide audience and have both immediate and long-term influence. The goal of the workshop is to provide a forum where industrial practitioners can expose real-world challenges and share practical experiences, academic researchers can popularize state-of-the-art research, and collaboration between the two can be fostered.

We welcome submissions related to the main themes of the workshop. Topics of interest include, but are not limited to, the following:

* Classic offline evaluation methodologies for Web systems, especially those based on standard metrics such as RMSE and NDCG.

* Online controlled experiments such as A/B testing, interleaving, adaptive sequential experiments, etc.

* Offline evaluation based on direct user modeling, counterfactual analysis, etc.

* Practical applications and lessons related to evaluation on the Web.
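As one concrete instance of the counterfactual-analysis topic above, here is a minimal inverse-propensity-scoring (IPS) sketch for offline policy evaluation; the log format, function names, and data below are illustrative assumptions, not part of this call:

```python
def ips_estimate(logs, target_prob):
    # Inverse propensity scoring: reweight each logged reward by the
    # ratio of the target policy's action probability to the logging
    # policy's, yielding an unbiased offline estimate of online reward.
    total = 0.0
    for context, action, reward, logging_prob in logs:
        total += reward * target_prob(context, action) / logging_prob
    return total / len(logs)

# Illustrative logs: (context, action, reward, logging probability),
# collected by a uniform-random logging policy over two actions.
logs = [("u1", 0, 1.0, 0.5), ("u2", 1, 0.0, 0.5), ("u3", 0, 1.0, 0.5)]
# Hypothetical target policy that deterministically picks action 0.
always_zero = lambda ctx, a: 1.0 if a == 0 else 0.0
print(ips_estimate(logs, always_zero))  # ≈ 1.333
```

Note that on small samples the estimate can exceed the reward range, reflecting the well-known variance of IPS estimators.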

Submission Instructions

Submissions must follow the ACM format and be in PDF. Formatting instructions and materials can be found on the WWW2015 website under the paper submission requirements. Each submission should not exceed 6 pages, including references and appendices. Please submit through https://cmt.research.microsoft.com/EVAL2015. The review process is single-blind; authors need not remove their names from their submissions. All accepted submissions will be presented at the workshop, as posters or talks. Furthermore, if the authors desire, accepted papers may appear in the companion volume of the WWW2015 proceedings, published by the ACM and included in the ACM Digital Library. At least one author of each accepted paper must register when submitting the camera-ready version.

Important Dates

Submission deadline: Jan 31st, 2015 (23:59 Hawaii Standard Time) (extended)

Acceptance notification: Feb 22, 2015

Camera-ready deadline: Mar 8, 2015

Submission website: https://cmt.research.microsoft.com/EVAL2015