KEYNOTES

Alfio Massimiliano Gliozzo: “Beyond Jeopardy! Adapting Watson to New Domains Using Distributional Semantics”

Abstract: Watson is a computer system built to answer rich natural language questions over a broad open domain with confidence, precision, and speed. IBM demonstrated Watson’s capabilities in a historic exhibition match on the television quiz show Jeopardy!, where Watson triumphed over the best Jeopardy! players of all time. The new challenge for IBM is to adapt Watson to important business problems and to make this process scalable while requiring minimal effort. In this talk, I describe the DeepQA framework implemented by Watson, focusing on the adaptation methodology and presenting new research directions, with emphasis on unsupervised learning technology for distributional semantics that links text to knowledge bases.
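
The following minimal Python sketch illustrates the general idea behind distributional semantics referred to in the abstract: words that occur in similar contexts receive similar vectors, which can then be compared when relating text terms to knowledge-base entries. It is purely illustrative, with a toy corpus and a hypothetical window size; it is not the DeepQA implementation, which the talk itself does not detail.

```python
# Illustrative sketch of distributional semantics (not DeepQA code).
# Each word is represented by the counts of words appearing near it;
# words with similar context vectors are treated as semantically related.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "watson answers natural language questions",
    "the system answers questions with confidence",
    "players answer quiz questions on television",
]

window = 2  # context window size (hypothetical choice)
vectors = defaultdict(Counter)

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                vectors[word][tokens[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse co-occurrence vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Terms sharing contexts come out as distributionally similar;
# unrelated terms get a similarity near zero.
print(cosine(vectors["answers"], vectors["answer"]))
print(cosine(vectors["watson"], vectors["television"]))
```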

Bio: Alfio Gliozzo is a research staff member at the IBM T.J. Watson Research Center. He is currently a technical leader on the DeepQA team, coordinating a research group focused on unsupervised learning from text. At the same time, he is a key contributor to the Watson core technology for domain adaptation. He has been involved in both academic research and industry for 12 years, building a significant track record of delivering semantic technologies across a range of applications, patents, and scientific publications.


Alan N. Shapiro: “Meaningful Information, Meaningful Lives: Principles of a Semantic Information Science”

Abstract: In the 1990s, cultural theorists who speculated about the implications of the Internet for society, education, interpersonal interaction and academic research tended to base their thinking on the assumptions of semiotics, or, in its most radical form, deconstruction. There was an emphasis on hypertext and hypermedia: the advancement of certain myths of the democratization of knowledge and the undermining of the authority of the author. Distributed systems like the World Wide Web and markup languages like HTML that figured prominently in the invention of the software layer of the Internet were a parallel development to the cyberspace theories within the semiotic paradigm. The driving forces of that initial decade of the Internet have left us with an essentially Semiotic Information Science: the study, design and implementation of communicating processes and relations – in a word, links – among nodes of information. In libraries and businesses, archives and museums, we catalog, index, manipulate, store and retrieve information – understood as little itemized signs or signals fed into or output from our glorious institutional systems of classification and collection. The more these signs circulate in our networks and are massively available in thin horizontal abundance, the further we sink into semiotic meaninglessness. Everything is connected but loses its depth and singularity. We are surrounded by mountains of information garbage (for example: tens of millions of web pages automatically generated by computer programs which say nothing more than – repeated twenty times – “this is a page about the subject of X”). We retrieve more and know less. We talk more and say less. But the paradigm shift to a Semantic Web and a Semantic Information Science offers the strong hope that we can move towards a science and society of qualitatively greater knowledge and intelligence. Of course, I am advocating an expansion of the meaning of Semantic Web from a set of standard data formats for including “semantic” content in web pages to semantics understood as the branches of linguistics, computer science and psychology that deal with meaning. I am especially interested in lexical semantics within linguistics. There are many other semantic subfields within linguistics, and there is also a semantics within semiotics, but those are different significations of the word semantics. One could say that Tim Berners-Lee, the inventor of the World Wide Web and the director of the World Wide Web Consortium, has himself shifted from a semiotic to a semantic approach to structuring the Internet, albeit much more via a technical than a cultural perspective. A Semantic Information Science will focus on the contexts that give meaning to words (as in linguistic lexical semantics), emphasize the ineffable and experiential qualities of “nodes of information” (as in psychological semantics), and deepen the meanings and interpretations of programming expressions (as in my proposed extension of computer science semantics). Semantic software (see the SBSGRID platform) will provide natural language access to databases, return answers to associative questions, bring together the flexibility of search with the precision of query, and contextually fathom the user’s needs. The more meaningful information of the Semantic Web and a Semantic Information Science will help us to “work, play, learn and care for our health differently” and give us more meaningful lives.
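
As a small, hedged illustration of the narrower “standard data formats” sense of the Semantic Web that the abstract starts from, the sketch below stores a few statements as RDF triples and answers a structured query over them, combining search-like flexibility with query-like precision. It uses the open-source rdflib library and made-up example data; it does not depict the SBSGRID platform or any system discussed in the talk.

```python
# Illustrative sketch: RDF triples plus a SPARQL query (hypothetical data).
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .

ex:StarTrek ex:type  ex:TVShow ;
            ex:topic ex:FuturisticTechnology .
ex:Lost     ex:type  ex:TVShow ;
            ex:topic ex:Mystery .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# A SPARQL query: precise like a database query, yet posed over loosely linked triples.
query = """
PREFIX ex: <http://example.org/>
SELECT ?show ?topic WHERE {
    ?show ex:type ex:TVShow ;
          ex:topic ?topic .
}
"""

for show, topic in g.query(query):
    print(show, topic)
```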

Bio: Alan N. Shapiro is a trans-disciplinary thinker who studied science-technology at MIT and philosophy-history-literature at Cornell University. He is the author of “Star Trek: Technologies of Disappearance”, a leading work in science fiction studies and on the conception of futuristic technoscience. He is the editor and translator of “The Technological Herbarium” by Gianna Maria Gatti, a major study of art and technology. He is writing books about the TV shows Lost and The Prisoner. He has been a practicing software developer and IT consultant, and is working on the project of extending object-orientation in software development with “power to the objects”, integrating ideas from quantum physics and holistic biology. At his website “Alan N. Shapiro, Technologist and Futurist”, he has published 250 articles (by himself and others) about his new transdisciplinary worldview. He is recognised as one of the leading experts on the philosophy and cultural theory of Jean Baudrillard. Alan teaches seminars at the NABA Design University in Milano, at the Arts University in Berlin, at the University of Frankfurt am Main, and at the Design University in Offenbach. Recently he was a keynote speaker at the conference on Knowledge of the Future at the University of Vienna, at the conference on Information Management at the University of Amsterdam, at the IEEE conference on the Information Society in London, at the Share Art Festival in Turin, and at the symposium on hybrid art and science in Tallinn. In July 2012, he gave the International Flusser Lecture in Berlin.

Diane H. Sonnenwald: “Visioning the Future of Information and Library Science: Opportunities & Challenges.”

Abstract: The landscape of society continues to undergo tremendous change. Social challenges are more complex than ever before, often crossing multiple geo-political boundaries, creating new socio-economic boundaries, and involving new relationships and interdependencies among actors. Natural and human-made disasters continue to present multiple challenges linked to information access. Scientific instruments and sensors are producing new types and large volumes of digital data that require curation. Personal, cultural and organizational heritage digital data are also growing, allowing for new ways of interacting with information and with others, and new ways of understanding ourselves and our society. These trends present challenges and opportunities for information and library science. This talk will explore how information and library science is responding, and perhaps should respond, to these challenges and opportunities.

Bio: Diane H. Sonnenwald is Professor of Information Science and Head of School at University College Dublin, well known for her methodological investigations in studies of information behavior. The method she developed in this area is called “Information Horizons”. Her collaboration with ISI and the Hochschulverband Informationswissenschaft (HI) marks the deepened relationship with the European Chapter of ASIS&T, which began very successfully at the last ISI in Hildesheim.