Semantic Web Company and its PoolParty team are participating in the H2020-funded project ALIGNED. This project evaluates software engineering and data engineering processes with a focus on how these two worlds can be aligned efficiently. All project partners are working on several use cases, which shall result in a set of detailed requirements for combined software and data engineering. The ALIGNED project framework also includes work and research on data consistency in PoolParty Thesaurus Server (PPT).
Linked Data is evolving fast. A huge amount of RDF data is available and ready for exciting new applications. Unfortunately, the bottleneck is still the availability of Semantic Web user front-ends which demonstrate the power of linked data. To a certain degree, BBC Music beta is the first commercial platform which makes heavy use of linked data. With Parallax, David Huynh has shown that some of the most interesting Semantic Web applications can be built around browse-and-search tools that support complex search queries.
Andreas Blumauer from Semantic Web Company (SWC) talked with David Huynh, “Interaction Scientist” at Metaweb, the company which developed Freebase, an “open, shared database of the world’s knowledge”.
David: My official title at Metaweb is “Interaction Scientist,” and so my main focus is coming up with novel interaction designs for Metaweb’s platform and products, and prototyping them to some extent to evaluate their effectiveness. Parallax was one such prototype that has gathered much excitement within Metaweb and the Semantic Web community at large. And the Freebase query editor 2.0 shows my interaction designs at the other end of the spectrum – targeting developers rather than just end-users.
I’ve also learned that data-centric user interfaces and interaction designs can only be as good as the data allows them to be. So I am also dedicating some of my time toward analyzing the data we have and improving its quality so that I can design even better interactions.
SWC: With Parallax you have introduced a new way to search and explore data: Could you explain the “set-based browsing paradigm”?
David: In the browsing paradigm of the original Web, while looking at a web page, you can only click on one hyperlink to get to one other web page. But in a lot of cases, the hyperlinks on that web page can be grouped into different groups based on what they mean to the human reader: these are the links that lead to reviews, these are the links that lead to authors, these are the links that lead to vendors, etc.
Now if the computer actually knows what these links mean, then you can tell it to follow several of those links that mean the same thing: follow all the links that lead to authors. Think of it as powered browsing: the computer does the work of following several similar browsing paths at the same time – going from a set of things (web pages or data entries) to a similarly related set of things – and making all of that information available for your perusal in one shot. It is a paradigm shift compared to how we browse the Web today. And it’s only possible when the computer is capable of telling which link is similar to which other link. And that capability, in turn, will be made possible by the Data Web.
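The set-based browsing move described above can be sketched in a few lines of code. This is a minimal, illustrative sketch over a toy in-memory graph; the entity names, predicate names, and data structure are assumptions for illustration, not Freebase’s actual data or API.

```python
# Toy typed-link graph: entity -> predicate -> set of related entities.
# All names here are hypothetical examples, not real Freebase data.
graph = {
    "The Hobbit": {"author": {"J. R. R. Tolkien"}},
    "The Lord of the Rings": {"author": {"J. R. R. Tolkien"}},
    "Dune": {"author": {"Frank Herbert"}},
}

def follow(entities, predicate):
    """Follow every link labeled `predicate` from a set of entities,
    returning the related set -- the core move of set-based browsing."""
    result = set()
    for entity in entities:
        result |= graph.get(entity, {}).get(predicate, set())
    return result

books = {"The Hobbit", "Dune"}
authors = follow(books, "author")
# authors == {"J. R. R. Tolkien", "Frank Herbert"}
```

The key point is that `follow` shifts focus from one whole set to a related set in one step, rather than clicking through one hyperlink at a time.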
(See this unpublished paper which goes into depth about this concept)
SWC: Linked Data is evolving fast. A huge amount of RDF data is available and ready for exciting new applications. Unfortunately, one bottleneck is still the availability of Semantic Web user front-ends which demonstrate the power of linked data. Do you think that the Semantic Web is more a server technology than an end-user experience?
David: I have never thought of the Semantic Web as either a server technology or an end-user experience. I only care about usefulness, and then a matching amount of usability to make that usefulness accessible to people, especially those without Computer Science expertise.
I find that it’s so much easier to explain to people and get them excited about “immediate, personal, local benefits” of a particular technology than about “long-term, communal, global benefits” of a vision. For most people, the former must be experienced and felt often before the latter can appear vaguely appealing enough to call for action. I’m lazy – I don’t like to spend effort convincing people of visions; I only want to entice people into using the tools that I have created.
So if Parallax is considered a success, it is so not just because of its technologies and research contributions, but also because the accompanying screencast explained it in a way that people who cared nothing about the Semantic Web could understand why Parallax would be useful to them. This was achieved by pointing out limitations of existing web technologies as already experienced and understood by a lot of web users, and then illustrating concretely a possible solution enabled by data web technologies.
Perhaps I could venture further and say that the dichotomy of server technologies and end-user experience is what’s holding back Semantic Web user interface efforts. For those who don’t have expertise in design, it is a comfort to think that once the back-end technologies are solid, it’s just a matter of putting on some polish, a.k.a. user interfaces from their point of view, to make the whole package appealing. This approach is wrong. The user interface design must inform the back-end design. Otherwise, the user interface will almost always reflect the internal system model, and that’s usually very dissonant with how users think and behave. Recall all the Semantic Web interfaces you have seen that force users to think in terms of triples or of raw URIs. Those were made by starting from the data model, not from user needs.
SWC: Quite often I hear people saying: Where is the Semantic Web? – I still can’t “see” it! How could the linking open data community make use of user interfaces like Exhibit, Piggy Bank or Parallax? Is the set-based browsing paradigm a universal way to browse linked data or just one possible way?
David: My research prototypes embody a number of UI ideas that are quite transferable to other platforms. Most of my code is open source, too. This, by the way, is rarer than it should be: research prototypes often fall apart as soon as, or even sooner than, the relevant research papers get presented at conferences, and research code rots rather than gets offered free for reuse. This is sad, because reusable data needs reusable code to proliferate even more widely, but there is no reward system for making research code reusable, or for keeping research prototypes running. So perhaps people can’t “see” the Semantic Web because research prototypes are not presented in appealing and comprehensible ways, and they break down and disappear too quickly.
Regarding the set-based browsing paradigm, it is most certainly not the only way to browse linked data. It is just the first good one that came to my mind, around 2005. But it wasn’t until 2008 that I actually got around to implementing it for real. One factor important to its feasibility is the quality of data in Freebase, compared to other data sources that I had access to. Even the simple fact that a lot of Freebase topics have images makes Parallax look a lot more interesting and useful. People like to see pictures rather than raw URIs. And the diversity of types of data helps illustrate the browsing paradigm of Parallax – that ability to shift focus from one set of things to another set of things, even across seemingly unrelated domains of information, such as from politicians to their celebrity friends in the movie industry.
So, perhaps one of the main challenges in adopting Parallax ideas on any arbitrary RDF data set is curating the data sufficiently for the purpose of presenting it. In fact, if you don’t know how some data is to be presented and used, there’s no way for you to determine if that data is of sufficient quality. User needs and interface designs drive back-end implementation and data curation, not the other way around. It’s a simple idea, really, but it can be hard to adopt if one is fixated on data alone.
SWC: Do you plan new versions of Parallax? When will it become part of Freebase or of even more Linked Data Sources?
David: I’ve done a few further experiments with the ideas in Parallax, but they are not ready for public use, yet. Freebase data makes my job much easier by allowing me to focus mostly on interaction designs rather than mostly on data quality, or rather, fighting the lack of data quality, for the purpose of presenting it. So I’ll start with Freebase data and we’ll see where it takes me.
SWC: What else are you working on at the moment?
David: As mentioned briefly earlier, reusable data needs reusable code to proliferate widely. That gives you a hint at an effort that I’m involved with.
SWC: Many thanks, David!
Yesterday we dealt with reports, user interaction and interface questions; today is usage data model day (or morning) in the KiWi – Knowledge in a Wiki – Project. The usage data model is concerned with an abstract conceptualization of the data as perceived by the user (and not by the developer/implementer) – at the same time, it is not immediately concerned with the visualization of data on screen. François Bry gave us an overview of the proposed core concepts and objects, which are currently: content item, tag (and tagging), link, rule, user, and access right.
There is no need for me to repeat his full presentation, as François had already made his presentation available in advance on the KiWi project wiki. Nonetheless, I’d like to highlight a few aspects:
A content item is to be understood as a slight generalisation of a wiki page: every wiki page is a content item, but not every content item is a wiki page, and content items that are not wiki pages are parts of wiki pages. This could include, for instance, media content such as pictures, diagrams or tables. This modularization (content items within pages) meets the demand of the proposal that KiWi pages must be composable.
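The composition idea above can be sketched as a small data model. This is an illustrative sketch, not KiWi’s actual implementation; the class and field names are assumptions.

```python
# Hypothetical sketch of the KiWi content-item model: every wiki page
# is a content item, and pages are composed of nested content items
# (images, tables, ...) that are not themselves wiki pages.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentItem:
    title: str
    children: List["ContentItem"] = field(default_factory=list)

@dataclass
class WikiPage(ContentItem):  # every wiki page is a content item
    pass

page = WikiPage("Travel notes", children=[
    ContentItem("hotel photo"),   # media content items live inside pages
    ContentItem("price table"),
])
```

Because pages are just content items containing other content items, composing a page from reusable parts falls out of the model for free.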
Consequently, not only wiki pages but content items too must be taggable (which takes us to: tagging). Furthermore, it was proposed to make a distinction between atomic tags (short; consisting of a tag name and an associated content item instead of a description) and structured tags (that are made up of atomic tags), as well as between explicit tags (that are applied by users) and implicit tags (that are generated on the basis of rules that have been defined by users).
To illustrate this distinction, I’ll paste in a few illustrative explanations from François’ wiki report:
The tags assigned to the content item of an atomic tag T can be seen as tags assigned to the atomic tag T itself. Tagging of tags in this way can serve, for example, to distinguish between the atomic tag “hotel” in English and the same atomic tag “hotel” in French, or to group or classify tags. […] A structured tag is built up from atomic tags. […] Examples of structured tags are as follows:
A heated debate ensued (which I quite like, because that is the point where our own, as yet unchallenged assumptions are exposed), in particular with regard to the implementation of structured tags: wouldn’t requiring users to enter complicated tags raise the cognitive barrier too high?
Much was clarified with the agreement that users may use structured tags, but that this wouldn’t be a requirement. Using complex tags (e.g. a structured tag that includes dates or deadlines) might make sense to a particular set of users (e.g. project managers in the Logica use case) – and whether a software feature is going to be used (successfully) or not depends primarily on whether users see a benefit in it. Also: the concept of structured tags within the data model does not yet say anything about the way they will be represented on screen – in most cases, users won’t see a hotel(location(downtown)) spelled out.
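The atomic/structured tag distinction can be sketched as nested data. This is a hedged sketch assuming a simple recursive representation; the class names and the `render` helper are illustrative, not the project’s actual code, and only the hotel(location(downtown)) notation comes from the report itself.

```python
# Illustrative model of KiWi-style tags: atomic tags are plain names,
# structured tags are built up from atomic (or structured) tags.
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicTag:
    name: str  # short tag name, e.g. "hotel"

@dataclass(frozen=True)
class StructuredTag:
    head: AtomicTag   # e.g. hotel
    args: tuple = ()  # nested atomic or structured tags

def render(tag):
    """Spell a tag out in the hotel(location(downtown)) notation."""
    if isinstance(tag, AtomicTag):
        return tag.name
    inner = ", ".join(render(a) for a in tag.args)
    return f"{render(tag.head)}({inner})"

t = StructuredTag(AtomicTag("hotel"),
                  (StructuredTag(AtomicTag("location"),
                                 (AtomicTag("downtown"),)),))
# render(t) == "hotel(location(downtown))"
```

A UI would normally hide this spelled-out form, consistent with the point above that the data model says nothing about on-screen representation.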
On to the coffee break!