Andreas Blumauer

Linked data based search: Make use of linked data to provide means for complex queries

Two live demos of PoolParty Semantic Integrator demonstrate new ways to retrieve information based on linked data technologies


Linked data graphs can be used to annotate and categorize documents. By transforming text into RDF graphs and linking them with LOD sources like DBpedia, Geonames, MeSH etc., completely new ways to query large document repositories become possible.

An online demo illustrates these principles: imagine you were an information officer at the Global Health Observatory of the World Health Organisation. You inform policy makers about the global situation in specific disease areas in order to direct support to the required health programs. For your research you need data about disease prevalence in relation to socioeconomic factors.

Datasets and technology

About 160,000 scientific abstracts from PubMed, linked to three different disease categories, were collected. The abstracts were automatically annotated with PoolParty Extractor, based on terms from the Medical Subject Headings (MeSH) and Geonames that are organized in a SKOS thesaurus managed with PoolParty Thesaurus Server. The abstracts were transformed to RDF and stored in a Virtuoso RDF store. In the next step, these data sets can easily be combined within the triple store with large linked data sources like DBpedia, Geonames or YAGO. The use of linked data makes it easy, for example, to group annotated countries by the Human Development Index (HDI). The hierarchical structure of the thesaurus was used to collect all concepts that are connected to a specific disease.

This demo was developed using the sgvizler library to visualize SPARQL results; AngularJS was used to dynamically replace variables in SPARQL query templates.
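The client-side templating that AngularJS performed can be sketched in a few lines (the `{{disease}}` placeholder syntax and the query shape here are assumptions for illustration, not the demo's actual templates):

```python
QUERY_TEMPLATE = """
PREFIX ex: <http://example.org/>
SELECT ?abstract
WHERE {
  ?abstract ex:annotatedWith ex:{{disease}} .
}
"""

def render(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its chosen value, so the UI
    # can fill one fixed query skeleton with different user selections.
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

print(render(QUERY_TEMPLATE, {"disease": "Malaria"}))
```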

Another example of linked data based search in the field of renewable energy can be tried out here.


Andreas Blumauer

Do you like Google’s Knowledge Graph?

Semantic Enterprise Search enters the second phase.

Finally the Knowledge Graph has arrived in Europe: what has been provided for the US market since May 2012 is now also available for most European countries. Search results are no longer just a list of documents (and advertisements) but also a mashup of facts, points of interest, events etc. referring to the search phrase.

For example, if the user is searching for ‘Wiener Philharmoniker’ (‘Vienna Philharmonic Orchestra’) a factbox including related searches is provided:

Do you like this rather new way of knowledge discovery? We do, except for the fact that Google hasn't properly explained to the audience which technology is behind the Knowledge Graph: the Web of Linked Data, aka the Semantic Web. (Do you want to know more about the relationship between the Knowledge Graph and Linked Data? Click here.)

But anyway, here are some of the benefits we see when search technologies make use of a ‘knowledge graph’, a ‘knowledge model’, a ‘thesaurus’ or, generally speaking, Linked Data:

  • Facts around an object (or an entity) can be found neatly packaged into a dossier
  • Serendipity can be stimulated by ‘related searches’, which means users can discover the formerly ‘unknown’ in a more comfortable way
  • Data from various sources can be pulled together into a mashup (e.g. ‘upcoming events’ could come from a different database than the basic facts about the Vienna Philharmonic Orchestra)
  • Search phrases are well understood by the engine since they are based on concepts and no longer on literal strings, e.g. if the user searches for ‘Red Bull Stratos’, results for ‘Felix Baumgartner’ will also be delivered
  • Search can be refined, e.g. if one searches for ‘Vienna’, a list of POIs will be displayed to refine the actual place the user is looking for
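The concept-based matching described in the last two points can be sketched as simple query expansion. The mapping below is a hypothetical stand-in for a real knowledge graph, where related entities would be found by following typed links:

```python
# Hypothetical relatedness data; a real engine would traverse a knowledge graph.
RELATED = {
    "Red Bull Stratos": ["Felix Baumgartner"],
    "Vienna": ["Vienna Philharmonic Orchestra", "St. Stephen's Cathedral"],
}

def expand_query(term: str) -> list[str]:
    # A concept-based engine matches the concept and its related entities,
    # not just the literal search string the user typed.
    return [term] + RELATED.get(term, [])

print(expand_query("Red Bull Stratos"))
```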

Now imagine you had a search engine in your company’s intranet based on a knowledge graph about the enterprise you are working for.

Such an advanced search application would look like this:

  • Data streams and all kinds of content from internal sources are neatly mashed with information from the web (e.g. from Twitter, YouTube etc.)
  • Search assistants help users refine their information needs and make them more specific
  • Entities and their sub-concepts (e.g. subsidiaries of large companies or regions of countries) are neatly packed together into one dossier

The key question now is: “How can a customised knowledge graph be set up for a certain company?”

Corporate Semantic Web applications can be realised on top of software platforms like PoolParty. They all have a customised knowledge graph at their core, which is the basis for concept-based indexing of specialised content from a corporate intranet. The basic standard for this is SKOS, which can be used together with advanced query languages like SPARQL. Such graphs can be used for semantic indexing, but also to query relations like ‘is point-of-interest in’, ‘is event of’, ‘is related search for’ etc. This is the next generation of semantic search, which helps decision-makers, information professionals and all kinds of knowledge workers to improve their work significantly.
One comfortable way to create customised knowledge graphs is to make use of Linked Data sources like Freebase (as Google does) or DBpedia. Want more details? Take a look at the PoolParty approach for efficient knowledge modeling.
Andreas Blumauer

Free Webinar on Enterprise Semantics

The PoolParty team gave a webinar on November 28, 2012, in which we talked about scenarios and applications using semantics in enterprises. Some of the use cases we discussed were:

  • Semi-automatic tagging of content (SharePoint, Confluence, …)
  • Semantic enterprise search (Mindbreeze, FAST, Exalead, …)
  • Linked Enterprise Vocabularies
  • Enterprise linked data integration (queries across Oracle databases and unstructured text)


We showed the latest developments of the PoolParty platform and gave insights into how structured data from relational databases can be mashed with unstructured text using linked data alignment. We also showcased how we mashed a large text corpus with statistical financial data on top of PoolParty and UltraWrap.

Andreas Blumauer

Seevl: Explore the cultural universe based on semantic web technologies

Just recently Alexandre Passant from DERI Galway went public with a new web service called seevl. First impressions after test driving the system reveal that the seevl team is keeping the promises they have made: “Seevl reinvents music discovery. We provide new ways to explore the cultural and musical universe of your favorite artists and to discover new ones by understanding how they are connected. In addition, we let you comment every piece of data about them.”

I talked with Alexandre and asked him a couple of questions:

Q: seevl aims to offer a new way of music recommendations. What exactly can the user expect from it?
The main idea is to offer context around the recommendations, while existing systems are opaque or rely on collaborative filtering techniques – so that a user knows why he could / should like X if he’s browsing a page about Y. We hope (and we’ve seen it from our user feedback so far) that it can help to discover new bands and hidden connections.

Q: Yes, indeed this is something new. Maybe for typical users this could be too complicated. Should this brilliant feature somehow be hidden – working just like a magic button?
So far, we include this in the “why is related” button, but we’re constantly working on the UI / UX. Also, we only provide text for now, but are working on dataviz interfaces.

Q: seevl offers a Web API for developers. It seems like you don’t use semantic web standards for that?
We use content negotiation to provide machine-readable data for every page (search results, entity descriptions, related artists, etc.). If by non-SW standards you mean non-RDF, indeed, we provide JSON instead of RDF/XML or N3, etc. But our JSON integrates URIs that you can dereference and follows a similar approach to other existing RDF-JSON serialisations. So, why JSON, you may ask? Because our developer target is music hackers, and all the APIs from this community (echonest, etc.) offer JSON, not RDF. Learning a new JSON schema takes 5 minutes; learning RDF takes much more.
But we believe that a JSON-RDF serialisation combines the best of both worlds. Actually, we could say we provide our data using standards (we’re giving back a graph that follows the RDF abstract model, with links to dereferenceable URIs), but not in a (so far) standardised serialisation.
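The idea of JSON that still carries a graph can be sketched like this; the payload shape, keys and example.org URIs below are invented for illustration, not seevl's actual API format:

```python
import json

# Hypothetical JSON payload: plain key/value data, but entities are
# identified by URIs a client could dereference for more data,
# following the RDF abstract model without an RDF serialisation.
payload = json.loads("""
{
  "uri": "http://example.org/entity/the-beatles",
  "name": "The Beatles",
  "origin": {"uri": "http://example.org/entity/liverpool", "name": "Liverpool"}
}
""")

def collect_uris(node):
    # Walk the JSON tree and gather every dereferenceable URI it contains.
    uris = []
    if isinstance(node, dict):
        if "uri" in node:
            uris.append(node["uri"])
        for value in node.values():
            uris.extend(collect_uris(value))
    elif isinstance(node, list):
        for item in node:
            uris.extend(collect_uris(item))
    return uris

print(collect_uris(payload))
```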

Q: I agree. But in the mid-term I would additionally go for SPARQL. A lot of people are learning SPARQL at the moment.
Yes, we have to measure the cost / ROI. Full SPARQL can lead to complex queries; that’s why it is somewhat hidden behind our search interface (which basically constructs a controlled SPARQL query). But that could be something provided to advanced customers.
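The notion of a search interface that constructs a controlled SPARQL query can be sketched as follows; the whitelist, predicate and query shape are assumptions for illustration, not seevl's actual implementation:

```python
# Only values the UI offers can reach the query; users never write raw SPARQL.
ALLOWED_GENRES = {"rock", "jazz", "classical"}

def build_query(genre: str) -> str:
    if genre not in ALLOWED_GENRES:
        raise ValueError(f"unsupported genre: {genre}")
    # The validated value is interpolated into one fixed query skeleton,
    # so query complexity and cost stay bounded.
    return (
        'PREFIX ex: <http://example.org/>\n'
        'SELECT ?artist WHERE { ?artist ex:genre "' + genre + '" }'
    )

print(build_query("jazz"))
```

Constraining the query this way is what makes the cost per API call predictable, which matters for the freemium model discussed below.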

Q: seevl is based on linked data sets like DBpedia, MusicBrainz or Freebase. Is seevl itself offering Linked (Open) Data? I can also see heavy use of the Open Graph Protocol. What could a Facebook application of seevl look like?
Yes, we provide our data back. We’re using the Music Ontology and a few other models (FOAF, etc.). So far, the OGP markup is used for Facebook likes – but we are looking at other things that could be built on top of this.

Q: Which business model are you following? Can one integrate your service into their shop? Would you offer this as a cloud service? And for how much?
We’ll have B2C (new features on the website are coming soon) and a B2B freemium model. We’re currently identifying how many calls we can support as part of the free calls per day (so that will indeed be cloud-based; our architecture is on EC2). So, integration of our service / data in shop websites etc. is definitely what we’d like to see and to feature in our upcoming app gallery! The only requirement for data reuse is attribution and linking back to the service.

Thanks Alex, and I wish you and your team all the best with seevl!