Martin Kaltenböck

GBPN Knowledge Platform using Semantic Technologies and Linked Open Data launched

The brand-new, web-based GBPN Knowledge Platform was launched on 21 February 2013. It helps the building sector effectively reduce its impact on climate change!

It has been designed as a participative knowledge and data hub for harvesting, sharing and curating best-practice policies in building energy performance globally. Available in English and soon in Mandarin, this new web-based tool of the Global Buildings Performance Network (GBPN) aims to stimulate collective research and analysis from experts worldwide to promote better decision-making and help the building sector effectively reduce its impact on climate change. To sustain and accelerate change in the building sector, the GBPN encourages open and transparent access to good-quality and verifiable data. The data can be used and re-used in HTML, PDF and machine-readable raw data (CSV) formats, provided under a Creative Commons Attribution (CC-BY 3.0 FR) license.
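A nice side effect for re-users: because the raw data comes as plain CSV, it can be consumed with ordinary tooling. The sketch below uses Python's standard csv module on an invented two-row sample; the column names and values are made up for illustration and do not reflect the actual GBPN schema.

```python
import csv
import io

# Invented sample standing in for a downloaded CC-BY CSV file;
# columns and values are illustrative only, not the real GBPN data.
raw = (
    "country,policy,year\n"
    "France,Thermal Regulation RT2012,2012\n"
    "Germany,EnEV 2009,2009\n"
)

# Parse the CSV into dictionaries keyed by column name.
policies = list(csv.DictReader(io.StringIO(raw)))

# Re-use: index the policies by country for quick lookup.
by_country = {row["country"]: row["policy"] for row in policies}
print(by_country["France"])
```

Under a CC-BY license, anything derived this way just needs to attribute the GBPN as the source.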

The GBPN Knowledge Platform is built on the Drupal CMS and seamlessly connected with the PoolParty Semantic Information Management Platform of the Semantic Web Company. The platform thus makes use of semantic technologies and Linked Open Data (LOD) principles and techniques under the hood. Much of the data available in the various GBPN tools is provided as (linked) open data under a Creative Commons Attribution license. The Semantic Web Company is responsible for the conceptual design and technical implementation of the GBPN Knowledge Platform.

What follows is an overview and description of the most important features, tools and services of the information management system.


Martin Kaltenböck

José Manuel Alonso: “If you want to scale up, you should consider LOD”

José Manuel Alonso has been working for W3C and CTIC in many open data projects. At the Web Foundation he promotes and supports (linked) open data in developing countries. Martin Kaltenböck from SWC talked with José about ongoing activities in the area of Open Government Data.

Open Data is a powerful worldwide movement these days. Regarding open data projects in developing countries and in highly industrialised countries (Europe, the US, Australia et al.), where do you see the main differences with respect to organisational, cultural and technical issues?

We conducted feasibility studies in Ghana and Chile several months ago, are supporting the Ghanaian government in the development of its national initiative, and have visited and engaged in Open Data discussions with many other countries in Africa, Latin America and Asia.
The situations are quite diverse and can vary significantly from country to country. It is always difficult to generalize, but I think there are a few important differences that can be highlighted (in no particular order):

  • The amount of information available in digital form is generally much lower
  • The IT infrastructure is yet to be fully developed or under development
  • The capacities on the government and civil society side have to be improved
  • The mobile phone is the main device to access information but data connectivity is still scarce, only available in the big cities and not at all in the rural areas
  • Digital literacy related issues have to be seriously considered and addressed
  • Multilingualism is an important factor, as there are dozens of dialects being spoken in many countries

All that said, there are also quite a number of commonalities, such as privacy and security concerns, the resistance to change but also the existence of champions within government, and the interest and willingness of civil society, which is already producing a number of interesting applications.

You are also very familiar with the concept of Linked Open Data (LOD) – where do you see the main benefit in using LOD – where do you think are the main challenges – where the main obstacles?

Having managed a few projects achieving 5-star open data, I’ve learned a thing or two about the pros and cons. I’ve been saying consistently that there are a few important issues:

  • There is still little knowledge about LOD out there and it is perceived as too complex
  • The demand for LOD is, hence, very low
  • The tooling is not powerful enough yet, especially when compared to XML tooling and others
  • The modeling part is very tough

People are used to working with XML and Web Services and believe that anything along those lines, such as REST+JSON, fulfils most expectations and needs. But this is not fully true. In my opinion, the power of LOD resides in the linking part more than anything else. Combining data from disparate sources is much more difficult with RESTful techniques, while it is a natural fit for LOD.
My experience tells me that for dealing with a few simple datasets, investing in LOD is not really needed, but if you want to scale up and, especially, if you want to link and integrate, then you should consider LOD. It is generally a bigger investment, but it pays off when interlinking large volumes of information, facilitates re-use in multiple formats, and can become very powerful when SPARQL is used appropriately, as it allows access to the whole underlying knowledge base.
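The "linking is the point" argument can be shown with a deliberately tiny toy model, not any real stack or dataset: two independently published triple sets merge by simple set union, because both publishers identify the same resource with the same global URI. All URIs below are invented for illustration.

```python
# Toy model of RDF triples as (subject, predicate, object) tuples.
# Dataset A: one publisher states a city's population.
dataset_a = {
    ("http://example.org/city/Accra",
     "http://example.org/prop/population",
     "2291352"),
}

# Dataset B: a different publisher, released separately, states the region.
dataset_b = {
    ("http://example.org/city/Accra",
     "http://example.org/prop/region",
     "http://example.org/region/GreaterAccra"),
}

# "Integration" is just set union -- no join keys, no schema mapping --
# because both sources agreed on the URI identifying Accra.
merged = dataset_a | dataset_b

def describe(triples, subject):
    """Collect everything known about a subject across all merged sources."""
    return {p: o for s, p, o in triples if s == subject}

facts = describe(merged, "http://example.org/city/Accra")
print(facts)
```

With REST+JSON silos, the same integration would need per-API clients and an explicit join; here the shared URIs do that work, and a SPARQL engine generalises the `describe` lookup to arbitrary graph patterns.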

Where do you see the main differences regarding effort of publishing and benefit in re-use (or the re-use itself) between Open Data and Linked Open Data?

I would say that the main difference here is between using the Web as an archive for files and using the full potential of the Web. If you publish hundreds of spreadsheets on the Web under an open format and license, you are already doing Open Data, but rather than using the Web, you are going back to the FTP days. And that is not too different from giving away a USB stick with the files. We can do much better nowadays.

The often-cited 5-star scale of Tim Berners-Lee is a good reference here. The higher you climb on that scale, the more of the Web's power you are using, and the more you are facilitating re-use.
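To make the upper stars of that scale concrete, here is a hedged sketch of lifting one record from 3-star (open, machine-readable CSV) to 4-star (URIs as identifiers, serialised as N-Triples) and 5-star (linked to someone else's data). The example.org URIs and the population figure are invented; only the OWL and DBpedia URIs are real.

```python
import csv
import io

# 3 stars: open licence + machine-readable, non-proprietary format (CSV).
three_star = "city,population\nAccra,2291352\n"
row = next(csv.DictReader(io.StringIO(three_star)))

# 4 stars: mint a URI for the thing described, so others can point at it.
subject = f"http://example.org/city/{row['city']}"
four_star = (
    f'<{subject}> <http://example.org/prop/population> '
    f'"{row["population"]}" .'
)

# 5 stars: link to other people's data to provide context.
five_star = four_star + (
    f'\n<{subject}> <http://www.w3.org/2002/07/owl#sameAs> '
    f'<http://dbpedia.org/resource/Accra> .'
)

print(five_star)
```

The jump from the third to the fourth line of code is the "more Web" step: once the record has a URI, any other dataset on the Web can say something about it.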

Are there differences regarding the use of LOD principles and technologies between developing countries and industrialised countries, in your opinion? For example: does it make sense to start an Open Data initiative in a developing country using Linked Open Data from scratch?

All the issues with LOD I mentioned above apply, and even more strongly, in the developing world. I think we should take a step-by-step approach: start going from no data to some-star data in the very near term, lower the barriers one by one, and start building capacities in government and civil society, but always with Web architecture principles in mind.
We will have to address the specificities of the developing world. For example, given that the LOD community is relying more and more on cloud-based options and on centralized data stores that require stable high-speed internet, how would one deploy a LOD solution in a country where clients (computers/mobile phones) have limited resources (disk, CPU) and where connectivity is unstable and low-bandwidth? We are participating in a workshop to explore these issues.

This does not mean that LOD is completely ruled out from the beginning. As I pointed out before, there are cases in which it can be extremely useful and powerful, and in those we intend to accelerate adoption, likely with piloting and capacity building as a first step.

Could you please tell us a few words about the Web Foundation?

The Web Foundation was launched by the inventor of the Web, Sir Tim Berners-Lee, in 2009 to address global challenges by connecting humanity and empowering individuals through an increasingly inclusive and powerful Web. More on the vision of the Web Foundation at:
http://www.webfoundation.org/vision/

José, many thanks for this interview. It seems that open data is progressing quickly in developing countries, and that different requirements have to be taken into account there in comparison to open data projects in Australia, the US or Europe! The potential of Linked Open Data also seems an interesting point for these countries!
We are looking forward to staying in touch with you on this in the future and wish you all the best for your future work in this area!

Andreas Blumauer

Semantic Web Company and punkt. netServices have merged

We are pleased to announce that two companies which already had a significant standing within the European Semantic Web scene are now acting under one brand. The long-standing expertise of punkt. netServices in developing, programming and integrating linked data technologies and the consulting expertise of the Semantic Web Company have merged under the resulting label Semantic Web Company.

In 2004 the Semantic Web Company was founded as a spin-off of punkt. netServices to bring semantic web and linked data technologies closer to the needs of companies, consumers and the government sector. We have done a lot of basic research in the years since, as well as project pioneering with prospective customers and partners, and have finally consolidated our knowledge and skills in the field. What was avant-garde in 2004 has become cutting-edge technology today. A good moment to join efforts and bring the two sisters together.

With the new Semantic Web Company, you can count on a team of 20 experienced experts from the areas of knowledge management, enterprise software architecture, search engines, collaboration software, agile web development and, last but not least, the semantic web. We are a powerful partner when it comes to realising enterprise-ready solutions. An enlarged company needs more space, so you will find our new headquarters on lovely Mariahilfer Street in Vienna, in a building designed by the famous Austrian architect Adolf Loos.

Read more about our goals and visions online on our brand-new website, or get in touch with our team on-site by joining one of our monthly Open House Meetings.

Thomas Schandl

Webinars about Business Use of Semantic Technologies

The Semantic Web Company has created a series of online seminars (aka webinars) for you to acquire basic and practical knowledge about methodologies, technologies and standards of the Semantic Web. In 90-minute sessions we will cover the business aspects of topics such as content engineering, knowledge management, business intelligence, e-business and more.

In order to allow for a high level of interaction, attendance is limited to ten participants, and ample time for questions and discussion with our experts is reserved. Each webinar works as a stand-alone module, so you can pick and choose some of them or book the whole series of six webinars.

We’ll kick off with a session about Semantic Wikis on Thursday, the 22nd of October. A German-language version will be held at 9 a.m.; alternatively, you can attend an English version at 6 p.m. CET.

Each Thursday we cover a different topic such as Semantic Search, Corporate Thesaurus Management, Text Mining on the Corporate Semantic Web, Linking Open Data and Semantic Advertising.

In order to participate you only need broadband internet access, Windows or a Mac, and a fairly up-to-date browser. For detailed system requirements, see the webinar overview.

We hope to talk to you in one or more of these sessions!