In this post, you will learn why access to timely, relevant and reliable information is mission-critical to electricity market exchange, and how applying Linked Data principles and semantic technology to electricity data can make for a more efficient, reliable and sustainable electricity market.

When we plug into the giant grid of information we call the Web, we are used to the fact that whatever we do, be it visiting websites, downloading stuff or just chatting, voluminous amounts of data get exchanged. …


A series of blog posts focusing on Ontotext's major webinars and how they fit into the bigger picture of what we do

This post continues the series of posts we started with At Center Stage: 2 Ontotext Webinars About Knowledge Graphs and Their Application in Data Management.

We want to give you the bigger picture of what we do and where Ontotext webinars fit into it — just a couple of webinars at a time. All of our webinars are available on demand. …


Keeping track of the research activities of several hundred faculty members proved to be a major challenge for one of our clients, a leading US children's hospital. We solved the problem with semantic data integration, leveraging several public Linked Open Data sources that we combined into a single knowledge graph providing a 360-degree view of each doctor.

One of our clients is a major children’s hospital in the Midwestern United States. …
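As a rough illustration of what "combining several sources into a single knowledge graph" can look like in practice, here is a minimal Python sketch using rdflib. The source URLs and the Turtle format are placeholders for this example, not the actual data behind the project described above.

```python
from rdflib import Graph

# Illustrative placeholders, not the real Linked Open Data sources
# used in the Doctor 360 project.
SOURCES = [
    "https://example.org/faculty-publications.ttl",
    "https://example.org/clinical-trials.ttl",
]

kg = Graph()
for url in SOURCES:
    kg.parse(url, format="turtle")  # fetch each source and merge its triples

print(f"Combined knowledge graph holds {len(kg)} triples")
```

Because all sources share the RDF data model and common identifiers, merging them is literally just loading them into the same graph; the integration work goes into mapping and linking, not into schema migration.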


How the Semantic Web vision was reincarnated in thousands of Linked Open Data datasets and millions of Schema.org-tagged webpages. And how it enables knowledge graphs to smarten up enterprise data.

The Semantic Web, both as a research field and a technology stack, is seeing mainstream industry interest, especially with the knowledge graph concept emerging as a pillar of efficient and well-managed data. But what exactly are we talking about when we talk about the Semantic Web? And what are the commercial implications of semantic technologies for enterprise data?

The Semantic Web started in the late 1990s as a fascinating…
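To make the "millions of Schema.org-tagged webpages" point a bit more concrete, here is a small Python sketch that turns a toy piece of Schema.org markup into RDF triples. It assumes rdflib 6+ (which bundles a JSON-LD parser); the organization data is invented for illustration.

```python
from rdflib import Graph

# A toy Schema.org snippet of the kind embedded in many webpages;
# the values are made up for this example.
jsonld = """
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://example.com"
}
"""

g = Graph()
g.parse(data=jsonld, format="json-ld")  # requires rdflib 6+ (bundled JSON-LD support)

# Each piece of markup becomes an RDF triple in a Linked Data graph.
for s, p, o in g:
    print(s, p, o)
```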


A new series of blog posts focusing on Ontotext's major webinars and how they fit into the bigger picture of what we do

In the last few years, we've had so many Ontotext webinars on interesting topics, attended by more and more people asking more and more questions, that we've decided to start a new series of blog posts dedicated to them.

The idea is to give you the bigger picture of what we do and where our webinars fit into it — a couple of webinars at a time. All of our webinars are available on demand. …


Human-Computer Collaboration with Text Analysis for Content Management

The history of computing has largely been a history of humans forced to follow the whims of our machines. No humans used binary until Leibniz, and even Leibniz didn't find much use for it. There were attempts at decimal computers, designed to stay closer to the numeral system humans use, but they didn't last. We are forced to use binary computers because computers run on electricity and voltages can vary. To make computers more reliable, they were designed around two states: with voltage (aka 1) or without (aka 0). …


The next generation of data and content management technology relies heavily on diverse, complex and dynamic metadata. Knowledge graphs, backed by translytical graph databases and formal semantics, offer the solution. RDF-Star brings the simplicity and usability of property graphs without sacrificing the essential semantics that enables correct interpretation and diligent management of the data.

There are no easy answers in life or in Information Architecture. Design decisions come with tradeoffs. Relational database management systems (RDBMS) have been the workhorse of ICT for decades. Being able to sit down and define a complete schema, a blueprint of the database, gave everyone assurance and consistency…
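To give the RDF-Star idea above a more concrete shape, here is a minimal Python sketch that queries statement-level metadata with SPARQL-star. It assumes a local GraphDB repository called "demo" already loaded with RDF-Star data; the endpoint URL, prefixes and the confidence property are illustrative assumptions, not part of the original post.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical local GraphDB repository; adjust host and repository name to your setup.
sparql = SPARQLWrapper("http://localhost:7200/repositories/demo")
sparql.setReturnFormat(JSON)

# In SPARQL-star, << ?s ?p ?o >> quotes a whole triple so that metadata
# (here, a confidence score) can be attached to the statement itself,
# much like an edge property in a property graph.
sparql.setQuery("""
    PREFIX ex: <http://example.org/>
    SELECT ?drug ?target ?score WHERE {
        << ?drug ex:interactsWith ?target >> ex:confidence ?score .
    }
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["drug"]["value"], row["target"]["value"], row["score"]["value"])
```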


How a human-curated body of data is used in AI to train algorithms for search, extraction and classification, and to measure their accuracy

We often want computers to do the tasks we give them the way people do. But, as Ontotext's CEO Atanas Kiryakov often says, we forget that nobody nurtures the Artificial Intelligence (AI) systems for 7 years to learn how to walk, talk, count, read and write and, even more important, how not to touch hot stoves, how to avoid tipping over full glasses, and how to ask: "Can I get 3 more cookies?" …
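As a simple sketch of the "measure their accuracy" part: the output of an extraction algorithm is compared against the human-curated gold standard, typically in terms of precision, recall and F1. The (document, entity) annotations below are invented purely for illustration.

```python
# Toy gold-standard and predicted entity annotations, invented for this example.
gold = {("doc1", "Ontotext"), ("doc1", "GraphDB"), ("doc2", "RDF")}
predicted = {("doc1", "Ontotext"), ("doc2", "RDF"), ("doc2", "SPARQL")}

true_positives = len(gold & predicted)
precision = true_positives / len(predicted)  # share of extracted items that are correct
recall = true_positives / len(gold)          # share of gold-standard items that were found
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```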


Modelling the relationships within scientific data in an open and machine-understandable format leads to better science.

The Open Science (or Open Scholarship) movement has been gaining momentum, especially since the European Commission committed itself in April 2016 to ensuring open access to all the research it funds. Expensive publisher paywalls, restrictive usage policies of scientific literature sources, inconsistent formatting and data locked away by proprietary software have all had detrimental effects on the dissemination of scientific knowledge.

In addition, a number of other arguments have put pressure on governments to provide open access to…
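As a small, hypothetical illustration of what "modelling the relationships within scientific data in a machine-understandable format" can mean, the Python sketch below uses rdflib to state that a paper cites another paper and draws on a dataset. The identifiers and vocabulary choices are assumptions made for this example.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import DCTERMS

# Illustrative identifiers; real Open Science workflows would use DOIs,
# ORCID iDs and established domain vocabularies.
EX = Namespace("http://example.org/")

g = Graph()
g.bind("dcterms", DCTERMS)

paper = EX["paper/123"]
earlier_paper = EX["paper/042"]
dataset = EX["dataset/climate-obs"]

# Explicit, machine-readable relationships instead of prose buried in a PDF.
g.add((paper, DCTERMS.references, earlier_paper))
g.add((paper, DCTERMS.source, dataset))

print(g.serialize(format="turtle"))
```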


How industry leaders are using Ontotext technology to discover new treatments and test hypotheses.

Question: Which of these two statements is true?

  A) The healthcare industry has the advantage of an enormous amount of data upon which to create hypotheses for new treatments.
  B) The healthcare industry has the disadvantage of an enormous amount of data upon which to create hypotheses for new treatments.

Answer: Both are true.

It is true that the healthcare sector, which includes hospitals, pharmaceutical companies and insurers, has an enormous amount of data. It has to, because people's lives are at stake. Results need to…

Ontotext

Providing a complete semantic platform for identifying meaning across unstructured data. Developer of GraphDB™, the industry-leading RDF triplestore.
