Knowledge graphs (KGs) provide a useful representation format for capturing complex knowledge about an information domain, with rich logical descriptions available for defining the relationships between entities. Separately, semantic vector spaces (SVSs) capture the relative meanings of terms based on their actual usage within a dataset and allow useful operations for exploring the relationships between these terms. Combining KGs and SVSs via knowledge graph embedding (KGE) enables further analysis tasks to leverage learned semantic vectors to gain additional insights. KGE therefore represents an interesting and potentially powerful tool for identifying emergent or unexpected behavior, or for seeking previously unaccounted-for relationships, events, and groups. In this work, we report on the state of the art in KGE. We describe the operational benefits that can be gained from this approach and the considerations that apply for observational ontologies that describe a complex, untrusted, time-sensitive, and rapidly evolving environment. We suggest several promising avenues for future research in this context.
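To make the KGE idea concrete, the following is a minimal sketch of a translational (TransE-style) embedding score, in which a relation is modeled as a vector translation between entity embeddings. The entity and relation names, dimensions, and values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility: lower distance = more plausible triple (h, r, t)."""
    return float(np.linalg.norm(h + r - t))

# Toy 2-D embeddings (hypothetical): "capital_of" acts as a translation vector.
entities = {
    "Paris":  np.array([1.0, 0.0]),
    "France": np.array([1.0, 1.0]),
    "Berlin": np.array([3.0, 0.0]),
}
relations = {"capital_of": np.array([0.0, 1.0])}

# A true triple lands near distance 0; a false one does not.
plausible = transe_score(entities["Paris"], relations["capital_of"], entities["France"])
implausible = transe_score(entities["Berlin"], relations["capital_of"], entities["France"])
```

In a trained KGE model these vectors would be learned from the KG's triples rather than fixed by hand; the learned geometry is what enables the anomaly- and relationship-discovery tasks the abstract describes.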
KEYWORDS: Data modeling, Artificial intelligence, Process modeling, Ecosystems, Machine learning, Data archive systems, Roads, Data processing, Systems modeling, Video
Machine learning systems rely on data for training, input, and ongoing feedback and validation. Data in the field can come from varied sources, often anonymous or unknown to the ultimate users of the data. Whenever data is sourced and used, its consumers need assurance that the data accuracy is as described and that the data has been obtained legitimately, and they need to understand the terms under which the data is made available so that they can honour them. Similarly, suppliers of data require assurances that their data is being used legitimately by authorised parties, in accordance with their terms, and that usage is appropriately recompensed. Furthermore, both parties may want to agree on a specific set of quality of service (QoS) metrics, which can be used to negotiate service quality based on cost, and then receive affirmation that data is being supplied within those agreed QoS levels. Here we present a conceptual architecture which enables data sharing agreements to be encoded and computationally enforced, remuneration to be made when required, and a trusted audit trail to be produced for later analysis or reproduction of the environment. Our architecture uses blockchain-based distributed ledger technology, which can facilitate transactions in situations where parties do not have an established trust relationship or centralised command and control structures. We explore techniques to promote faith in the accuracy of the supplied data, and to let data users determine trade-offs between data quality and cost. Our system is exemplified through consideration of a case study using multiple data sources from different parties to monitor traffic levels in urban locations.
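The enforcement loop described above can be sketched as follows: an agreement with agreed QoS thresholds, a check on each delivery, remuneration only when QoS is met, and a hash-chained log standing in for the blockchain audit trail. All field names, thresholds, and prices are hypothetical placeholders, not the paper's actual architecture.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class Agreement:
    max_latency_ms: float    # agreed QoS: worst acceptable delivery latency
    min_accuracy: float      # agreed QoS: lowest acceptable reported accuracy
    price_per_record: float  # remuneration when QoS is met

    def qos_met(self, latency_ms, accuracy):
        return latency_ms <= self.max_latency_ms and accuracy >= self.min_accuracy

@dataclass
class AuditTrail:
    """Tamper-evident log: each entry's hash covers the previous hash (ledger stand-in)."""
    entries: list = field(default_factory=list)

    def append(self, record):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "hash": digest})

agreement = Agreement(max_latency_ms=200, min_accuracy=0.9, price_per_record=0.01)
trail = AuditTrail()

# One delivery within QoS (paid), one outside QoS (not paid); both are logged.
deliveries = [{"latency_ms": 120, "accuracy": 0.95},
              {"latency_ms": 450, "accuracy": 0.95}]
payments = []
for d in deliveries:
    ok = agreement.qos_met(d["latency_ms"], d["accuracy"])
    payments.append(agreement.price_per_record if ok else 0.0)
    trail.append({**d, "qos_met": ok})
```

Chaining each entry's hash over its predecessor means any later alteration of a logged delivery invalidates every subsequent hash, which is the property the distributed ledger provides between mutually untrusting parties.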