22 NOV 2017 by ideonexus

 Top-Down Engineering of AI

The philosophers’ fascination with propositions was mirrored in good old-fashioned AI, the AI of John McCarthy, early Marvin Minsky, and Allen Newell, Herbert Simon, and Cliff Shaw. It was the idea that the way to make an intelligent agent was from the top down. You have a set of propositions in some proprietary formulation. It’s not going to be English—well, maybe LISP or something like that, where you define all the predicates and the operators. Then, you have this huge database that ...
  1  notes
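The excerpt describes the good old-fashioned AI recipe: a database of propositions in a proprietary formulation, with predicates and operators defined up front. A toy sketch of that top-down approach, with invented predicate names and naive forward chaining (not any specific historical system):

```python
# Toy proposition database in the GOFAI style: facts plus if-then
# rules over predicates, expanded by naive forward chaining.
# All predicate and entity names here are invented for illustration.
facts = {("bird", "tweety")}
rules = [
    # (antecedent predicate, consequent predicate)
    ("bird", "has_feathers"),
    ("has_feathers", "can_fly"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new propositions are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, post in rules:
            for pred, subj in list(derived):
                if pred == pre and (post, subj) not in derived:
                    derived.add((post, subj))
                    changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```

Even this tiny example shows the approach's defining trait: everything the agent "knows" must be hand-encoded before inference can begin.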
 
02 JAN 2011 by ideonexus

 Natural Language Processing vs. Semantic Web

NLP works well statistically; the SW, in contrast, requires logic and doesn't yet make substantial use of statistics. Natural language is democratic, as expressed in the slogan 'meaning is use' (see Section 5.1 for more discussion of this). The equivalent in the SW of the words of natural language are logical terms, of which URIs are prominent. Thus we have an immediate disanalogy between NLP and the SW, which is that URIs, unlike words, have owners, and so can be regulated. That is not to sa...
  1  notes

A short comparison of NLP and the SW in terms of processing, algorithms, structure, and emergence. NLP is described as 'democratic', while the power of the SW is that URIs 'have owners,' making it a top-down construct. Perhaps this is the problem with the Semantic Web and why it may never catch on: the web favors emergent semantics and democratized development.

02 JAN 2011 by ideonexus

 Ontologies vs. Folksonomies

It is argued - though currently the arguments are filtering only slowly into the academic literature - that folksonomies are preferable to the use of controlled, centralised ontologies [e.g. 259]. Annotating Web pages using controlled vocabularies will improve the chances of one's page turning up on the 'right' Web searches, but on the other hand the large heterogeneous user base of the Web is unlikely to contain many people (or organisations) willing to adopt or maintain a complex ontology. ...
  1  notes

Ontologies provide structure and a standard for tagging and searching, while folksonomies provide an emergent system for tagging things.
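The contrast between the two tagging regimes can be sketched in a few lines: a controlled vocabulary rejects terms it doesn't know, while a folksonomy accepts whatever users type and lets the vocabulary emerge from usage counts. All names below are assumptions for illustration:

```python
# Ontology-style tagging: only terms from a maintained, controlled
# vocabulary survive. (The vocabulary here is invented.)
from collections import Counter

CONTROLLED_VOCAB = {"semantic-web", "ontology", "nlp"}

def tag_with_ontology(tags):
    return [t for t in tags if t in CONTROLLED_VOCAB]

# Folksonomy-style tagging: every tag is accepted, and the effective
# vocabulary is whatever users actually typed, weighted by use.
def tag_with_folksonomy(corpus_of_tags):
    return Counter(tag for tags in corpus_of_tags for tag in tags)

print(tag_with_ontology(["nlp", "semweb", "linked-data"]))  # ['nlp']
folk = tag_with_folksonomy([["semweb", "nlp"], ["semweb"]])
print(folk.most_common(1))  # [('semweb', 2)]
```

The trade-off in the excerpt falls out directly: the ontology discards 'semweb' because no one maintained it into the vocabulary, while the folksonomy happily surfaces it as the most popular tag.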

02 JAN 2011 by ideonexus

 The Importance of Time-Stamping to Relevance

Time-stamping is of interest because the temporal element of context is essential for understanding a text (to take an obvious example, when reading a paper on global geopolitics in 2006 it is essential to know whether it was written before or after 11th September, 2001). Furthermore, some information has a 'sell-by date': after a certain point it may become unreliable. Often this point isn't predictable exactly, but broad indications can be given; naturally much depends on whether the inform...
  1  notes

Time-stamping is a crucial function of semantic data. Some information grows less accurate over time, while other data needs temporal context to be understood.
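The 'sell-by date' idea from the excerpt lends itself to a simple sketch: treat the timestamp as metadata with an expected shelf life, and flag assertions once they outlive it. The field names and dates below are assumptions for illustration:

```python
# Hypothetical sketch: a document's timestamp as semantic metadata
# with a "sell-by date", after which its claims are flagged as
# potentially stale.
from datetime import date

def is_stale(published: date, shelf_life_days: int, today: date) -> bool:
    """True once the document has outlived its expected reliability."""
    return (today - published).days > shelf_life_days

geopolitics_paper = {"published": date(2001, 9, 1), "shelf_life_days": 30}
print(is_stale(geopolitics_paper["published"],
               geopolitics_paper["shelf_life_days"],
               today=date(2001, 10, 15)))  # True
```

As the excerpt notes, the cutoff usually can't be predicted exactly; a real system would carry broad indications (days vs. years) rather than a hard boundary.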

02 JAN 2011 by ideonexus

 Assessing Trust, Subjective Logic, Real-Valued Triple

Another key factor in assessing the trustworthiness of a document is the reliability or otherwise of the claims expressed within it; metadata about provenance will no doubt help in such judgements but need not necessarily resolve them. Representing confidence in reliability has always been difficult in epistemic logics. In the context of knowledge representation approaches include: subjective logic, which represents an opinion as a real-valued triple (belief, disbelief, uncertainty) where the...
  1  notes

Need to follow up on these concepts and learn more about them as tools for establishing the trustworthiness and quality of content.
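The real-valued triple from the excerpt is concrete enough to sketch: a subjective-logic opinion is (belief, disbelief, uncertainty) summing to 1, from which an expected probability can be projected. The class name and the base-rate projection shown are illustrative assumptions, not a reference implementation:

```python
# Minimal sketch of a subjective-logic opinion as the triple
# (belief, disbelief, uncertainty), constrained to sum to 1.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float

    def __post_init__(self):
        total = self.belief + self.disbelief + self.uncertainty
        if abs(total - 1.0) > 1e-9:
            raise ValueError("belief + disbelief + uncertainty must sum to 1")

    def expected(self, base_rate: float = 0.5) -> float:
        # Project an expected probability: the belief mass plus the
        # base rate's share of the uncertainty mass.
        return self.belief + base_rate * self.uncertainty

claim = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2)
print(claim.expected())  # 0.7 + 0.5 * 0.2 = 0.8
```

The appeal for trust assessment is that uncertainty is explicit: an opinion of (0, 0, 1) says "no evidence either way," which a single probability value cannot express.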

02 JAN 2011 by ideonexus

 The Web is More Than Text

Nevertheless, the next generation Web should not be based on the false assumption that text is predominant and keyword-based search will be adequate for all reasonable purposes [127]. Indeed, the issues relating to navigation through multimedia repositories such as video archives and through the Web are not unrelated: both need information links to support browsing, and both need engines to support manual link traversal. However, the keyword approach may falter in the multimedia context becaus...
  1  notes

Our search technologies, semantic explorations, and other online conversations are all dependent on text, but they must grow to read images, audio, and video as well.

02 JAN 2011 by ideonexus

 The Importance of Web Topology

Web topology contains more complexity than simple linear chains. In this section, we will discuss attempts to measure the global structure of the Web, and how individual webpages fit into that context. Are there interesting representations that define or suggest important properties? For example, might it be possible to map knowledge on the Web? Such a map might allow the possibility of understanding online communities, or to engage in 'plume tracing' - following a meme, or idea, or rumour, or...
  1  notes

Mapping the web allows us to find patterns in it, with potential applications.

02 JAN 2011 by ideonexus

 Graph Theory Approach to Web Topology

Perhaps the best-known paradigm for studying the Web is graph theory. The Web can be seen as a graph whose nodes are pages and whose (directed) edges are links. Because very few weblinks are random, it is clear that the edges of the graph encode much structure that is seen by designers and authors of content as important. Strongly connected parts of the webgraph correspond to what are called cybercommunities and early investigations, for example by Kumar et al, led to the discovery and mappin...
  1  notes

The graph theory approach produces a model of the web shaped like a bowtie and filled with other bowties, like a fractal. The original document includes an image of this phenomenon.
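The excerpt's core idea, pages as nodes and links as directed edges, with strongly connected parts corresponding to cybercommunities, can be sketched directly. The tiny webgraph below is invented; the SCC search is Kosaraju's standard two-pass algorithm:

```python
# Sketch: the webgraph as a dict of directed links, with strongly
# connected components (candidate "cybercommunities") found by
# Kosaraju's algorithm. The example graph is invented.
from collections import defaultdict

def strongly_connected_components(graph):
    # Pass 1: record DFS finish order on the original graph.
    seen, order = set(), []
    def dfs1(v):
        seen.add(v)
        for w in graph.get(v, ()):
            if w not in seen:
                dfs1(w)
        order.append(v)
    for v in graph:
        if v not in seen:
            dfs1(v)

    # Pass 2: DFS on the reversed graph in reverse finish order;
    # each tree found is one strongly connected component.
    rev = defaultdict(list)
    for v, ws in graph.items():
        for w in ws:
            rev[w].append(v)
    seen.clear()
    components = []
    def dfs2(v, comp):
        seen.add(v)
        comp.append(v)
        for w in rev[v]:
            if w not in seen:
                dfs2(w, comp)
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs2(v, comp)
            components.append(sorted(comp))
    return components

web = {  # page -> pages it links to
    "a": ["b"], "b": ["c"], "c": ["a", "d"],  # one tight community
    "d": ["e"], "e": ["d"],                    # another
}
print(strongly_connected_components(web))
```

Here pages a, b, c mutually reach one another through links and so form one community, while d and e form a second; the one-way link c→d connects the communities without merging them, which is exactly the asymmetry behind the bowtie picture.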

02 JAN 2011 by ideonexus

 Information Retrieval is an Arms Race Between Algorithms ...

IR is the focus for an arms race between algorithms to extract information from repositories as those repositories get larger and more complex, and users' demands get harder to satisfy (either in terms of response time or complexity of query). One obvious issue with respect to IR over the Web is that the Web has no QA authority. Anyone with an ISP account can place a page on the Web, and as is well known the Web has been the site of a proliferation of conspiracy theories, urban legends, tr...
  1  notes

As the web gets larger and its data grows more complex and, in many regards, less trustworthy, algorithms will need to grow more sophisticated to keep pace.

02 JAN 2011 by ideonexus

 The Concept of Supervenience and the Web

One view is reminiscent of the philosophical idea of supervenience [168, 169]). One discourse or set of expressions A supervenes on another set B when a change in A entails a change in B but not vice versa. So, on a supervenience theory of the mind/brain, any change in mental state entails some change in brain state, but a change in brain state need not necessarily result in a change in mental state. Supervenience is a less strong concept than reduction (a reductionist theory of the mind/brai...
  1  notes

Supervenience describes a relationship where changes in one level entail changes in another, but not vice versa.
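The excerpt's definition can be modelled as a functional dependency: mental states supervene on brain states when each brain state fixes exactly one mental state (no mental change without a brain change), while several brain states may realise the same mental state. The state labels below are invented for illustration:

```python
# Sketch of supervenience as a functional dependency between two
# levels of description. The (brain, mental) observations are invented.
def supervenes(pairs):
    """pairs: iterable of (base_state, supervening_state).
    True if the supervening level is a function of the base level."""
    mapping = {}
    for base, higher in pairs:
        if mapping.setdefault(base, higher) != higher:
            return False  # same base state, two higher states: violated
    return True

observations = [("brain1", "happy"), ("brain2", "happy"), ("brain3", "sad")]
print(supervenes(observations))                        # True: many-to-one is fine
print(supervenes(observations + [("brain1", "sad")]))  # False
```

This also shows why supervenience is weaker than reduction: the function runs one way only, so nothing requires 'happy' to correspond to a single brain state.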