How Brains Work: Patom Theory's Support from RRG Linguistics

    Conversation


  1. Mehdi Mirzapour, June 9th, 2020 at 02:39 am

    The main contribution of the article is its introduction of a brain model, called Patom Theory (PT), that can support AI in a way that is effective for natural language understanding (NLU). In principle, this should lead to humanizing natural language understanding in machines, something we all look forward to having in our systems. The article explicitly distinguishes two important notions right from the beginning: what a brain does vs. how the brain does it. This demarcation is important, since PT is mainly concerned with the first, namely what a brain does. PT considers itself highly compatible with recent functionalist theories of linguistics, in particular with Role and Reference Grammar (RRG), a linguistics framework that models the world’s diverse languages in syntax, semantics, and discourse pragmatics. This paves the way for PT to be actualized as a real NLU system, since RRG, as a research program spanning many decades, provides the linking algorithm between syntax and semantics that PT implements with consolidation and semantic sets. This makes it a wise choice among the different existing possibilities. A nice aspect of the article is that it demonstrates many non-trivial, yet linguistically insightful, examples, all of which are resolved with Pat’s pattern-matching engine.
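    To make the linking idea concrete, here is a toy Python sketch of one step of RRG’s linking algorithm: assigning the Actor and Undergoer macroroles from a logical structure’s arguments via the Actor-Undergoer Hierarchy. The data layout and function names are a deliberate simplification for illustration, not the paper’s or Pat’s implementation:

        # Toy sketch of RRG macrorole assignment via the
        # Actor-Undergoer Hierarchy (AUH). Illustrative only; the full
        # linking algorithm also covers PSA selection, case, and operators.
        AUH = ["arg_of_DO", "first_arg_of_do", "first_arg_of_pred2",
               "second_arg_of_pred2", "arg_of_pred1"]  # most actor-like first

        def assign_macroroles(arguments):
            # arguments: list of (AUH position, referent) pairs
            ranked = sorted(arguments, key=lambda a: AUH.index(a[0]))
            roles = {"actor": ranked[0][1]}           # highest-ranking argument
            if len(ranked) > 1:
                roles["undergoer"] = ranked[-1][1]    # lowest-ranking argument
            return roles

        # "Kim saw the dog", simplified from see'(Kim, dog):
        print(assign_macroroles([("first_arg_of_pred2", "Kim"),
                                 ("second_arg_of_pred2", "dog")]))
        # -> {'actor': 'Kim', 'undergoer': 'dog'}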

    Generally speaking, the idea of a PT-RRG marriage seems promising for the NLU field, and we expect to see efficient NLU applications and products built on this symbolic PT-RRG approach in the future. Nevertheless, we should also consider some potential challenges that may arise while expanding PT-RRG-based NLU systems. Preparing for the following challenges seems important from both a scientific and a strategic point of view:

    • Connectionist and symbolic approaches are not necessarily incompatible. A good example of a scholar working on this issue is Gary Marcus, who argues that understanding the mind will require integrating connectionism with classical ideas about symbol manipulation. This means it is possible to build a symbolic approach on top of a deep neural architecture. To take an example: RRG needs inputs such as syntactic terms, semantic forms, macroroles, and operators. It would be better to obtain these inputs from robust machine learning or deep learning approaches (with acceptable performance) than from the dictionary-based approach introduced in the article; the dictionary-based approach remains valid for routine procedures such as named-entity detection against closed lists (e.g., countries) in the last steps of the pipeline, as the sketch after this list illustrates. We should also consider that tasks like multi-word expression detection or metaphor detection are not easy for purely symbolic approaches, given that such phenomena cover roughly 30%-40% of human language. Updating a dictionary with newly observed cases is highly expensive and time-consuming.

    • Symbolic approaches are famously appealing in the beginning, but a system’s complexity increases as it expands to handle the variety of real-world cases. This creates two main drawbacks for symbolic systems during the expansion phase: (i) the cost of system maintenance will obviously increase, and (ii) one may be forced to introduce ad-hoc solutions that compromise the system’s integrity. These problems teach us a lesson: generalizations for any sub-module in a symbolic system should satisfy valid scientific criteria. We should also monitor the system’s response time, since this is a very important factor for customer satisfaction with NLU systems. If system performance or maintainability becomes unsatisfactory, some modules should be replaced with robust, fast alternatives.

    • “What a brain does” is the main concern of PT, and it is not hard to imagine that RRG, as a functional theory, follows the same principle in linguistics. A natural question arises: why should a PT-RRG-based system be so selective about its implementation strategy? One may argue that, since such a system has no serious concern with “how a brain does it,” we can choose whatever strategy is suitable, whether deterministic, machine learning, or deep learning methods, depending on the needs.

    • It would also be a good idea for the author to consider alternative linguistic theories, such as Yorick Wilks’ preference semantics, for bridging with PT. A survey of the different options, together with a literature review, would be a good way to improve the article.
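    As a concrete illustration of the hybrid pipeline suggested in the first point above, here is a minimal Python sketch in which a learned tagger proposes labels and a dictionary (gazetteer) lookup runs as a late, deterministic step. The names, including neural_tagger, are hypothetical stand-ins, not any real library’s API:

        COUNTRIES = {"France", "Japan", "Brazil"}  # tiny illustrative gazetteer

        def neural_tagger(tokens):
            # Stand-in for a trained model returning coarse labels.
            return [(t, "NOUN" if t[0].isupper() else "OTHER") for t in tokens]

        def tag_with_gazetteer(tokens):
            tagged = neural_tagger(tokens)          # 1. learned, probabilistic step
            return [(t, "COUNTRY" if t in COUNTRIES else label)  # 2. symbolic step
                    for t, label in tagged]

        print(tag_with_gazetteer("Kim flew to Japan".split()))
        # -> [('Kim', 'NOUN'), ('flew', 'OTHER'), ('to', 'OTHER'), ('Japan', 'COUNTRY')]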

  2. Beth Carey, April 4th, 2020 at 11:12 am

    Congratulations, John Ball, on engineering an NLU system based on the new scientific paradigm of your brain model.


Abstract

The lack of an explicit brain model has been holding back AI improvements, leading to applications that don’t model language in theory. This paper explains Patom theory (PT), a theoretical brain model, and its interaction with human language emulation.

Patom theory explains what a brain does, rather than how it does it. If brains just store, match and use patterns composed of hierarchical, bidirectional linked-sets (sets and lists of linked elements), memory becomes distributed and is matched both top-down and bottom-up using a single algorithm. Linguistics shows the top-down nature because meaning, not word sounds or characters, drives language. For example, the pattern-atom (Patom) “object level” that represents the multisensory interaction of things is stored uniquely and then associated as many times as needed with sensory memories to recognize the object accurately in each modality. This is a little like a template theory, but with multiple templates connected to a single representation and resolved by layered agreement.
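As a reading aid, the following minimal Python sketch shows one way the “hierarchical bidirectional linked-set” idea could be rendered as a data structure: a single object-level node linked both ways to per-modality sensory patterns, so matching can run bottom-up (sense to object) and top-down (object to expected senses). The class and node names are illustrative assumptions, not Patom theory’s internal representation:

    class Node:
        def __init__(self, name):
            self.name = name
            self.up = set()     # links toward higher-level (object) nodes
            self.down = set()   # links toward lower-level (sensory) nodes

    def link(parent, child):
        parent.down.add(child)  # top-down edge
        child.up.add(parent)    # bottom-up edge of the bidirectional pair

    dog = Node("dog-object")    # the uniquely stored object-level pattern
    for pattern in ("dog-visual", "bark-audio"):
        link(dog, Node(pattern))

    # Bottom-up: a matched sensory pattern activates the shared object node.
    bark = next(n for n in dog.down if n.name == "bark-audio")
    print([p.name for p in bark.up])          # -> ['dog-object']
    # Top-down: the object node predicts its other sensory memories.
    print(sorted(c.name for c in dog.down))   # -> ['bark-audio', 'dog-visual']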

In combination with Role and Reference Grammar (RRG), a linguistics framework modeling the world’s diverse languages in syntax, semantics and discourse pragmatics, many human-like language capabilities become demonstrable. Today’s natural language understanding (NLU) systems built on intent classification cannot, even in theory, deal with natural language beyond simplistic sentences, because the science they are built on is too simplistic. Adoption of the principles in this paper provides a theoretical way forward for NLU practitioners, based on existing, tested capabilities.
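To see why intent classification is too weak in principle, consider a toy bag-of-words classifier: it necessarily assigns the same intent to sentences that differ only in structure. The example below is illustrative, not a description of any particular commercial system:

    def bag_of_words(sentence):
        return frozenset(sentence.lower().split())

    intents = {bag_of_words("the dog bit the man"): "report_dog_attack"}

    for s in ("the dog bit the man", "the man bit the dog"):
        print(s, "->", intents.get(bag_of_words(s), "unknown"))
    # Both print 'report_dog_attack': word identity alone loses who bit
    # whom, which is what a syntax-to-semantics linking step recovers.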

Versions

Version 1 (2020-04-04)

Citation

John Ball (2020). How Brains Work: Patom Theory's Support from RRG Linguistics. Researchers.One, https://researchers.one/articles/how-brains-work-patom-theory-s-support-from-rrg-linguistics/5f52699d36a3e45f17ae7e6a/v1.
