Versions

Version 1 (2020-04-04)

Citations

John Ball (2020). Researchers.One. https://researchers.one/articles/20.04.00003v1

    Reviews & Substantive Comments

    3 Comments

  1. Kulvinder Panesar, March 2nd, 2021 at 05:06 pm

    Thank you so much for sharing your ideas and thoughts. This book has been a great and exciting read as part of my ongoing research and development journey. My interest is in conversational AI (dialog systems, goal-directed behaviour, and RRG), where language is a central tenet of my work, and you confirm my concerns about the hype surrounding current statistical and deep-learning language models. Their claims include understanding the meaning of an utterance and achieving high accuracy on a range of NLP tasks, which in many cases are standalone tasks and do not meet the challenge of generating grammatical responses in conversation. I agree there is some way to go, and you clearly demonstrate the gaps, the need, and a potential way forward. Thank you, John Ball and Pat Inc.

    Dr Kulvinder Panesar 2/3/2021

  2. Mehdi Mirzapour, June 9th, 2020 at 02:39 am

    The main contribution of the article is the introduction of a brain model, called Patom Theory (PT), that can support AI in a way that is effective at natural language understanding (NLU). This, in principle, should lead to humanized natural language understanding in machines: something we all look forward to having in our systems. The article explicitly distinguishes two important notions right from the beginning: what a brain does vs. how the brain does it. This demarcation is important, since PT is mainly responsible for the first, namely what a brain does. PT considers itself highly compatible with recent functional linguistic theories, in particular Role and Reference Grammar (RRG). RRG, as a linguistic framework, models the world’s diverse languages in syntax, semantics, and discourse pragmatics. This paves the way for PT to be actualized as a real NLU system, since RRG, as a research program spanning several decades, provides the linking algorithm between syntax and semantics, which PT implements with consolidation and semantic sets. It is a wise choice among the existing possibilities. A nice feature of the article is that it demonstrates many non-trivial, yet linguistically insightful, examples that are all resolved with Pat’s pattern-matching engine.
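    To make the linking idea concrete, here is a minimal, hypothetical sketch of an RRG-style linking step that maps a parsed transitive clause onto macroroles and a logical structure. All class and function names are illustrative assumptions; this is neither Pat Inc.’s engine nor the official RRG linking algorithm.

        # Minimal sketch (assumed names): RRG-style linking of syntax to semantics.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Clause:
            predicate: str             # verb heading the clause
            subject: str               # privileged syntactic argument
            obj: Optional[str] = None  # direct object, if any

        def link_macroroles(clause: Clause) -> dict:
            # Default active-voice linking: Actor <- subject, Undergoer <- object.
            roles = {"ACTOR": clause.subject}
            if clause.obj is not None:
                roles["UNDERGOER"] = clause.obj
            # Logical structure written in the spirit of RRG's do'(x, [pred'(x, y)]) notation.
            args = ", ".join(roles.values())
            ls = f"do'({clause.subject}, [{clause.predicate}'({args})])"
            return {"macroroles": roles, "logical_structure": ls}

        print(link_macroroles(Clause(predicate="eat", subject="Kim", obj="apple")))
        # {'macroroles': {'ACTOR': 'Kim', 'UNDERGOER': 'apple'},
        #  'logical_structure': "do'(Kim, [eat'(Kim, apple)])"}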

    Generally speaking, the idea of a PT-RRG marriage seems promising for the NLU field. We expect to see efficient NLU applications and products built on this symbolic PT-RRG approach in the future. Nevertheless, we should also consider some potential challenges we may face while expanding PT-RRG-based NLU systems. Being prepared for the following challenges seems important from a scientific and strategic point of view:

    • Connectionist and symbolic approaches are not necessarily incompatible. A good example of a scholar working on this issue is Gary Marcus, who argues that understanding the mind will require integrating connectionism with classical ideas about symbol manipulation. This means it is possible to build a symbolic approach on top of a deep neural architecture. To take an example: RRG needs inputs such as syntactic terms, semantic forms, macroroles, and operators. It would be better to obtain these inputs from robust machine learning or deep learning approaches (with acceptable performance) than from the dictionary-based approach introduced in the article; a sketch of such a hybrid pipeline follows this list. The dictionary-based approach remains valid for routine procedures, such as named-entity detection against a list of countries, in the last steps of the pipeline. We should keep in mind that tasks like multi-word expression detection or metaphor detection are not easy for purely symbolic approaches, given that these kinds of phenomena cover an estimated 30-40% of human language. Updating a dictionary with newly observed cases is highly expensive and time-consuming.

    • Symbolic approaches, in general, are famously appealing in the beginning, but the complexity of the system increases as it expands to handle the variety of real-world cases. This creates two main drawbacks for symbolic systems during the expansion phase: (i) the cost of system maintenance will obviously increase, and (ii) one may be forced to introduce ad-hoc solutions that compromise the system’s integrity. These problems teach us a lesson: generalizations for any sub-module in a symbolic system should satisfy valid scientific criteria. We should also monitor the system’s response time, since this is a very important factor for customer satisfaction with NLU systems. If system performance and maintenance are not satisfactory, some modules should be replaced with robust and fast solutions.

    • “What a brain does” is the main concern of PT. It is not hard to imagine that RRG, as a functional theory, follows the same principle in linguistics. A natural question arises: why should a PT-RRG based system be so selective about its implementation strategy? One may argue that since an imaginary PT-RRG based system has no serious concern with “how a brain does it”, we can choose any strategy that might be suitable, including deterministic, machine learning, or deep learning methods, depending on the needs.

    • It would be a good idea for the author to consider alternative linguistic theories, such as Yorick Wilks’ preference semantics, for bridging with PT. A survey of the different options and a review of the literature would be a good way to improve the article.
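    As a concrete illustration of the hybrid proposal in the first bullet above, the following sketch pairs a learned tagger (a stand-in for any machine learning or deep learning component) with a dictionary lookup reserved for a closed class (country names) at the end of the pipeline. All names here are assumptions for illustration, not an existing API.

        # Hypothetical hybrid pipeline: a learned tagger supplies the RRG inputs,
        # while a dictionary handles closed-class named-entity detection
        # (countries) as a late pipeline step.

        COUNTRIES = {"France", "Japan", "Brazil"}  # closed list: cheap to maintain

        class StatisticalTagger:
            """Stand-in for any ML/DL tagger with acceptable performance."""
            def tag(self, tokens):
                # Placeholder heuristic; a real system would return learned predictions.
                return [(tok, "NOUN" if tok[0].isupper() else "OTHER") for tok in tokens]

        def analyze(sentence: str) -> dict:
            tokens = sentence.split()
            tagged = StatisticalTagger().tag(tokens)          # learned component first
            entities = [t for t in tokens if t in COUNTRIES]  # dictionary lookup last
            return {"tags": tagged, "country_entities": entities}

        print(analyze("France exports wine"))
        # {'tags': [('France', 'NOUN'), ('exports', 'OTHER'), ('wine', 'OTHER')],
        #  'country_entities': ['France']}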

  3. Beth Carey, April 4th, 2020 at 11:12 am

    Congratulations, John Ball, on engineering an NLU system based on the new scientific paradigm of your brain model.

