Discussions with Professor Randy Goebel, the keynote speaker of JSAI2019

Q1) Hideyuki Nakashima
I believe that abductive/inductive logic programming combined with deep learning can compensate for many of the shortcomings of DL.
You have correctly identified the challenge of synthesizing explanatory models from the computations that create a neural network representation of the n-dimensional labelled data. The key challenge will be to use semantic models of a domain (beginning with domain-specific ontologies) as the basis for defining an inductive space/domain in which rule-like models can be constructed.
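To make the idea of an ontology-defined inductive space concrete, here is a minimal Python sketch; the ontology, the examples, and the function induce_rules are all invented for illustration, and only single-condition rules are considered.

```python
# Minimal sketch: ontology-constrained rule induction (illustrative only).
# The ontology restricts which attributes may appear in rules for a concept,
# and we keep single-condition rules that hold on all labelled examples.

from itertools import product

# Hypothetical domain ontology: concept -> attributes that are semantically relevant.
ontology = {"bird": ["has_feathers", "lays_eggs", "can_fly"]}

# Labelled examples: (attribute values, label).
examples = [
    ({"has_feathers": True, "lays_eggs": True, "can_fly": True}, True),
    ({"has_feathers": True, "lays_eggs": True, "can_fly": False}, True),   # penguin
    ({"has_feathers": False, "lays_eggs": True, "can_fly": False}, False), # crocodile
]

def induce_rules(concept):
    """Return single-condition rules 'attr = value -> concept' that never misfire."""
    rules = []
    for attr, value in product(ontology[concept], [True, False]):
        covered = [label for feats, label in examples if feats[attr] == value]
        if covered and all(covered):      # covers at least one example, all positive
            rules.append(f"{attr} = {value} -> {concept}")
    return rules

print(induce_rules("bird"))   # ['has_feathers = True -> bird', 'can_fly = True -> bird']
```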

Q2) Koiti Hasida
Probably the present neural nets cannot learn the modular structure of knowledge, which makes it impossible for them to scale and to provide explanations. Any comments?
I think it is easy for most of us to understand that finding probability-distribution "regularity" in n-dimensional labelled data is a great initial contribution to building scalable semantic models. We have to understand how to "lift" the content of their distributional models to the level of the semantic vocabulary of the application domain. Indeed, the challenge will be to find scalable, efficient computational methods to integrate the existing spectrum of semantic modelling methods, including concept identification (coupled with domain-specific ontologies), correlation and then causal relationships, and interaction processes to create and execute information experiments that resolve the contradictions that will arise.
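As a deliberately simplified sketch of what "lifting" could mean in code: cluster the outputs of a distributional model, then name each cluster with the domain-ontology concept shared by most of its members. The cluster data and the ontology below are invented.

```python
# Sketch: "lift" statistical regularities to domain concepts by mapping
# clusters of model outputs onto an ontology vocabulary via majority label.

from collections import Counter

# Hypothetical output of an unsupervised step: cluster id -> labels of its members.
clusters = {
    0: ["sparrow", "sparrow", "eagle"],
    1: ["salmon", "trout", "salmon", "eel"],
}

# Hypothetical domain ontology: instance label -> concept.
ontology = {"sparrow": "Bird", "eagle": "Bird",
            "salmon": "Fish", "trout": "Fish", "eel": "Fish"}

def lift(clusters, ontology):
    """Name each cluster with the ontology concept of the majority of its members."""
    named = {}
    for cid, members in clusters.items():
        concepts = Counter(ontology[m] for m in members)
        named[cid] = concepts.most_common(1)[0][0]
    return named

print(lift(clusters, ontology))   # {0: 'Bird', 1: 'Fish'}
```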

Q3) Hiroshi Nakagawa at RIKEN AIP
Thank you for your insightful talk. After your talk, I asked the question, "To implement instructability, an interaction process between an explainer and an explainee is necessary, isn't it?" Your answer was yes, and you suggested that the debugging process of logic programming is promising and useful for this interaction. Your answer is quite helpful for AI accountability, and I appreciate it very much.
I think your perspective is exactly correct. Most of the interaction between intelligent agents, including human-human, human-machine, and machine-machine interaction, is always about resolving ambiguity, arising largely from alternative interpretations of the semantics of knowledge.
In fact, interaction dialogues (and the models required to structure them) are an essential part of the architecture for all agents' learning (humans and machines!).
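A toy sketch of such a dialogue, in the style of algorithmic debugging of a logic program: the explainer replays the rule supporting a conclusion, and the explainee can recursively ask "why?" about each premise. The rule base here is invented.

```python
# Sketch: why-question dialogue over a tiny rule base (algorithmic-debugging style).

rules = {  # conclusion -> premises that support it (hypothetical knowledge base)
    "mortal(socrates)": ["man(socrates)", "man(X) -> mortal(X)"],
    "man(socrates)": ["observed in the corpus"],
}

def explain(conclusion, depth=0):
    """Print the support for a conclusion, recursing so the explainee can drill down."""
    indent = "  " * depth
    premises = rules.get(conclusion, ["(taken as given)"])
    print(f"{indent}{conclusion} because:")
    for p in premises:
        if p in rules:
            explain(p, depth + 1)   # the explainee asks "why?" about this premise too
        else:
            print(f"{indent}  {p}")

explain("mortal(socrates)")
```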

Q4) Takafumi Koshinaka
Explainability and performance are often in a kind of trade-off relationship. If you want explainability, you need to accept some loss of performance. True?
No. This is a misconception propagated by deep learning people. They convolve, or "mix up," two separate concepts. First, the degree to which a probability distribution learned from a finite set of n-dimensional labelled data is accurate is a different notion of "performance" from the question of the quality of an explanation. The misconception arises from a faulty belief that a trained neural network is a "better" representation of a finite set of domain data than a set of rules. However, the representation of a set of n-dimensional labelled data as a probability distribution is only a poor syntactic representation of a more elaborate semantic representation (which can be done in so many ways, from statements about the probability of data interpreted as random variables to default rules in logical representations).
Real domain-dependent causal rules, for example, will not only be a basis for explaining the probability representation of a trained network; they will also be broader than the small finite domain representation of that network, and will help debug it (e.g., identify overfitting).
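One hedged, practical reading of this point: fit a rule-like surrogate (here a shallow decision tree via scikit-learn) to a black-box model and measure how faithfully the rules reproduce it; disagreements between the rules, the black box, and fresh data are exactly the debugging signal mentioned above. The black-box function below is an invented stand-in, not a real trained network.

```python
# Sketch: a decision-tree surrogate as a rule-like explanation of a black-box model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))          # toy inputs

def black_box_predict(X):
    """Stand-in for a trained network: labels points close to the origin."""
    return (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

y = black_box_predict(X)

surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y)   # rule-like model of the network
fidelity = (surrogate.predict(X) == y).mean()               # how well the rules mimic it

print(f"surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=["x1", "x2"]))   # human-readable rules
```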

Q5) Y. Ohsawa
If AI for explanation and explanation of AI both stand, is AI then self-consistent, and will it run without a human?
In principle, we can already build systems that find minimal sets of consistent rules (e.g., consider how answer sets are created in answer set programming, a form of logic programming). So being able to identify which kinds of statements are compatible and incompatible is a foundational requirement both for science and for the development of AI knowledge architectures.
Finding consistent models or sets of rules (in any representation) is, I believe, necessary but not sufficient. Knowing which consistent set is appropriate to draw conclusions from is much more important. The inferences required to interact effectively with agents and the world depend on picking the best current partial models one has. The whole literature of default reasoning and belief revision is devoted to that.
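To make "finding consistent sets of rules" concrete, here is a brute-force Python sketch (not an ASP solver) that enumerates the largest consistent subsets of a small propositional knowledge base; choosing which of those subsets to draw conclusions from is the harder, belief-revision part. The statements are invented.

```python
# Sketch: brute-force enumeration of the largest consistent subsets of propositional statements.
from itertools import combinations, product

atoms = ["bird", "penguin", "flies"]

# Hypothetical knowledge base: each statement is a predicate over a truth assignment.
statements = {
    "bird": lambda v: v["bird"],
    "penguin": lambda v: v["penguin"],
    "penguins are birds": lambda v: (not v["penguin"]) or v["bird"],
    "birds fly": lambda v: (not v["bird"]) or v["flies"],
    "penguins do not fly": lambda v: (not v["penguin"]) or not v["flies"],
}

def consistent(subset):
    """True if some truth assignment satisfies every statement in the subset."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(statements[name](v) for name in subset):
            return True
    return False

def largest_consistent_subsets(names):
    """Return the consistent subsets of maximum size (a crude stand-in for answer sets)."""
    for size in range(len(names), 0, -1):
        found = [set(s) for s in combinations(names, size) if consistent(s)]
        if found:
            return found
    return []

for s in largest_consistent_subsets(list(statements)):
    print(sorted(s))
```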

Q6) Anonymous
There is no doubt explanations are important. However, how can we guarantee the "validity" of the explanations? They are still abductions made by us.
If gvalidityh means that explanations are 1) consistent with existing knowledge/information/data, and 2) they are sequences of rational inferences that provide sound but perhaps defeasible chains of reasoning, then yes, the identification of sets of valid explanations can be created. Abduction is not gunsoundh if based on deduction from incomplete information, and confirmation of consistency of the information used to create the explanatory argument. Deduction and probabilistic inference from incomplete information are the best we can do, from a scientific viewpoint.

Q7) Anonymous
DNNs seem to be complex systems, similar to human brains, ecosystems, etc. How do you think science (=explanation) for complex systems is possible in general?
Science is about building models for observations, to be strengthened by making predictions. Some predictions are confirmed, strengthening the model; some are refuted, requiring model adjustments.
It really doesn't matter what the domain is; rather, what is important is keeping the integrity of the tools for modelling, designing experiments, and confirming evidence or the need to adjust a model.
Just to hit directly at deep learning and complexity: Kurzweil's "singularity" is a kind of bad science that suggests the performance of a system is based on the number of active components it has. We have known this to be wrong for a long, long time.

Q8) Anonymous
gIntelligenceh doesnft mean only the activity glearning.h What is important is what kinds of events in the world to observe to understand the world.
Agreed. That is why, despite the incredible success and effectiveness of reinforcement learning on the domain of sequential decision making (SDM), merely learning how to optimize actions to maximize reward is not enough.
Learning can be both about learning principles of the world, and learning how to act in the world. Knowing what a cat is, is something different from deciding to avoid a black cat ;-)
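To make the contrast concrete, here is a toy Python sketch of one experience processed in two ways: a model-free Q-learning update that only learns how to act, and a model-based count that records what the world did, independently of reward. The states, actions, and parameters are invented.

```python
# Sketch: model-free value update vs. model-based world-model update on one experience.
from collections import defaultdict

alpha, gamma = 0.1, 0.9
Q = {}                                 # model-free: action values only
transitions = defaultdict(int)         # model-based: counts of (s, a, s') -- "what the world does"

def observe(s, a, r, s_next, actions):
    # Model-free Q-learning update: learn how to act.
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
    # Model-based update: learn what happens, independently of reward.
    transitions[(s, a, s_next)] += 1

observe("room", "open_door", r=1.0, s_next="hallway", actions=["open_door", "wait"])
print(dict(Q))            # knows that opening the door was rewarding
print(dict(transitions))  # also knows that opening the door leads to the hallway
```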

Q9) Anonymous
Do you think explanations have to be represented in natural language?
Not necessarily; the kinds of explanation depend on the knowledge of the receiver or "explainee." Most of our teacher-student interaction is focused on language, but, for example, when a teacher is explaining logical deduction, it is better to use logical symbols and rules. The spectrum of how to make an explanation is as broad as the knowledge of each individual (which may include machines teaching machines, like AlphaGo Zero being taught by AlphaGo).

Comment) Anonymous
Slide image and characters are so kawaii!!
Arigatou gozaimasu (thank you very much).

Comment) Anonymous
Thank you for the great session. I think that an explanation could be obtained by cutting images into pieces of various sizes and evaluating the classification accuracy for each of the cut images.
Yes, that is what the designer of the convolutions of a CNN does. I think your intuition is correct. As the data becomes more complex (e.g., as in even the simplest of text data), the choice of subsets of data is much more complicated; e.g., the relationship between words is not just their proximity in a text.
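Your suggestion is essentially an occlusion-style sensitivity analysis. Here is a minimal NumPy sketch, where predict_prob is an invented stand-in for any trained classifier: mask each patch in turn and record how much the predicted score drops; the patches that matter most produce the largest drops.

```python
# Sketch: occlusion sensitivity -- mask image patches and measure the score drop.
import numpy as np

def predict_prob(image):
    """Stand-in for a trained classifier's score; here it just 'likes' the bright centre."""
    return image[8:16, 8:16].mean()

def occlusion_map(image, patch=8):
    base = predict_prob(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0      # blank out one patch
            heat[i // patch, j // patch] = base - predict_prob(occluded)
    return heat                                           # large values = important patches

image = np.zeros((24, 24))
image[8:16, 8:16] = 1.0                                   # bright object in the centre
print(occlusion_map(image))                               # only the central patch matters
```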