explainer will write out a file with the statistics when either Soar exits or a
`soar init` is executed. This option is still considered experimental and in
beta.

## Explaining Learned Procedural Knowledge

While explanation-based chunking makes it easier to incorporate learning into
agents, the complexity of the analysis it performs makes it much harder to
understand how the learned rules were formed. The explainer is a new module
developed to help ameliorate this problem: it lets you interactively explore
how rules were learned.

When requested, the explainer will make a very detailed record of everything
that happened during a learning episode. Once a user specifies a recorded chunk
to "discuss", they can browse all of the rule firings that contributed to the
learned rule, one at a time. The explainer presents each of these rules with
detailed information about the identity of the variables, whether it tested
knowledge relevant to the superstate, and how it is connected to other rule
firings in the substate. Rule firings are assigned IDs so that the user can
quickly choose a new rule to examine.

The explainer can also present several different screens that show more verbose
analyses of how the chunk was created. Specifically, the user can ask for a
description of (1) the chunk’s initial formation, (2) the identities of
variables and how they map to identity sets, (3) the constraints that the
problem-solving placed on values that a particular identity can have, and (4)
specific statistics about that chunk, such as whether correctness issues were
detected or whether it required repair to make it fully operational.
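
Each of these views is reached through a subcommand of `explain`. The following
is a sketch assuming the subcommand names used by recent Soar releases; check
`help explain` for the exact syntax in your version:

```
soar % explain formation      # (1) how the chunk was initially formed
soar % explain identity       # (2) variable identities and their identity sets
soar % explain constraints    # (3) constraints placed on an identity's values
soar % explain stats          # (4) statistics, correctness issues, repair status
```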

Finally, the explainer will also create the data necessary to visualize all of
the processing described in an image using the new `visualize` command. These
visualizations are the easiest way to quickly understand how a rule was formed.

Note that, despite recording so much information, a lot of effort has been put
into minimizing the cost of the explainer. When debugging, we often let it
record all chunks and justifications formed because it is efficient enough to do
so.

Use the `explain` command without any arguments to display a summary of which
rule firings the explainer is watching. It also shows which chunk or
justification the user has selected as the current focus of its output, i.e.,
the chunk being discussed.

Tip: This is a good way to get a chunk ID so that you don’t have to type or
paste in a chunk name.
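
For example, a session might begin as follows. The chunk name here is purely
hypothetical, and the exact argument form may differ across Soar versions (see
`help explain`):

```
soar % explain                       # summary: watched rule firings and current focus
soar % explain chunk*apply-move*1    # hypothetical chunk name; makes it the discussion focus
```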
## Visualizing an Explanation
Soar's `visualize` command allows you to create images that represent processing