OK, so I am asking this generally, but it could also be used for something besides Pandorabots. How can we send unrecognized commands to a Pandorabot and get responses back from it? I know there has to be some kind of priority, so that the VC commands are checked first, and then the spoken text is passed along to the bot and the response is returned and spoken as TTS.
1. Is this possible?
2. If so, how would this have to be structured to work?
I have some extensive AIML files I would like to mess with, so I am interested in having VC check the bot when it can't find a match for what I asked. I realize this requires some kind of free dictation, and the easiest way is to create a command with a prefix word followed by the dictation, so the text string can be passed to the bot. I understand all that, but I wanted this to be more seamless.
Is there a way for it to always listen for dictation, match the commands it finds defined in VC first, and then refer the rest to Pandorabots? That may be a better way to put it.
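Just to make the fallback idea concrete, here is a rough sketch of the flow I have in mind. This is all hypothetical: the command table, the bot id, and the use of the old public Pandorabots talk-xml endpoint (which returns the reply inside a `<that>` element) are my assumptions, not anything VC actually exposes.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical table of locally defined VC-style commands (phrase -> action).
VC_COMMANDS = {
    "turn on the lights": "Lights.On",
    "what time is it": "Clock.Say",
}

def match_local_command(text):
    """Return the matching local command action, or None if unrecognized."""
    return VC_COMMANDS.get(text.strip().lower())

def parse_talk_xml(xml_text):
    """Pull the bot's reply out of a legacy Pandorabots talk-xml response."""
    root = ET.fromstring(xml_text)
    that = root.find("that")
    return that.text.strip() if that is not None and that.text else ""

def ask_pandorabot(botid, text, custid=""):
    """POST the unrecognized text to the (legacy) talk-xml endpoint."""
    data = urllib.parse.urlencode(
        {"botid": botid, "input": text, "custid": custid}
    ).encode()
    with urllib.request.urlopen(
        "https://www.pandorabots.com/pandora/talk-xml", data
    ) as resp:
        return parse_talk_xml(resp.read().decode("utf-8", "replace"))

def handle_utterance(text, botid):
    """Check the VC commands first; fall back to the bot for everything else."""
    action = match_local_command(text)
    if action is not None:
        return ("command", action)
    return ("speak", ask_pandorabot(botid, text))
```

So an unrecognized phrase would come back as `("speak", reply)` and get handed to TTS, while a known phrase fires the normal VC action without ever touching the network.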
Thanks for any input or direction.
Also, is there any way to get Program AB to run with VoxCommando? I was thinking of customizing the AIML responses so that they include some of the VC commands (e.g. var.Variable) or run some actions. I see Program AB supports the new AIML 2.0 standard and ALICE 2.0, which already has some really great features built in. It would save a ton of time in developing the personality of my smart home if I could use the AIML scripts. Pandorabots has changed everything, and the new Developer API is not publicly available yet. I am thinking that if I can have my own mini Pandorabots-like server running on my VC system, and VC can interact with it, that would be ideal. Here is what I am referring to, by the way:
https://code.google.com/p/program-ab/
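To show what I mean by a "mini Pandorabots-like server", here is a bare-bones local HTTP bridge sketch. Everything here is an assumption on my part: the port, the `/talk?input=` URL shape, and the `bot_reply` stub, which is just an echo standing in for wherever Program AB (a Java program) would actually be wired in, e.g. via a subprocess call into the JVM. VC would then only need whatever web-request action it offers to fetch the reply and speak it.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def bot_reply(text):
    """Placeholder: this is where the text would be handed to Program AB
    (e.g. through a subprocess running the Java bot) and its AIML-generated
    answer returned. The echo below is only a stand-in."""
    return "You said: " + text

class BotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect requests like /talk?input=hello+there
        qs = parse_qs(urlparse(self.path).query)
        text = (qs.get("input") or [""])[0]
        body = bot_reply(text).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the console quiet while VC polls the server.
        pass

def run(port=8085):
    """Serve forever; VC would fetch http://127.0.0.1:8085/talk?input=..."""
    HTTPServer(("127.0.0.1", port), BotHandler).serve_forever()
```

The point being: if the AIML engine sits behind a plain local URL like this, it shouldn't matter to VC whether the brain is Pandorabots in the cloud or Program AB on the same box.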