— AGENTS —
This is simultaneously the game I am most proud of and the game I feel needs the most work to make it truly acceptable as a “real” game. The concept: a completely voice-controlled audio game played through your Android phone. You call your special agents and guide them around the enemy compound to fulfill their missions.
— ^^ Video illustrating gameplay ^^ —
As you can see, you speak to your agents and they respond to your questions and commands … when they are understood.
But what about when they aren’t?
This happens a lot, I think. I’ve heard many people say the recognition doesn’t work for them. Unfortunately, they can never tell me what they said, or what the phone heard. Do they have a strong accent? Are they in a room with a lot of conversation? Is there a song playing in the background? Any of these things could interfere with the speech recognition.
Therefore, I’m very strongly considering releasing an update to the game that includes analytics. I would want to store, send, and study the following:
1. Which agent is on the phone
2. Which location they are in
3. What the phone thinks you said
4. Which command your speech was interpreted as
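To make the idea concrete, here is a minimal sketch of what one analytics event might look like. All of the names here are hypothetical, not the game’s actual code, but it shows the four fields above packed into a small JSON payload. One upside worth noting: each event would be well under 200 bytes, so even a long play session would cost only a few kilobytes of data.

```java
// Hypothetical sketch of a recognition-analytics event.
// Field names are illustrative; the real game may differ.
public class RecognitionEvent {
    final String agent;    // which agent is on the phone
    final String location; // which location they are in
    final String heard;    // raw text the phone thinks you said
    final String command;  // command the game interpreted, or "" if none

    RecognitionEvent(String agent, String location,
                     String heard, String command) {
        this.agent = agent;
        this.location = location;
        this.heard = heard;
        this.command = command;
    }

    // Compact JSON payload to send to the server.
    // (A real implementation should escape quotes in the inputs.)
    String toJson() {
        return String.format(
            "{\"agent\":\"%s\",\"location\":\"%s\","
            + "\"heard\":\"%s\",\"command\":\"%s\"}",
            agent, location, heard, command);
    }
}
```

Batching these events and sending them only over Wi-Fi would be one way to address data-plan concerns entirely.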
Since this would be purely for debugging, I feel that issuing an update to do this would not break the rules of the Jam… but do you agree? If I do incorporate analytics, would you be concerned about the amount of data sent costing you money on your data plan? Any other concerns?
If you want to try the current build before I make any final decisions on analytics, here is the submission page.