Improve partial perception of SensoryMessage targets #52
-
This seems like a good place to drop the related historic archives on this topic as well for extra context. [May 2009]
If someone is making a pheromone- or heatvision-based sense, it should be properly done systemically, not limited to one mob in one area having access to it. What happens when another developer comes along and adds infravision? Now mobs that have heatvision don't see infravision events, even though they are arguably the same thing? There was no discovery mechanism for a suitable existing sense, since it wasn't added to an enumeration. After discovery, a developer can search references and see how it's used, but that part really comes down to discoverability, which IMO we can improve. (And an enumeration should be made extensible so new values can be added and used without having to modify Core to do so.)

As for complicated things like hivemind events: that is in no way a "basic" event. One probably should not use BasicSensoryEvent for purposes like orchestrating hivemind mobile AI. You're talking about a complex, custom system; perhaps derive your own event types for that, at a game-library-level "global" event housing, and if it makes sense to have a string parameter there, knock yourself out. I see no reason for us to complicate BasicSensoryEvent with more than the enumeration for type, though. We're not inventing new events (nor senses) at runtime, so using strings doesn't really make sense to me: it allows typo bugs and encourages non-discoverability, IMO.
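As a hedged sketch of what an "extensible enumeration" could look like, assuming nothing about current WheelMUD types (every name here is hypothetical): Core ships well-known, discoverable sense values as static fields, while a game library can register new ones like "HeatVision" at load time without modifying Core, and a later infravision author can find and reuse the registered sense rather than forking it via a typo-prone free-form string.

```csharp
using System.Collections.Generic;

public sealed class SenseType
{
    private static readonly Dictionary<string, SenseType> Registry = new();

    public string Name { get; }

    private SenseType(string name) => Name = name;

    // Core-provided values: discoverable via IntelliSense and "find references".
    public static readonly SenseType Sight = Register("Sight");
    public static readonly SenseType Hearing = Register("Hearing");

    // Game libraries call this once at startup; registering an existing
    // name returns the existing instance rather than a duplicate, so
    // "HeatVision" can't silently fork into "HeatVison".
    public static SenseType Register(string name)
    {
        if (!Registry.TryGetValue(name, out var sense))
        {
            sense = new SenseType(name);
            Registry[name] = sense;
        }
        return sense;
    }
}
```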
On SensoryEvents over multiple senses
Yeah, I can see MessageStrength playing a key role here, with racial modifiers and whatnot adjusting things. I think blindness/deafness-type effects should basically reuse an AlterSenseEffect with a max dampening value, while there could also be spells which give you a moderate temporary boost.
The code that takes the resulting event for delivery (the EventProcessor) would somehow figure out the best-matched sense among the player's current senses and display that message. Of course, the messages needn't be the same per sense, and often a single message wouldn't make sense across all of them.
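A hedged sketch of that selection step, assuming each receiver tracks an effectiveness per sense (0-100, which an AlterSenseEffect-style dampener pushes toward 0) and the event carries a strength per sense; none of these names are existing WheelMUD APIs:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class SenseMatcher
{
    // Returns the name of the best-perceived sense, or null if the
    // receiver perceives the event through no sense at all.
    public static string PickBestSense(
        IReadOnlyDictionary<string, int> receiverEffectiveness,
        IReadOnlyDictionary<string, int> eventStrengthPerSense)
    {
        return eventStrengthPerSense
            .Where(e => receiverEffectiveness.TryGetValue(e.Key, out var eff) && eff > 0)
            // Final perceived strength = event strength scaled by the
            // receiver's current effectiveness in that sense.
            .OrderByDescending(e => e.Value * receiverEffectiveness[e.Key])
            .Select(e => e.Key)
            .FirstOrDefault();
    }
}
```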
Another thing to consider eventually is resolving variables through the contextual string builder based on the final perception strength. For example, if it's really dark and you can barely see someone's silhouette, maybe it should say "You see someone enter the room" rather than "You see Bob enter the room"... just more food for thought.
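A hedged sketch of what such perception-aware token resolution could look like; the threshold values and method name are illustrative assumptions, not an existing API:

```csharp
public static class PerceptionTokens
{
    public static string ResolveActorToken(string actorShortName, int perceivedStrength)
    {
        if (perceivedStrength >= 50)
        {
            return actorShortName;   // clearly seen: "Bob enters the room."
        }

        return perceivedStrength > 0
            ? "someone"              // a barely-seen silhouette in the dark
            : string.Empty;          // not perceived: suppress the sight message
    }
}
```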
The idea with senses is to allow more realism in the messages we display. For instance, if a character in the room turns on a new flashlight but another character is blind, we don't have to add crazy blindness logic to the flashlight functionality to make sure the blind person doesn't get any output... we just issue a sight-based message, and the blind character's own senses filter it out.
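In other words, a hypothetical flashlight action would only declare what the event looks like, reusing the SensoryMessage shape shown below for "knock" (here `flashlightMessage` stands in for an illustrative ContextualString):

```csharp
// Sight-only event: no blindness logic needed in the flashlight code
// itself; receivers without effective Sight simply never perceive it.
var flashlightSM = new SensoryMessage(SensoryType.Sight, 100, flashlightMessage);
```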
Yes, that's why builders should always be able to use a simple version and only opt to go crazy with such cases if they choose to. The system should allow as much or as little detail/granularity as desired. Implementing multiple levels of sight-based messages for different qualities of remaining sight would just be a matter of opting into that extra granularity. But Core code should endeavor to keep it easy to specify multiple sensory situations, and Core's own actions should generally model that bar.
-
One thing that was somewhat impacted by the recent large refactoring to target .NET Core and remove a bunch of dependencies (e.g. NVelocity) was that ContextualMessages often passed target objects that were entirely unused. A few areas that were more context-sensitive got simplified a bit. We didn't lose much in these cases, but it did bring attention to how inconsistent we were, and that we don't seem to have a common bar to aim for.
To summarize what we're up against: the SensoryMessage often involves multiple targets, which may have different perception traits. For example, the "knock" command today supplies a ContextualString like:
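Roughly, it provides one fixed string per audience; the wording and member names below are a hedged illustration rather than the verbatim source:

```csharp
// Illustrative only: one fixed string per audience, so each receiver
// gets exactly one of these messages regardless of perception quality.
var thisRoomMessage = new ContextualString(sender.Thing, door)
{
    ToOriginator = "You knock on the iron door.",
    ToReceiver = "a goblin knocks on you.",
    ToOthers = "a goblin knocks on the iron door.",
};
```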
and then sends the Request and subsequent Event with a SensoryMessage like:
```csharp
var thisRoomSM = new SensoryMessage(SensoryType.Sight | SensoryType.Hearing, 100, thisRoomMessage);
```
This is leaps and bounds beyond traditional MUD code, which would typically hard-code the messaging and any one-off interactions (such as an immobilization spell effect that prevents you from knocking) directly in the command code itself, making for a large mess. Still, it leaves some things to be desired: namely, the handling of partial perception scenarios. The code we have currently will raise an event that produces just one of those 3 exact strings regardless of your perceptive capabilities, like "a goblin knocks on the iron door" even if the goblin is invisible, or if you can hear the event but not see it (so how do you know it's a goblin?), and so on, because the code considers it sufficient to have either Hearing or Sight in the same room to receive the full message.
In actuality, there are lots of permutations with infinite potential complexity, which could have better output. Some examples:
- The actor is invisible to a particular receiver, so "a goblin" should degrade to "someone" (or the visual portion should drop entirely).
- The receiver can hear the knock but not see it, so they shouldn't learn what kind of creature is knocking.
- The receiver's perception is marginal (dim light, dampened senses), so identifying detail should be reduced rather than all-or-nothing.
This is one thing that I think NVelocity was trying to handle (but only a little, and poorly): being able to figure out whether "a goblin" should actually be "someone" for the event receiver, and so on.
How many of these scenarios can we support, with a given system, without getting too complex for the consumers of the contextual message code (like the Knock command in our example)?
Perhaps we could strike a good balance where we pass a few references to the common actors (the one performing the action, the one targeted by the action) and have a token replacement system that knows to print either the short name of the target or "someone"/"something" if ill-perceived by the receiver, and so on. Then we could allow the contextual string to have multiple states, honored in the priority order they are provided: "if heard, use these base messages... else if seen, use these base messages," and so on. I think it would help to try to pseudo-code some scenarios (see the sketch below), but I'll have to come back to this later; maybe this will look scary and too complicated in the end. Being able to opt in to complex sensory scenario handling may help, but if so, Core should at least strive for consistency across the board, handling the best supported options wherever possible.
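A hedged pseudo-code sketch of that opt-in "multiple sensory states" idea. `SenseState` and `RenderForReceiver` are hypothetical names; only SensoryType (already combinable via `|` per the snippet above) comes from the existing code:

```csharp
using System.Collections.Generic;

public record SenseState(SensoryType Sense, string ToOthers);

public static class PartialPerceptionRenderer
{
    // Walks the builder-supplied states in priority order and returns the
    // message for the first sense the receiver actually perceives.
    public static string RenderForReceiver(
        IReadOnlyList<SenseState> statesInPriorityOrder,
        SensoryType receiverPerceivedSenses)
    {
        foreach (var state in statesInPriorityOrder)
        {
            if ((receiverPerceivedSenses & state.Sense) != 0)
            {
                return state.ToOthers;
            }
        }

        return null; // nothing perceived: the receiver gets no output
    }
}
```

For the "knock" example, the builder might supply a Sight state carrying "a goblin knocks on the iron door" and a Hearing fallback like "you hear knocking nearby", so a blinded receiver in the room still gets plausible output instead of the full visual description.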
We should think critically and discuss how far we want to take this, though. Maybe the existing system is "good enough" through v1.0, and we should weigh any improvements against a complexity-versus-benefit analysis. Or maybe we can find a better bar, file Issues to reconcile existing actions to meet that bar, and document some Best Practices here.