
Reactive vs. call semantics
Michael Shilman, 5 Aug 1999

I have some design issues for the sketch package. In the current design (and implementation), communication between a low-level recognizer and a high-level recognizer (and between the high-level recognizer and the application) is achieved through an event-listener mechanism. Mouse events are passed to the low-level recognizer, which in turn generates recognition events, etc. Control and information flow up the tree in a "reactive" fashion.
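
Here is a minimal sketch of what that reactive flow looks like; the class and interface names (Stroke, StrokeListener, LowLevelRecognizer, HighLevelRecognizer) are hypothetical stand-ins, not the actual sketch package API:

    import java.awt.event.MouseEvent;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical types for illustration; not the actual sketch-package API.
    class Stroke { /* time-stamped points collected from mouse events */ }

    interface StrokeListener {
        void strokeRecognized(Stroke s);   // fired by the low-level recognizer
    }

    class LowLevelRecognizer {
        private final List<StrokeListener> listeners = new ArrayList<StrokeListener>();
        private Stroke current;

        void addStrokeListener(StrokeListener l) { listeners.add(l); }

        // Mouse events flow down into the recognizer ...
        void mousePressed(MouseEvent e) { current = new Stroke(); }
        void mouseDragged(MouseEvent e) { /* append e.getX(), e.getY() to current */ }

        // ... and recognition events flow back up to registered listeners.
        void mouseReleased(MouseEvent e) {
            for (StrokeListener l : listeners) {
                l.strokeRecognized(current);
            }
        }
    }

    // A high-level recognizer is itself a listener: it registers with the
    // low-level recognizer and in turn notifies its own clients, so events
    // propagate up the tree reactively.
    class HighLevelRecognizer implements StrokeListener {
        public void strokeRecognized(Stroke s) { /* classify s, fire gesture events */ }
    }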

It seemed like the right thing to do before, but the more I think about it, the more I think it can be done using standard call semantics. Control would flow down the tree, and information would be passed back up the tree. The application would get a mouse event, ask its high-level recognizer to recognize the event, which in turn would ask a low-level recognizer, etc. The return values of these method calls would propagate information back up the tree.
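
Recast with call semantics, the same hypothetical names might look like this (again only a sketch, not the real API); the application now drives the recognizers rather than listening to them:

    import java.awt.event.MouseEvent;

    // Hypothetical types for illustration; not the actual sketch-package API.
    class Stroke  { /* a completed stroke */ }
    class Gesture { /* the interpretation of one or more strokes */ }

    class LowLevelRecognizer {
        // Control flows down: the caller hands each mouse event to the recognizer.
        // Information flows back up through the return value (null until a
        // stroke is complete).
        Stroke recognize(MouseEvent e) {
            /* accumulate points; return a finished Stroke on mouse release */
            return null;
        }
    }

    class HighLevelRecognizer {
        private final LowLevelRecognizer low = new LowLevelRecognizer();

        // The application calls this for each mouse event; the high-level
        // recognizer delegates downward and interprets whatever comes back.
        Gesture recognize(MouseEvent e) {
            Stroke s = low.recognize(e);
            if (s == null) {
                return null;             // stroke not finished yet
            }
            return classify(s);          // result propagates up as a return value
        }

        private Gesture classify(Stroke s) {
            /* match s against known gestures */
            return new Gesture();
        }
    }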

I'm contemplating a redesign of the package, and it seems like the call semantics are more intuitive/natural and require far fewer lines of code. On the other hand, the reactive semantics seem to be more flexible (fanout is possible, for example, though I can't think of where it would be useful in the case of recognition--debugging perhaps? See the small sketch after the list below). Some pros of the reactive approach:

  1. Consistent with the Java event model--e.g. a client would simply "register" with a recognizer to be updated with a much higher level of event than the mouse press.
  2. Consistent with the Beans model, which could be a plus.
  3. Asynchronous recognition is a better fit with this model, though we are planning not to support that at this point in time.
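
For what it's worth, the fanout case mentioned above is easy to picture with the hypothetical listener API from the first sketch: a debugging tap can be registered alongside the normal recognition path without either knowing about the other.

    // Fanout (assumes the hypothetical listener sketch above): several
    // clients register for the same recognition events.
    class FanoutExample {
        public static void main(String[] args) {
            LowLevelRecognizer low = new LowLevelRecognizer();
            HighLevelRecognizer high = new HighLevelRecognizer();

            low.addStrokeListener(high);                                    // the normal recognition path
            low.addStrokeListener(s -> System.err.println("debug: " + s)); // a debugging tap
        }
    }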

In a discussion with Ali, he compared the two approaches to an OS "service" versus "event" model. His view was that if the application is going to deal primarily with events, it would be more consistent for recognition to also provide an event-based interface. From my perspective, if the application is dealing with low-level mouse events anyway, it might be more consistent to abstract the recognition into a service so that the application doesn't have to deal with so many types of events.

I don't want this to be a distraction, and I think there are higher priority items to deal with, but I want to put this forward as a potential discussion topic if anybody has thoughts on this.
