Re: Process Control Extensions - kill/reset/throw scheduling, exception handling

From: Philipp A. Hartmann <philipp.hartmann@offis.de>
Date: Fri Aug 27 2010 - 14:36:00 PDT

Stuart, Bishnupriya, All,

Sorry for chiming back in so late in the discussion, but I have been
quite busy and out of the office over the last couple of days.

Please find some comments interspersed below.

On 25/08/10 23:05, Stuart Swan wrote:

> Again, I think the "immediate semantics" for kill/reset/throw as specified
> in the Cadence proposal is the least surprising from the POV of the user.
> I haven't heard anyone disagree with this point. Immediate semantics
> are also (most likely) consistent with modeling languages such as UML state diagrams, etc.

I agree that users (including me) expect a kill/throw/reset to have an
"immediate" effect on the target. But I have indeed been quite
surprised that this extends to these operations being "blocking", i.e.
that they are obliged to have completed when the calls return.

 - SystemC has simultaneity in terms of simulation time (and deltas)
   and is scheduled explicitly with a few modelling constructs. Why
   add an implicit exception to this rule for kill/throw/reset?

 - sc_spawn is not "immediate" in this strong sense. The new process
   is only scheduled to run in the current evaluation cycle instead of
   already having been started when sc_spawn returns (see the small
   sketch after this list).

 - Usually, synchronous cancellation has to be set up explicitly
   (cf. pthread_cancel).
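
To make the sc_spawn point concrete, here is a small sketch (the names
are made up, and it assumes the child is spawned as a default thread
process): sc_spawn has already returned before the child gets its first
chance to run, within the same evaluation phase.

#include <systemc>
#include <iostream>
using namespace sc_core;

static void child()
{
    std::cout << "child runs" << std::endl;         // printed second
}

static void parent()
{
    sc_spawn(&child, "child");
    // The child is runnable now, but has not executed yet:
    std::cout << "sc_spawn returned" << std::endl;  // printed first
}

int sc_main(int, char*[])
{
    sc_spawn(&parent, "parent");
    sc_start();
    return 0;
}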

  In short: I would have expected an effect in the same evaluation cycle
without further guarantees. If a user is interested in the completion
of a kill request, why not use a wait(h.terminated_event()) for this?
Maybe a reset_event() could be useful as well.
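
As a minimal sketch of this alternative (module and process names are
made up; the terminated() check merely guards against the case where
the kill has already completed before the wait is reached):

#include <systemc>
using namespace sc_core;

SC_MODULE(top)
{
    sc_process_handle victim_h;

    SC_CTOR(top)
    {
        // spawn the victim so that a handle to it is available
        victim_h = sc_spawn(sc_bind(&top::victim, this), "victim");
        SC_THREAD(perpetrator);
    }

    void victim()
    {
        for (;;) wait(10, SC_NS);              // runs until killed
    }

    void perpetrator()
    {
        wait(25, SC_NS);
        victim_h.kill();                       // request the kill ...
        if (!victim_h.terminated())            // ... and, if it has not
            wait(victim_h.terminated_event()); //     completed yet, wait for it
        // from here on, the victim is guaranteed to be gone
    }
};

int sc_main(int, char*[])
{
    top t("top");
    sc_start(sc_time(50, SC_NS));
    return 0;
}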

  Moreover, in the currently available OSCI-internal implementation,
kills issued from within a method process are not synchronised and
return before the kill/reset/throw has had any effect (see attached
example). Otherwise, a method process could suddenly be blocked in the
middle of its execution, which is certainly more surprising. So that's
an inconsistency (or a bug in the implementation).

> I can think of numerous semantic issues if we don't have immediate semantics - e.g. what
> happens if two perpetrators try to throw an exception in a victim process in the same
> evaluation cycle. Do we guarantee which throw wins and which one loses ? Do we somehow
> notify the loser that he has "lost".

  Since the order of process evaluation is unspecified as of now, there
is no need for any further guarantee here. My proposal would be to
execute the requests in any order and react accordingly as currently
defined (e.g. the second of two kills is a no-op).
  The only difference implementation-wise would be that a queue (or any
other container) of pending requests would be needed. Since these cases
are quite likely modelling errors, we could consider marking them as
errors as well, though.
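
To illustrate the queue idea (a kernel-agnostic sketch only; none of
these types exist in SystemC, and a real kernel would of course
integrate this differently):

#include <functional>
#include <queue>

// one pending kill/reset/throw request against a single target process
struct pending_request
{
    enum kind_t { KILL, RESET, THROW } kind;
    std::function<void()> apply;   // the actual action on the target
};

// per-process container of control requests issued in one evaluation cycle
class request_queue
{
    std::queue<pending_request> pending_;
    bool killed_;

public:
    request_queue() : killed_(false) {}

    void push(const pending_request& req) { pending_.push(req); }

    // drained by the scheduler at a well-defined point of the cycle
    void drain()
    {
        while (!pending_.empty()) {
            pending_request req = pending_.front();
            pending_.pop();
            if (killed_)               // target already killed:
                continue;              // further requests are no-ops
            req.apply();
            if (req.kind == pending_request::KILL)
                killed_ = true;
        }
    }
};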

  Due to method processes having their kills delayed (see above), we
might already need to handle these corner cases anyhow. Otherwise we
would have to deal with preempted method processes, which I rather
dislike and which may be difficult to implement.

[snip]
>> |-----Original Message-----
>> |From: Bishnupriya Bhattacharya
[snip]

>> |In internal discussions with Stuart and Mac, we concluded that the existing
>> |mechanism whereby the target has the provision of catching
>> |sc_unwind_exception (previously called sc_kill_exception) and performing
>> |any necessary clean-up, is adequate to address the requirement of the
>> |target being able to leave things in a clean state.

Agreed. Two points on this:

First, is an implementation obliged to detect whether an application
does not conform to the "rethrow and no scheduling" requirement? This
is currently not defined in the proposal.

Secondly, due to the obligation to avoid any scheduling during unwinding
caused by a kill or reset, there is a need to detect this case from
within destructors.

  As an example, we have proxy objects doing protocol/synchronisation
stuff during construction and destruction. So there are cases where
SystemC scheduler calls are made inside a destructor. To avoid this
during unwinding, something like

bool sc_is_unwinding() {
 return sc_get_current_process_handle().is_unwinding();
}

would be needed, since one cannot catch the current exception from
within a destructor during stack unwinding. std::uncaught_exception
serves a similar purpose in the C++ standard library for related
situations.
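
A sketch of how such a guard could be used (the proxy class and its
synchronisation calls are made-up placeholders for the kind of objects
mentioned above):

#include <systemc>
using namespace sc_core;

struct proxy
{
    proxy()
    {
        wait(SC_ZERO_TIME);   // some protocol/synchronisation on construction
    }

    ~proxy()
    {
        // Analogous to checking std::uncaught_exception(): skip all
        // scheduler interaction while the process is being unwound by
        // a kill or reset.
        if (sc_get_current_process_handle().is_unwinding())
            return;
        wait(SC_ZERO_TIME);   // normal clean-up synchronisation
    }
};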

Thanks,
  Philipp

-- 
Philipp A. Hartmann
Hardware/Software Design Methodology Group
OFFIS
R&D Division Transportation | FuE-Bereich Verkehr
Escherweg 2 · 26121 Oldenburg · Germany
Phone/Fax: +49-441-9722-420/282 · PGP: 0x9161A5C0 · http://www.offis.de/
