Hi Shabtay,

> We think that synchronization should take place implicitly on
> transaction boundaries in what we define as the reactive (default) mode.
> Reactive mode insures correct bring up of the environment w/o any
> pipe/buffering latencies.

By `transaction' I assume you mean units of data that are bounded by the
eom flag?

The pipes proposal states that pipes are fully deterministic. This means
that no matter what goes on under the hood, including how
synchronizations occur, a simulation will always produce the same
results (assuming, of course, that the user code is not introducing
non-determinism of its own, e.g., by using pseudo-random processes). So
requiring an implicit flush/synchronization on transaction/eom
boundaries will not change the behavior of the simulation, and hence
there is no need for this requirement.

Since you are drawing a distinction between synchronizing on
eom/transaction boundaries and not doing so (what you call streaming), I
have to assume you believe simulations could behave differently in the
two modes, in other words, that pipes are *not* deterministic. Is that
your impression?

Note that the eom flag can already cause synchronizations to happen. For
input pipes, for instance, the consumer will block until enough data to
satisfy its request becomes available. Blocking on the HDL side implies
a context switch to the C side, i.e., a synchronization. The request
would also be satisfied when the last part of the transaction is
received, i.e., when the eom flag is set, even if the transfer is less
than the full number of blocks asked for (can you confirm this, John?).
So even if the C side did not flush, the hardware side would wake up
when it received the last bits of the message. In that sense, eom can
create transaction boundary synchronizations.

> [...] While it is a benefit that Flush
> could be called, we don't thing that BFMs and proxy models should be
> required to call Flush() for correct synchronization semantics to occur.
Flushing is not required for correct synchronization semantics to occur;
that is, flushing is not required to achieve deterministic behavior.

> The means to support streaming is to modify the synchronization
> semantics on a per pipe basis to streaming.

This can be handled under the hood. In other words, it is in the
implementation domain, and implementations are free to provide knobs for
the user to tweak the synchronization strategies and internal buffering.

> Implicit NOOP is an important attribute of the interface.

I am worried that implicit noop can introduce non-determinism. Do you
have any thoughts on this?

> Output BFMs should not be required to know the size of the largest
> transaction they deliver neither consider issues such as the maximum
> buffer size designated by the implementation on the HW side.

Actually, I think it is quite reasonable for a BFM designer to know the
largest transaction size they will handle, assuming they are not
streaming, where the transaction size is essentially infinite. I can't
think of any protocol that does not have a bounded transaction size. On
the other hand, I agree that the BFM/BFM designer should not have to
know the size of the buffer contained in the SCE-MI infrastructure. The
pipes proposal does not imply this.

> Assume that the implementation has decided to allocate buffer whose
> depth is 64 bytes on an output pipe. The transaction delivered is 151
> bytes long. Assume the consumer wishes to read full transactions. It is
> up to the implementation to manage the transport such that the consumer
> will read the entire 151 bytes long transaction independently of the
> length of the buffer. Neither the producer BFM nor the consumer Proxy
> should be aware how much buffer was designated in the HW. The producer
> should have the liberty to deliver transitions of any size as produced
> by the interface.

I think the pipes proposal handles this just fine. Did you find this to
be a hole in the proposal?
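To make the semantics concrete, here is a small illustrative model of the
64-byte-buffer/151-byte-transaction scenario. This is *not* the SCE-MI
pipes API; every name in it (pipe_t, pipe_try_receive,
read_full_transaction, HW_BUFFER_DEPTH) is hypothetical, chosen only to
mirror the semantics discussed in this thread: the producer delivers the
transaction in buffer-sized chunks with eom on the last chunk, the
consumer's request is satisfied once eom arrives, and neither end needs
to know the buffer depth the implementation chose.

```c
/* Illustrative model only -- NOT the SCE-MI pipes API. */
#include <assert.h>
#include <string.h>

#define HW_BUFFER_DEPTH 64   /* implementation-chosen; invisible to BFM/proxy */

typedef struct {
    unsigned char buf[HW_BUFFER_DEPTH];
    int count;       /* bytes currently buffered */
    int eom_seen;    /* the final chunk of the transaction has arrived */
} pipe_t;

/* Consumer-side primitive: take whatever is buffered (up to max);
 * *eom_out reports whether the transaction is now complete.  In a real
 * implementation an unsatisfied request would block, i.e. force a
 * context switch to the producing side. */
static int pipe_try_receive(pipe_t *p, unsigned char *out, int max,
                            int *eom_out)
{
    int n = max < p->count ? max : p->count;
    memcpy(out, p->buf, n);
    memmove(p->buf, p->buf + n, p->count - n);
    p->count -= n;
    *eom_out = (p->eom_seen && p->count == 0);
    return n;
}

/* Infrastructure view: satisfy a consumer's "read one full transaction"
 * request by cycling the producer and consumer through the bounded
 * buffer until eom, regardless of how the transaction length relates to
 * HW_BUFFER_DEPTH.  Returns the number of bytes delivered. */
static int read_full_transaction(pipe_t *p, const unsigned char *txn,
                                 int txn_len, unsigned char *rx)
{
    int sent = 0, got = 0, eom = 0;
    while (!eom) {
        int chunk = txn_len - sent;
        if (chunk > HW_BUFFER_DEPTH - p->count)
            chunk = HW_BUFFER_DEPTH - p->count;
        memcpy(p->buf + p->count, txn + sent, chunk);  /* producer fills */
        p->count += chunk;
        sent += chunk;
        if (sent == txn_len)
            p->eom_seen = 1;               /* last chunk carries eom */
        got += pipe_try_receive(p, rx + got, txn_len - got, &eom);
    }
    return got;
}
```

A 151-byte transaction moves through as chunks of 64, 64, and 23 bytes,
and the consumer's request completes on the eom of the final chunk --
the producer never sees the buffer depth, and the consumer never sees
the chunking.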
On the topic of global flush, I don't think it is needed at the end of
the test, since the pipes should empty by virtue of the eom flag. As for
the middle of the test, you are thinking of a scenario where all pipes
are being flushed and simulation time is allowed to pass until all pipes
have flushed. Any pipe that flushed early would be in implicit noop
mode, I assume. Is that right? So does your global flush in the middle
of the test rely on implicit noop, or do you see it making sense even
without implicit noop?

Per

--
Per Bojsen                    Email: <bojsen@zaiqtech.com>
Zaiq Technologies, Inc.       WWW:   http://www.zaiqtech.com
78 Dragon Ct.                 Tel:   781 721 8229
Woburn, MA 01801              Fax:   781 932 7488

Received on Wed Feb 8 17:21:34 2006
This archive was generated by hypermail 2.1.8 : Wed Feb 08 2006 - 17:21:44 PST