RE: Streaming, batching, concurrent and alternating

From: Shabtay Matalon <shabtay_at_.....>
Date: Wed Jun 22 2005 - 16:01:26 PDT
Sorry guys, I have been in long, boring meetings up to now and missed all
the fun.

I inserted my comments below.


>> I proposed using the term concurrent to contrast with alternating as it
>> clearly distinguishes the two use models.
>
>Right, but those terms refer to the execution model, not the data transfer
>model.  Streaming requires a concurrent execution model, but a concurrent
>execution model does not require streaming.
[Shabtay] Agree. I can definitely go for defining concurrent and
alternating as execution models.
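
To make sure we all mean the same thing by these terms, here is a rough
sketch of the distinction as I read it. This is plain C++ of my own, not a
proposed API; names like hw_process(), alternating_run() and
concurrent_run() are purely illustrative assumptions.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// "Hardware" side reduced to a trivial transform for illustration only.
static int hw_process(int stimulus) { return stimulus * 2; }

// Alternating: SW and HW take turns; only one side advances at a time.
void alternating_run() {
    for (int i = 0; i < 4; ++i) {
        int resp = hw_process(i);          // SW is blocked while "HW" runs
        std::cout << "alternating: " << resp << '\n';
    }
}

// Concurrent: the SW producer and the HW consumer run at the same time and
// are coupled only through a channel (unbounded here for brevity; the
// finite-FIFO question comes up further down in this mail).
void concurrent_run() {
    std::queue<int> chan;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    std::thread producer([&] {
        for (int i = 0; i < 4; ++i) {
            { std::lock_guard<std::mutex> lk(m); chan.push(i); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
    });

    std::thread consumer([&] {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !chan.empty() || done; });
            if (chan.empty()) break;       // producer finished and channel drained
            int s = chan.front(); chan.pop();
            lk.unlock();
            std::cout << "concurrent: " << hw_process(s) << '\n';
        }
    });

    producer.join();
    consumer.join();
}

int main() { alternating_run(); concurrent_run(); return 0; }

Either execution model can sit under the same message-passing API; the
difference is only in which side is allowed to run when.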

Russ,

I agree with you that determinism is important, but I read the email
thread and saw quite a debate on the scope of determinism that can be
accomplished. Determinism between pure HDL runs on VCS/MTI/IUS does not
even exist, as each vendor has only managed to achieve a high level of
determinism within its own tools but not across the different simulators.
Add the complexity of linking with hardware and HVLs, and the challenge
is even greater.

I think that eventually we as a group will determine what is acceptable.
On the other hand, we need to define use models that are understood by
users, transactor modelers, and the broader community. I thus advocate
that we continue to apply the determinism criteria strictly in our
decision process, yet use practical terms such as concurrent and
alternating for the execution models.


>
>> It has also been quite common to use the term streaming even when the
>> stream is being stopped.
>
>This is a quibble, but I would say that if a stream stalls it is no longer
>streaming, at least that is how I think of it when my streaming video
>viewer freezes . . .
[Shabtay] The issue with using the term streaming is that it requires
perfect speed matching between the producer and consumer, or unlimited
FIFOs. In practice, and in particular when the producer or consumer of
streaming data is an emulator, you must deal with finite
consumption/generation rates and finite buffers. It is also quite
possible that the SW side will have limited bandwidth to produce a
stream at the required rate, for example when a constraint solver gets
involved.

I would like to learn more from you, Russ, about how you see this
free-running streaming working. At least until I understand this better,
I would suggest that we assume a finite FIFO will be instantiated on the
hardware side for streaming data coming in and out, accompanied by a
handshake protocol that manages communication when the FIFO fills or
starves. If you as a user can build an environment in which the FIFOs
never fill or starve, that is your gain.
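
Just so we are concrete about what I mean by a finite FIFO with a
handshake, below is a rough C++ sketch of my own. It is not a proposed
API; the class name BoundedFifo, the capacity of 8, and the message type
are assumptions purely for illustration.

#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class BoundedFifo {
public:
    explicit BoundedFifo(std::size_t capacity) : capacity_(capacity) {}

    // Producer side: blocks when the FIFO is full (back-pressure handshake).
    void put(uint32_t msg) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < capacity_; });
        q_.push(msg);
        not_empty_.notify_one();
    }

    // Consumer side: blocks when the FIFO is empty (starvation handshake).
    uint32_t get() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        uint32_t msg = q_.front();
        q_.pop();
        not_full_.notify_one();
        return msg;
    }

private:
    std::size_t capacity_;
    std::queue<uint32_t> q_;
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
};

int main() {
    BoundedFifo fifo(8);   // finite buffer standing in for the HW-side FIFO

    // SW-side producer: generates stimulus and stalls if the FIFO fills.
    std::thread producer([&] {
        for (uint32_t i = 0; i < 32; ++i) fifo.put(i);
    });

    // "HW"-side consumer: drains the FIFO and stalls if it starves.
    std::thread consumer([&] {
        for (uint32_t i = 0; i < 32; ++i) std::cout << fifo.get() << '\n';
    });

    producer.join();
    consumer.join();
    return 0;
}

If the two sides happen to be perfectly rate-matched, neither wait ever
fires and you effectively get free-running streaming; otherwise the
handshake simply throttles whichever side gets ahead.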


>
>> Do you both agree that using "concurrent" as the reciprocal to
>> alternating is a better name?
>
>For the execution model, yes, but not for the data transfer model.
>
>> Batching of messages indeed allows optimizing the transport layer for
>> performance. If we still want to use the term streaming, I would
>> suggest that we narrow its definition to mean
>
>> a) It runs in concurrent mode only
>> b) It batches messages throughout the session.
>
>Actually, (b) is not required for streaming to occur.  You can send
>messages individually as long as the buffer on the consumer side does
>not dry up.  In fact, when running concurrently, you might say there
>are three separate processes running: the producer, the consumer, and
>the channel.  The per-message overhead of transferring a single message
>can often be hidden in such a scenario, making it less necessary to
>burst or batch messages.
[Shabtay] I agree with this. What I don't see is how, in practice, we
can claim that a channel implemented on a HW emulation box cannot dry
up. I'd rather assume that it will dry up, as explained above.
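
On the batching point in the quote above, here is a toy C++ illustration
of my own showing why batching still helps the transport layer even if,
as Per says, it is not strictly required for streaming. The Transport and
BatchingSender names, the batch size of 64 and the fixed per-transfer
overhead are all hypothetical.

#include <cstddef>
#include <iostream>
#include <vector>

struct Transport {
    std::size_t transfers = 0;          // each call pays the fixed overhead once
    void send(const std::vector<int>& msgs) { ++transfers; (void)msgs; }
};

class BatchingSender {
public:
    BatchingSender(Transport& t, std::size_t batch_size)
        : t_(t), batch_size_(batch_size) {}

    void send(int msg) {
        pending_.push_back(msg);
        if (pending_.size() >= batch_size_) flush();
    }

    // Would also be called when the consumer is about to starve.
    void flush() {
        if (!pending_.empty()) { t_.send(pending_); pending_.clear(); }
    }

private:
    Transport& t_;
    std::size_t batch_size_;
    std::vector<int> pending_;
};

int main() {
    Transport per_message, batched;

    // Per-message: one transfer (one fixed overhead) per message.
    for (int i = 0; i < 1000; ++i) per_message.send({i});

    // Batched: the same 1000 messages in batches of up to 64.
    BatchingSender batcher(batched, 64);
    for (int i = 0; i < 1000; ++i) batcher.send(i);
    batcher.flush();

    std::cout << "per-message transfers: " << per_message.transfers << '\n'   // 1000
              << "batched transfers:     " << batched.transfers << '\n';      // 16
    return 0;
}

Concurrency can hide some of that fixed overhead, as Per notes, but only
as long as the channel does not dry up, which is exactly my concern above.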
>
>Per
>
>--
>Per Bojsen                                Email: <bojsen@zaiqtech.com>
>Zaiq Technologies, Inc.                   WWW:   http://www.zaiqtech.com
>78 Dragon Ct.                             Tel:   781 721 8229
>Woburn, MA 01801                          Fax:   781 932 7488