RE: Streaming, batching, concurrent and alternating

From: Shabtay Matalon <shabtay_at_.....>
Date: Wed Jun 22 2005 - 17:51:17 PDT
John,

I'd like to clarify a few areas where you referred to some statements
that I made.

Shabtay

>We're still struggling to see how this is achieved
>with the SceMiMessageVarPorts.
[Shabtay] I am sure that Matt and I will be willing to address any
questions that you have on how determinism is achieved with
SceMiMessageVarPorts. Do you simply need more time or do you have any
specific question in mind?

>
>2. The implementation can chose to do some type
>    of internal buffering or some type of double
>    buffering scheme in a stream to only transfer data across
>    the link at optimal points. Originally this is what
>    we thought Cadence meant by batching but Shabtay
>    made it clear (I think) that "batching" is at the user level
>    not the implementation level - so in that sense it
>    fits more with "data shaping" described below.
[Shabtay] John, do you remember that I raised the need for a clear
distinction between the SCE-MI infrastructure implementor, the
transactor implementor, and the end-user? Here is an example of why
this is important.

Batching is implemented by the SCE-MI infrastructure, meaning that the
infrastructure contains built-in FIFOs. These are HIDDEN from the
transactor modeler and not exposed by the SCE-MI APIs on either side.
The end-user is the one who calls a function to turn this mode on and
off.

Russ: a note. There is really no knob here. The default is always
reactive, and the user simply calls a function to set batching to ON.
So it is simply an ON/OFF switch and not a rotary knob.

The above is the most important attribute of the batching proposal: we
SHOULD NOT expose a specialized API for batching to the transactor
implementor, who frankly shouldn't care whether batching is used or
not. On the contrary, the end-user is the one who cares about the level
of performance obtained and who knows the reactivity attributes of the
test. So the end-user should be provided with function calls to set
batching ON and OFF for a given channel.
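
To make the division of roles concrete, here is a minimal sketch of
what such an end-user control could look like on the C side. The names
(channel_t, set_channel_batching) are invented for this sketch only and
are not part of SCE-MI or of any proposal text; the point is simply
that the switch is per channel, ON/OFF only, and lives entirely outside
the transactor's code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical sketch only; names are illustrative, not SCE-MI API. */
    typedef struct {
        const char *name;
        bool        batching;   /* OFF by default: fully reactive delivery */
    } channel_t;

    /* A plain ON/OFF switch called by the end-user, never by the
     * transactor; the transactor's channel API does not see this flag. */
    static void set_channel_batching(channel_t *ch, bool on)
    {
        ch->batching = on;
    }

    int main(void)
    {
        channel_t stimulus = { "stimulus", false };
        channel_t response = { "response", false };

        set_channel_batching(&stimulus, true);   /* throughput-oriented */
        set_channel_batching(&response, false);  /* keep fully reactive */

        printf("%s batching=%d, %s batching=%d\n",
               stimulus.name, stimulus.batching,
               response.name, response.batching);
        return 0;
    }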

>Additionally our "end-of-message" marking capability
>addresses what I believe to be a consensus on
>the meaning of "variable length messaging". From
>Shabtay's responses, Cadence's interpretation of "VLM"
>appears similar.
[Shabtay] I am not sure the two are the same, since you emphasized in
your latest proposal (new section 3.4) the ability to send multiple
sequential messages in zero time. Why would you need that if you can
send data of any arbitrary size by calling the send method once?
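
Just to illustrate what I mean by "calling the send method once" (the
function name and signature below are invented for this sketch and
taken from neither proposal): one call carries the whole payload,
whatever its size, so there is no need for several back-to-back sends
in zero time.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical single send call for a variable-length message,
     * stubbed out so the sketch compiles; a real infrastructure would
     * move the payload across the link instead of printing its size. */
    static void send_vlm(const unsigned char *payload, size_t nbytes)
    {
        (void)payload;
        printf("sending one VLM of %zu bytes\n", nbytes);
    }

    int main(void)
    {
        unsigned char frame[1500] = { 0 };   /* any size the producer chooses */
        send_vlm(frame, sizeof frame);       /* one call, whole payload       */
        return 0;
    }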



>
>--------------------------------
>Batching
>
>This is the ability to accumulate transactions on one
>end of a channel at a different rate than which they
>might be read at the other end (at least this is
>what I understood from Shabtay's e-mails).
[Shabtay] This is correct. Order is preserved within a channel but not
among channels.
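
As an illustration of that ordering guarantee (a toy model, not an
API): the producer side may enqueue a whole batch before the consumer
reads anything, and the consumer still sees the transactions one at a
time in the order they were sent; no such guarantee holds across two
different channels.

    #include <stdio.h>

    /* Toy per-channel FIFO; overflow checking omitted for brevity. */
    #define DEPTH 8

    typedef struct { int data[DEPTH]; int head, tail; } chan_fifo_t;

    static void produce(chan_fifo_t *f, int txn) { f->data[f->tail++ % DEPTH] = txn; }
    static int  consume(chan_fifo_t *f)          { return f->data[f->head++ % DEPTH]; }

    int main(void)
    {
        chan_fifo_t ch = { {0}, 0, 0 };

        for (int i = 1; i <= 4; ++i)   /* batch of four sent at once    */
            produce(&ch, i);
        for (int i = 0; i < 4; ++i)    /* read back one at a time,      */
            printf("got txn %d\n",     /* in the order they were sent   */
                   consume(&ch));
        return 0;
    }
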
>
>Although this was not part of the agreed upon goals,
>the Mentor proposal refers to this as "data shaping".
>A couple of scenarios were discussed in this thread.
>
>For example, sending a batch of transactions
>from C to HDL all at once but reading them "one at a time"
>from the other end.
>
>This is our "funnel" concept in data shaping as
>described in the Mentor proposal.
[Shabtay] I admit that your responses came so late today that I'll need
to take more time to digest them and review your proposal again. But
I'll clarify a misconception about the VLM again.

The sender (meaning, again, the transactor) does not need to take any
specific action to send multiple VLMs when using "batching mode" (no
more use of the term streaming...). Given that a VLM can be of any
length, the producer simply determines how much data to send. The
end-user is the one who determines whether the VLM arrives instantly or
not. By default, the VLM arrives instantly. If the end-user changes the
default to "batching mode", the VLM will be queued.
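
A toy behavioral model of that default (again with invented names, and
no claim about how any vendor implements it): the producer's call is
identical in both modes; only the end-user's mode setting changes
whether the message is delivered instantly or queued in the hidden
FIFO.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    static bool batching_on = false;   /* changed only by the end-user   */
    static int  queued      = 0;       /* stand-in for the hidden FIFO   */

    /* The producer calls this the same way regardless of the mode.     */
    static void send_vlm(size_t nbytes)
    {
        if (batching_on) {
            queued++;
            printf("VLM (%zu bytes) queued, %d pending\n", nbytes, queued);
        } else {
            printf("VLM (%zu bytes) delivered instantly\n", nbytes);
        }
    }

    int main(void)
    {
        send_vlm(64);          /* default: arrives instantly             */
        batching_on = true;    /* end-user flips the mode, not producer  */
        send_vlm(64);          /* same producer call, now queued         */
        return 0;
    }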

Data shaping in your proposal requires the use of transaction pipes as
opposed to standard DPI calls. Batching, in my proposal, is not visible
to the transactor through the channel API.

 