RE: ITC Meeting Minutes for May 26th

From: Russell Vreeland <vreeland_at_.....>
Date: Tue Jun 21 2005 - 10:50:28 PDT
Shabtay,
 
Interesting discussion, 
 
Comments below, in black
---------------------------------------
---    Russ Vreeland (949)926-6143  ---
---    vreeland@broadcom.com        ---
---    Senior Principal Engineer    ---
---    Broadcom Corporation         ---
---------------------------------------

From: owner-itc@eda.org [mailto:owner-itc@eda.org] On Behalf Of Shabtay
Matalon
Sent: Monday, June 20, 2005 5:39 PM
To: vreeland@broadcom.com
Cc: itc@eda.org; Shabtay Matalon
Subject: RE: ITC Meeting Minutes for May 26th



Russ,
 
I am glad that you asked the question as it allows me to explain this
better. 
 
We use some terms differently, which may cause some confusion. For example,
you use the term "streaming vs. purely alternating" below, defining
streaming mode as what I call concurrent. I simply contrast concurrent vs.
alternating to indicate whether the HW side and the SW side take turns or
run at the same time (i.e., concurrently). I separate that use model from
"streaming", which by my definition means delivery of multiple transactions
in a batch. A batch can contain any number of transactions, from 2 up to a
large number or the entire set applied by the user. So if we want to avoid
overloading the term "streaming", we could use "batching" instead, or we
could all agree that batching and streaming are the same thing as long as
we separate them from the concurrent/alternating use model. 
 
Per responded to this with the observation that streaming implies
concurrency. I would add that streaming makes no sense at all apart from
concurrency; it is not just delivering multiple transactions in a batch.
However, streaming is not -just- concurrency. Streaming is an attempt to
regain some of the performance lost by running in a strictly "alternating"
mode, while not sacrificing determinism. To do this, it is necessary to use
software threads that handle streaming interfaces (which aren't required to
be "alternating" and can run while the HW is running) while other software
threads remain "alternating" in order to preserve determinism. Also,
streaming only makes sense outside the software-only mode of operation, as a
simulation won't run concurrently anyway.
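 
To make the thread split concrete, here is a rough, hypothetical C++ sketch
(none of this is SCE-MI code; the FIFO, the "HW" stand-in thread, and all
names are invented for illustration). One software thread services a
streaming output FIFO while the "hardware" keeps running; the main thread
stays strictly alternating and only resumes after the HW side yields:

    // Illustration only: a dedicated thread drains a "streaming" FIFO
    // concurrently with the HW stand-in, while the main thread alternates.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct Fifo {                   // thread-safe FIFO fed by the "HW" side
        std::queue<int> q;
        std::mutex m;
        std::condition_variable cv;
        bool closed = false;
        void push(int v) { std::lock_guard<std::mutex> l(m); q.push(v); cv.notify_one(); }
        void close()     { std::lock_guard<std::mutex> l(m); closed = true; cv.notify_all(); }
        bool pop(int& v) {
            std::unique_lock<std::mutex> l(m);
            cv.wait(l, [&] { return !q.empty() || closed; });
            if (q.empty()) return false;
            v = q.front(); q.pop(); return true;
        }
    };

    int main() {
        Fifo stream_out;

        // "HW" stand-in: keeps producing streaming transactions while it runs.
        std::thread hw([&] {
            for (int i = 0; i < 8; ++i) stream_out.push(i);
            stream_out.close();
        });

        // Streaming thread: allowed to run while HW runs; order within the
        // stream is preserved, so determinism of that stream is not lost.
        std::thread streaming([&] {
            int v;
            while (stream_out.pop(v)) std::printf("streamed transaction %d\n", v);
        });

        // Alternating side (here, main): takes turns with HW. In a real
        // co-model this is where the send / service-loop calls would sit.
        hw.join();                  // wait for the HW side to yield
        std::printf("alternating side resumes after HW yields\n");

        streaming.join();
        return 0;
    }

The point is only the structure: the streaming thread never holds up the HW
side, and the alternating thread preserves the take-turns ordering.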
 
There is a "concurrent mode of operation" apart from streaming: that would
be simply running without any attention paid to clock control at all. Hw
never stops, software never blocks ( or polls passively waiting for a
callback ). That would be the comodeling equivalent of a targeted emulation
setup. Without care taken to design "streaming" with threads so that
determinism is preserved, the "free running" concurrent operation mode might
as well be used: there's no such thing as a little bit deterministic.
 
So let's use the term streaming to identify the distinct mode of operation
that is useful -- deterministic operation while achieving the limited
concurrency that is possible. What you call "batching" seems to be either
another way of saying streaming, -OR- it's an implementation detail.
 
I think that you can agree with me that batching can happen on a
per-interface basis in both concurrent and alternating modes. Correct? 
 
Ahh, this is a clue that what you call "batching" is an implementation
detail, since that's all it could be in alternating mode. The infrastructure
can be free to batch or not batch, but what do I, the user, care? For the
case where I've sent multiple input messages (C -> V) before entering
ServiceLoop(), I am content that they will all get there in the order in
which they were sent. For the case where multiple output messages (V -> C)
are queued up, I am content that their callbacks will be executed, in some
arbitrary order, and the data will move appropriately. (I am speaking in
SCEMI 1.1 terms, but the issues are the same under any SCEMI 2.0 proposal.)
As for <how> these input and output messages are accomplished, the freedom
from having to worry about that is exactly the 'raising of the level of
abstraction' that I think is a good thing. The IFL can concatenate, reorder,
"batch", resize, or whatever, as long as the results happen as I described
above.
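 
In SCE-MI 1.1 terms, the usage I am content with looks roughly like the
sketch below (signatures paraphrased from memory of the 1.1 C++ API and may
not be exact; the transactor and port names are made up). The test queues
several input messages, calls ServiceLoop(), and lets the output callbacks
fire in whatever order the infrastructure chooses:

    // Rough SCE-MI 1.1-style sketch; exact signatures may differ from scemi.h.
    #include "scemi.h"

    static void MyReceive(void* context, const SceMiMessageData* data) {
        // Output (V -> C) callback: copy out what is needed; the test should
        // not depend on the dispatch order across ports.
        (void)context; (void)data;
    }
    static void MyClose(void* context) { (void)context; }

    void run(SceMi* scemi) {
        SceMiMessageInPortProxy* in =
            scemi->BindMessageInPort("top.transactor", "in_port", 0);

        SceMiMessageOutPortBinding outBinding = { 0 /*context*/, MyReceive, MyClose };
        scemi->BindMessageOutPort("top.transactor", "out_port", &outBinding);

        // Multiple input (C -> V) messages sent before yielding; they arrive
        // in the order sent no matter how the IFL batches or resizes them.
        for (unsigned i = 0; i < 4; ++i) {
            SceMiMessageData msg(*in);
            msg.Set(0, i);
            in->Send(msg);
        }

        // Yield to the infrastructure; output callbacks are dispatched here.
        scemi->ServiceLoop();
    }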
 
Reactivity control simply describes the ability of the test to determine
how many transactions it wishes to send or receive in a batch (or a stream).
The use model defines the setup, chosen by the end user, of which interfaces
will be Reactive and which will be Streaming (maybe we could come up with a
better term for "use model"). 
 
If you're saying the user should be able to control whether an interface is
"streaming" or not, I agree.
 
If you mean literally that it's a good thing for a user to determine how
many transactions to send or receive in a batch, I am skeptical that that
level of control is necessary or a good thing.
 
 
This setup (which can vary from one verification environment to another)
requires that you create a single transactor that is portable across the two
use models. 
 
Last, I don't see how "providing SCE-MI infrastructure support for
(implicit) reactivity control" can disappear at any level of abstraction.
Maybe now that I have defined the terms better, you can respond to the point
made in that section. 
 
My view is that a high level of abstraction implies both a lack of need for
detailed user control of the interface and an implicit encouragement to
eschew it. The only exception I can see would be a mechanism to put an
interface into true streaming mode or not. I respectfully suggest your
batching description moves in the opposite direction from increasing
ease-of-use and raising the level of abstraction for the user.
 
Thanks,
 
Shabtay   
 
 
 
Russ 
 

 


  _____  


From: Russell Vreeland [mailto:vreeland@broadcom.com] 
Sent: Monday, June 20, 2005 4:18 PM
To: Shabtay Matalon; itc@eda.org
Subject: RE: ITC Meeting Minutes for May 26th

 

Shabtay, 

 

In your comments entitled "Issues impeding on transactor portability and
common use model"  you have a lengthy section "Provide SCE-MI infrastructure
support for (implicit) reactivity control". This is obviously something of
concern to you, but I have to tell you, I don't see where this capability --
"to deliver SCE-MI messages in batches under the control of the end user" --
fits under the goals and the optimal use model (as I understand it) of the
SCEMI spec, past or future. Or, if this is just another way of describing
streaming, then it's not a separate topic.

 

Could you explain how this capability is independent of or adds anything to
any of the other categorizations of message types that the committee has
discussed already (streaming vs. purely alternating, raising the level of
abstraction to the user, etc.)? I believe 'reactivity control' is something
that either ought not be provided or 'disappears' when looking at SCEMI at a
higher level of abstraction, but I'd like to hear your answer in case I've
missed something.

Russ

---------------------------------------
---    Russ Vreeland (949)926-6143  ---
---    vreeland@broadcom.com        ---
---    Senior Principal Engineer    ---
---    Broadcom Corporation         ---
---------------------------------------

-----Original Message-----
From: owner-itc@eda.org [mailto:owner-itc@eda.org] On Behalf Of Shabtay
Matalon
Sent: Friday, June 17, 2005 3:13 PM
To: John Stickley; Joseph BULONE; itc@eda.org
Subject: RE: ITC Meeting Minutes for May 26th

Joseph, John,

 

There are a few comments that I wanted to add to Matt's email. See my comments
below.

 

I'd appreciate your response.

 

Thanks,

 

Shabtay

 

>-----Original Message-----

>From: owner-itc@eda.org [mailto:owner-itc@eda.org] On Behalf Of John

>Stickley

>Sent: Wednesday, June 15, 2005 12:47 PM

>To: Joseph BULONE; 'itc@eda.org'

>Subject: Re: ITC Meeting Minutes for May 26th

> 

>Joseph,

> 

>I've taken a stab at answering some of your queries at least from the point

>of view of the Mentor proposal.

> 

>Joseph BULONE wrote:

>> Thanks a lot for the minutes of the ITC meeting, which I unfortunately cannot

>> attend as often as I would like. It is very helpful to keep track of all

>> the work and effort performed especially by the EDA vendors.

>> 

>> Here are the points, which appear as essential and critical for ST in

>> order to choose the "best" evolution to SCEMI 1.x :

>> 

>> - We need at least to be able to reuse all of the work

>> currently done around SCEMI 1.x, so as not to lose the investment we have

>> made in SCEMI since our involvement began (i.e. since the setup of this

>> standard). The following things are thus requested:

>> 

>> * 1.x transactors are to be reusable (same syntax and semantics

>> including the clock control mechanism) in concurrency with 2.x

>> transactors: the beneficial impact of a modeling simplification must

>> not kill the previous modeling effort.

> 

>johnS:

>This has been a primary requirement from the outset. It has clearly been

>part of our agreed upon goals and is also addressed by both proposals.

 

 [Shabtay] John, I agree that both proposals address existing SCE-MI 1.1
models, and even in quite a similar way.

 

Joseph,

 

Is your question only related to using the SCE-MI 1.1 transactors, or also
to reusing the existing SCE-MI 1.1 modeling style and code? In this regard
the two proposals are different: the Cadence proposal keeps the existing
SCE-MI 1.1 style, while the Mentor one changes it quite a bit. 

 

How critical is it for you to maintain similarity between SCE-MI 1.1 and 2.0
vs. moving to an entirely new API (such as DPI)?   

> 

>> 

>> * Our TLM/SC verification environment link to SCEMI needs to be

>> preserved by simply preserving a C++/C interface: e.g., as 'Direct

>> Programming Interface

>> (DPI) enables it to "call" C/C++/SystemC functions, and vice versa',

>> then there should be a means to define something completely independent

>> from DPI, and if necessary we should standardize the integration with

>> DPI providing in a way the specification of a kind of DPI linker tool.

>> In this case the DPI interface could be attached only to new

>> transactors, and for these transactors, the types definitions could be

>imposed to be coherent with DPI.

> 

>johnS:

>If I understand your meaning here correctly, you would like to see a

>preservation of SystemC models that use TLM-API interfaces, is that

>correct?

> 

>If I further understand you've created some sort of conduits between TLM

>and SCE-MI 1.1 that these models can use.

> 

>We've done some similar work investigating TLM-DPI conduits. My guess is

>that this can preserve use of models that use such interfaces with no

>modification - at least that's been the case with some of the models we've

>worked with.

> 

>These can coexist with TLM-SCEMI conduits, and we may want to consider

>standardizing both sets of interface extensions.

> 

>Furthermore, you may find that TLM-DPI conduits can be done more simply

>than what is required for SCE-MI 1.1 applications, especially with the

>avoidance of the need for callbacks and for dealing with uncontrolled time.

[Shabtay] John, I assume that the conduit you propose is what I have
suggested as the "HVL encapsulation layer" in the paper I sent out (pointed
to by Matt's proposal, slide 11). This can be a layer that bridges between
the SCE-MI API (language neutral) and a language-specific API such as
SystemC. Using blocking calls requires some sort of threading package, and I
don't yet understand how you propose to support a blocking call unless you
assume a particular threading environment to operate in. The idea of an
encapsulation layer is that it is what contains the HVL-specific threading
environment. Please clarify if this is what you meant.

 

Based on my experience, there is also a need for a proxy model between a TLM
and the SCE-MI API, or between a TLM and the HVL encapsulation layer. At a
minimum, it needs to configure the BFM on the HW side. Are you stating that
a generic layer can be a replacement for a proxy model, or just an add-on?

 

We are not promoting the use of controlled clocks in our proposal either,
but can you explain the difficulty of creating blocking and non-blocking
interfaces for the existing SCE-MI SW API? We have easily implemented such
an interface for SystemC based on the existing SCE-MI callbacks.
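 
For reference, one plausible shape of such a blocking interface, assuming
SystemC as the HVL and the SCE-MI 1.1 output-callback signature as I recall
it (the class, names, and glue below are illustrative, not from either
proposal): the callback deposits the payload and wakes a waiting SC_THREAD,
so the SystemC kernel supplies the threading that the encapsulation layer
hides from the user.

    // Illustrative only: a blocking receive layered on a SCE-MI 1.1 output
    // callback for a SystemC HVL environment.
    #include <deque>
    #include <systemc.h>
    #include "scemi.h"

    class BlockingOutPort {
      public:
        BlockingOutPort(SceMi* scemi, const char* xactor, const char* port) {
            SceMiMessageOutPortBinding b = { this, &BlockingOutPort::ReceiveCB, 0 };
            scemi->BindMessageOutPort(xactor, port, &b);
        }

        // Called from an SC_THREAD: suspends until a callback has deposited data.
        unsigned receive() {
            while (pending_.empty()) sc_core::wait(got_);
            unsigned v = pending_.front();
            pending_.pop_front();
            return v;
        }

      private:
        static void ReceiveCB(void* context, const SceMiMessageData* data) {
            // Runs inside ServiceLoop(); must not block, so just queue and notify.
            BlockingOutPort* self = static_cast<BlockingOutPort*>(context);
            self->pending_.push_back(data->Get(0));
            self->got_.notify(sc_core::SC_ZERO_TIME);
        }
        std::deque<unsigned> pending_;  // buffer between callback and SC_THREAD
        sc_core::sc_event    got_;      // wakes the thread blocked in receive()
    };

A blocking send could be layered in a similar way on top of the input port's
ready callback.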

> 

>> 

>> - Others:

>> * Positive impact on performance: The concept (whatever the final

>> implementation) of data shaping appears as interesting for streaming

>> (and variable length messaging) performance optimization (depending on

>> actual fifo sizes and synchronization mechanism).

> 

>johnS:

>Yes, we agree that DPI pipes can have a positive impact on performance by

>facilitating optimized implementation that allows overlapped execution

>between C/C++/SystemC code running on the workstation and the H/W

>simulation.

[Shabtay] We have proposed that SCE-MI define a clear use model assumption,
alternating vs. concurrent, to avoid portability issues from simulation to
acceleration and from one system to another. Users who wish to use the
concurrent use model will have the means to state it explicitly. See issue 4
in the paper I submitted to the group. 

 

John, Joseph, 

 

Do you agree with that?

 

 