Re: DPI and transaction pipes

From: John Stickley <John_Stickley_at_.....>
Date: Tue Dec 20 2005 - 18:48:09 PST
Shabtay,

Responses embedded.

Shabtay Matalon wrote:
> Hi John and ITC techies,
> 
>  
> 
> I have a few questions that popped up while reviewing the latest draft 1.7 
> of the SCE-MI 2 Working Document. Could you address these questions?
> 
>  
> 
> 1. I’ll start with a higher-level question. In section 2.1 you indicate 
> that “The (DPI) function call is the transaction”. We have spent quite 
> some time defining the DPI subset that should be used for transaction 
> passing. Then, later in section 4, the concept of transaction pipes is 
> introduced, and the example in Appendix C illustrates how transactions 
> are conveyed using pipes. DPI per se is used very lightly in this 
> example, only to communicate Reset Complete. Exported functions are not 
> even used. From this I can deduce that DPI is mainly used to provide a 
> means for the HW and SW sides to convey some vector-level information to 
> each other, but that the transaction is not the argument of the DPI 
> imported/exported function but rather the argument of the input and 
> output pipe.
> 
>  
> 
> Is my assumption correct so far? If not, why not?

johnS:
No, not really. These examples emphasize the use of pipes and the streaming
use model. But we would do well to have more examples that show simple
import/export calls for more reactive applications. Both are valid
use models.

> 
>  
> 
> I am trying to conceive of what simulation users who care will do when 
> building simulation models once they learn that SCE-MI 2.0 supports DPI. I 
> would expect them to use DPI for transaction-level communication (and 
> hopefully the quite reasonable DPI subset that we endorse for 
> acceleration). But what will happen when using acceleration? Are 
> users expected to modify the interfaces of their transactors to use 
> pipes to get the added performance? Or should we enforce using pipes for 
> transactions, with only a limited role for pure DPI?

johnS:
No, I don't think so. I think pipes have their use specifically
when streaming, data shaping, and/or VLM capability is
needed, while plain DPI serves more reactive use models.

Implementations can optimize either use model, but generally,
for plain DPI calls, synchronizations (between caller and
callee) are implied on each call. For pipes, by contrast, there
is no callee - just a pair of callers, one for each endpoint
- which makes it easier for the user to create streaming
applications but requires special care in the implementation
to ensure synchronization is properly done.

> 
>  
> 
> 2. In sections 4.2 and 4.3 you use the following style for declaring the 
> pipes:
> 
>  
> 
>  "DPI-C" context task dpi_pipe_hdl_receive/send
> 
>  
> 
> I am wondering why you are using DPI semantics when declaring the pipes, 
> or even calling these DPI pipes. For example, assume that the HDL wants 
> to call dpi_pipe_hdl_send(some arguments). If dpi_pipe_hdl_send is a 
> DPI import task, it should follow DPI semantics and yield control 
> immediately to the C side. But by my understanding, you do not want the 
> task to yield control to the SW side until the HDL side issues a flush. 
> Isn’t this contradictory to calling it a DPI task?


johnS:
A couple of points here.

1. One way of implementing pipes is directly as DPI calls. But that
    is not required. Alternatively they can be Verilog functions that
    do something underneath to implement the pipes.

2. The flush simply dictates that a synchronization must occur
    at least at that point. But it is totally up to the implementation
    to decide when data is actually transferred. It could be every
    time data is written, or it could be buffered, so long as the same
    behavior is observed at the endpoints by the application. Again,
    just like Unix named pipes.

    Also, even if a pipe uses a direct DPI call in its implementation,
    that does not necessarily mean the opposite endpoint sees the data
    right away. DPI calls can implement a buffering scheme internally.

> 
>  
> 
> Along the same line of thought, what prevents dpi_pipe_hdl_receive/send 
> from simply being a Verilog 2001 task?

johnS:
Nothing. This is perfectly valid, as long as the behavior of the call,
as seen by the callers at each pipe endpoint, follows the specified behavior.

> The implementation is hidden from 
> the user in the included file, and thus it is up to the implementation 
> whether it uses DPI, VPI, PLI, SCE-MI 1, or any other underlying 
> transport mechanism. Is this correct?

johnS:
Yes.

> 
>  
> 
> 3. The spec doesn’t explicitly define the HW/SW yield control semantics 
> when using pipes. Here is what we understand so far:
> 
>  
> 
> Pipe is empty and receive() is called - Yield to producer
> 
> Pipe is full and send() is called - Yield to consumer
> 
> Pipe is not empty and flush() is called - Yield to consumer
> 
>  
> 
> Is this correct? Shouldn’t the spec define explicitly the above semantics?

johnS:
Yes, this is basically correct. We can add some more descriptive text
here.

-- johnS


> 
>  
> 
> Thanks,
> 
>  
> 
> Shabtay
> 
>  
> 
>  
> 
> -------------------------------------
> 
>  
> 
> Shabtay Matalon
> 
> Solution Architect
> 
> R&D, CVA
> 
> Phone: (408) 428 5081
> 
>  
> 
>  
> 

-- 

This email may contain material that is confidential, privileged
and/or attorney work product for the sole use of the intended
recipient.  Any review, reliance or distribution by others or
forwarding without express permission        /\
is strictly prohibited. If you are     /\   |  \
not the intended recipient please     |  \ /   |
contact the sender and delete        /    \     \
all copies.                      /\_/  K2  \_    \_
______________________________/\/            \     \
John Stickley                   \             \     \
Mgr., Acceleration Methodologies \             \________________
Mentor Graphics - MED             \_
17 E. Cedar Place                   \   john_stickley@mentor.com
Ramsey, NJ  07446                    \     Phone: (201) 818-2585
________________________________________________________________
Received on Tue Dec 20 18:50:42 2005

This archive was generated by hypermail 2.1.8 : Tue Dec 20 2005 - 18:51:26 PST