semantics of multiple calls to imported DPI functions or pipes from different processes

From: John Stickley <john_stickley_at_.....>
Date: Tue Oct 24 2006 - 10:17:03 PDT
Greetings ITC Techies,

As per my AI (action item) from the last meeting, here's an
attempt to add some wording about caveats when making calls
to imported DPI functions or pipes from multiple processes
or always blocks.

One point I'd like to make is that I think this wording
belongs somewhere outside the pipes section of the doc,
since this issue applies to both pipes and imported DPI
calls in general.

With that said, here's the proposed wording:

------------------ begin ------------------------

Care needs to be taken in application HDL code when making
calls to imported DPI functions or pipes from multiple
HDL processes.

If more than one HDL process makes a call to the same
imported DPI function or pipe, the following semantics
hold:

1. All of the calls will be honored in some order,
    and none will be ignored.
2. If multiple calls are made in the same simulation
    cycle, the order in which they are handled is
    undefined and implementation dependent.

Taken together, #1 and #2 mean that data from all of the
calls is guaranteed to reach the C side, but the order in
which it arrives is undefined and implementation dependent.

For example, suppose two processes p1 and p2 each call
an imported DPI function f(), which takes a single int
input argument, in the same simulation cycle:

     import "DPI-C" context void function f( input int a );

Suppose further that p1 passes the value 1 to f()
and p2 passes the value 2 to f():

     always @( posedge clock ) begin: p1
         f( 1 );
     end

     always @( posedge clock ) begin: p2
         f( 2 );
     end

By #1 above, f() will be called twice on the C side
on each clock cycle.

By #2 above, it is undefined as to whether the C-side
imported DPI function f() will first see a value
of 1 or a value of 2 on any given clock posedge.

A similar argument applies to writing data to output pipes
or reading data from input pipes. The order in which data
from multiple processes is queued into an output pipe is
undefined, and for input pipes, which process consumes the
next item from the pipe is undefined.

It is up to the application to write code in such
a way that proper operation of the simulation does
not depend on such ordering.

There are two common techniques for doing this:

1. Make sure that p1 and p2 make their calls at
    different simulation times, in which case the
    ordering of the calls becomes predictable and
    consistent across all implementations, for example:

    always @( posedge clock ) begin: p1
        f( 1 );
    end

    always @( negedge clock ) begin: p2
        f( 2 );
    end

    In this case, the arrival sequence of the values
    1 and 2 will alternate between rising and falling edges
    of the clock and will therefore be predictable
    across all implementations.

2. If it is possible for both calls to be made in
    the same cycle, use a mutual exclusion condition
    that allows only one of them. For example, one
    might have a bit variable that both processes
    evaluate before making their calls. If that
    variable has the value 0, then p1 is allowed to
    make the call; if it has the value 1, then p2 is
    allowed to make the call.

    reg enable_p1;
    initial enable_p1 = 0;

    always @( posedge clock ) begin: selector
        enable_p1 <= ~enable_p1;
    end

    always @( posedge clock ) begin: p1
        if( enable_p1 ) f( 1 );
    end

    always @( posedge clock ) begin: p2
        if( !enable_p1 ) f( 2 );
    end

    In this case, the sequence of arrival of the values
    1 and 2 on the C side will be predictable
    across all implementations.

-- johnS

______________________________/\/            \     \
John Stickley                   \             \     \
Mgr., Acceleration Methodologies \             \________________
Mentor Graphics - MED             \_
________________________________________________________________
Received on Tue Oct 24 10:17:08 2006

This archive was generated by hypermail 2.1.8 : Tue Oct 24 2006 - 10:17:17 PDT