If you don't want to pack/unpack and rearrange data unnecessarily, you
could go with a C++-style interface: you pass handles much the same as in
PLI/VPI, except that the handles are nominally C++ object pointers with a
set of virtual functions for extracting/setting data.
It's probably possible to construct the C++ thunks/vtables on the fly in
such a way that you can support multiple C++ compilers, and if you want
to use them from C you can just treat the vtable as an array of function
pointers (you only need to know the index of the one you want) - e.g.:

    (*DPI_handle.vt[DPI_VFN_GET_INT32])(DPI_handle, <call args>)

which you could probably wrap in a macro as:

    DPI_GET_INT32(DPI_handle, <call args>)
This works better as the size of the data object gets larger and the
cost of rearranging the data gets higher.
Most of the inefficiency in PLI comes from the indirection inherent in
the handle mechanism, and from the fact that user code is allowed to hang
onto the handle for future calls.
Kev.
-----Original Message-----
From: owner-sv-cc@eda.org [mailto:owner-sv-cc@eda.org] On Behalf Of
Andrzej I. Litwiniuk
Sent: Monday, October 04, 2004 7:52 PM
To: sv-cc@eda.org
Subject: [sv-cc] representation of packed arrays and binary level
compatibility
Hi all,
To the best of my knowledge, there is a trade-off between a simulator's
flexibility in choosing the representation of a packed array (which
affects performance) and the ability to pass a data structure containing
a packed array in a binary-compatible way.
Any C compiler, in order to determine the size and alignment of a
structure, must know the size and alignment of every member of that
structure. So, if a structure contains a member field of type T, say

    struct S {
        ...
        T t;
        ...
    } S;

then 'T' must be known in order to determine the size and alignment of
the whole structure 'S', as well as the relative offset of 't' and the
relative offsets of the member fields after 't'.
Alignments are usually platform dependent and typically are 1, 2, 4, 8,
or 16.
Consider the following example:

    struct S {
        char a;
        T b;
    } S;
The following table summarizes size/alignment on two platforms, for 'T'
defined as char, short, int or long long.

    'T' defined as:              char   short   int   long long
    ------------------------------------------------------------
    Solaris (Sun): sizeof(S) =     2      4      8       16
    Linux (i686):  sizeof(S) =     2      4      8       12
    Solaris (Sun): alignment =     1      2      4        8
    Linux (i686):  alignment =     1      2      4        4
Therefore the actual representation of type 'T' clearly matters.
A simulator may choose a different representation of a packed array
depending on its size. For example:
an 8-bit packed bit array may be represented as a byte (char),
a 16-bit packed array may be represented as two bytes (short int),
a 32-bit packed array may be represented as a word (int),
a 64-bit packed array may be represented as a double word (long long).
Please note that the above also applies to 32-bit platforms. Even if
there is no built-in 64-bit arithmetic, it is pretty common to have
hardware support for reading/writing double words. So it may make sense
to represent a 64-bit packed array as a single double-word value rather
than as a sequence of two consecutive words; note that a double word may
need 8-byte alignment while a 2-element array of int does not.
On 64-bit platforms, built-in 64-bit arithmetic makes it even more
attractive to represent a 64-bit packed array as 'long long' rather than
as two 'int's.
On the other hand, if a small packed array is represented as a single
byte rather than as a word of memory, it may bring substantial savings,
especially in the case of a huge unpacked array of such values.
Which representation is chosen for a packed array may greatly impact
simulation performance, both in footprint and in run time. Note that the
size of a data structure also affects simulation time: the bigger the
data to be read/written, the greater the odds of cache misses. Hence a
simulator may choose to use different representations for different
sizes of packed arrays.
Once the type for representing a packed array is fixed, all that
flexibility is gone. If a basic "chunk" is defined for all simulators as
'int' regardless of the size of a particular packed array, then neither
a more compact, memory-saving representation can be used for small
arrays, nor can a 64-bit representation, more efficient for read/write
and arithmetic operations, be used for bigger arrays.
In my opinion it is natural to expect and to require binary
compatibility only for those SystemVerilog types that have a natural
mapping onto C types, or that can be treated as "black boxes" and
handled solely via pointers. The types that are SystemVerilog-specific
and may have an implementation-dependent representation may require
re-compilation using vendor-provided definitions. It's not that an
application is not binary compatible. It is a data type that is not
binary compatible!
Regards,
Andrzej
==============================================================================
Andrzej I. Litwiniuk, PhD        Principal Engineer, VCS R&D
Synopsys, Inc                    TEL: (508) 263-8056
377 Simarano Drive, Suite 300    FAX: (508) 263-8069
Marlboro, MA 01752, USA
==============================================================================
Received on Tue Oct 5 10:53:14 2004
This archive was generated by hypermail 2.1.8 : Tue Oct 05 2004 - 10:53:30 PDT