I am working on resolving errata 51, which was assigned to BC. Jim Vellenga
reported some conflicts in the LRM between the language definition and the
DPI section.
First, my question is: why do we map input function arguments of type byte
unsigned or shortint unsigned to the C types unsigned char and unsigned
short? Why not map them to svBitVec32? Is the reason capacity, i.e., to use
the smallest C type available?
Secondly, the C language does not specify the exact width of char, short,
int, or long. For example, int may be represented in 16 or 32 bits, and long
need only be at least 32 bits. char is usually 8 bits, but ANSI C only
requires CHAR_BIT >= 8, so on a few machines it may be 9 bits. How do we
deal with this flexibility of the C language and the ANSI C specification?
Our table in section E.6.4 assumes a typical C implementation, with int
implemented in 32 bits and char in 8 bits, so it seems that this mapping may
not always work.
Francoise
Received on Mon Sep 20 10:48:03 2004