Known Errors and Ambiguities in IEEE Std 1666-2005
==================================================

[JA] 9-March-2010

This list includes errors and ambiguities in the standard document and in the
underlying intent, i.e. both documentation and language/specification problems.

This list does include a couple of items that may turn out to highlight errors
in the OSCI implementation rather than errors in the IEEE standard itself, in
which case we should eventually redirect these points to OSCI, but they can
stay in the list for the time being.

This list excludes suggestions for adding new features, except where they
relate directly to solving known language problems or fixing inconsistencies.

Originators
-----------

[AF]  Alan Fitch, Doulos
[AG]  Andy Goodrich, Forte
[BB]  Bishnupriya Bhattacharya, Cadence Design Systems
[DB]  David Black, xtreme-eda
[DG]  Don Grimes, Forte
[DL]  David Long, Doulos
[DVC] David Van Campenhout
[GM]  Grant Martin and Ric Howard, Tensilica
[JA]  John Aynsley, Doulos
[LWG] OSCI SystemC Language Working Group
[MV]  Maurizio Vitali, Polymath
[SS]  Stuart Swan, Cadence Design Systems
[VV]  Vincent Viteau (was with Summit)

-------------------------------------------------------------------------------

[JA] 3.3.3.4 on the lifetimes of transient objects should include the
operator& and operator| overloads that return a reference to an
sc_event_and_list or an sc_event_or_list. The descriptions of these operators
should remark on the lifetimes of the objects.

[JA] The LRM does not explicitly say how get_child_objects() should behave for
the objects associated with process instances when new processes are spawned
or terminated. In particular, the LRM does not say exactly when the associated
sc_object is created or deleted. The intent is that this is
implementation-defined. But the LRM does explicitly say that process handles
should be used in preference to raw pointers. See
- 3.3.3.4 Container valid only until sc_object is deleted.
- 5.6.1 Process handles should be used in preference to pointers to sc_objects.
- 5.6.5 get_child_objects() for the sc_object associated with a process has
  identical behavior to get_child_objects() for a valid process handle.
- 5.6.5 When a process is terminated, the process handle continues to exist
  but may or may not be valid.
- 5.15.1 sc_spawn creates a child process instance.

[DB] 4. If elaboration fails with a warning, should simulation start? More
interestingly, should end_of_simulation be called? I would think no, but a
customer is seeing this in the OSCI 2.2 implementation. The message we get is
"Warning: (W509) module construction not properly completed...". This warning
should be reclassified as an error, I would think. Is this a bug or an
undefined feature?

[MV] 4.1.1 "Instances of class sc_module and class sc_prim_channel may only be
created within a module or within function sc_main. Instances of class sc_port
and class sc_export can only be created within a module. It shall be an error
to instantiate a module or primitive channel other than within a module or
within function sc_main, or to instantiate a port or export other than within
a module."

1) What exactly "created within a function" means is not defined anywhere. The
nearest is the definition of call (3.1.3), and since there it is stated that
call means directly or indirectly, we can use it as a guideline and interpret
the paragraph quoted above as meaning that sc_modules and sc_prim_channels can
be created while executing sc_main or any function called by it. Does this
match what you had in mind?

2) The paragraph above rules out global objects of class sc_module. Was that
intentional? My implementation has no problems with it; OSCI fails on an
assertion (sc_process_h:478). The only reason I can see for disallowing this
is that the implementation must be careful to have all resources needed to
register processes up and running when the constructor for the global
sc_module object is called. And since you have no control over this, some
creativity is required.
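As a point of reference for 1) and 2), a minimal sketch (the module name "mod"
and helper "build" are illustrative, and this reflects the interpretation
above rather than explicit LRM wording): instantiation inside sc_main, or
inside a function called from sc_main, is accepted, whereas the global
instance is what trips the OSCI assertion.

    #include <systemc.h>

    SC_MODULE(mod) {
        SC_CTOR(mod) {}
    };

    // mod g("g");        // global instance: ruled out by 4.1.1 as quoted above

    static void build(mod*& m) {
        m = new mod("m");  // created in a function called (indirectly) from sc_main
    }

    int sc_main(int, char*[]) {
        mod top("top");    // created within sc_main: explicitly permitted
        mod* extra = 0;
        build(extra);
        sc_start();
        delete extra;
        return 0;
    }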
[JA] 4.2.1.2, note 4. This note is correct, but highlights an issue in the
definition of dynamic sensitivity. The LRM is explicit that sensitive <<
creates static sensitivity, whereas wait and next_trigger create dynamic
sensitivity. But the definition of wait() and next_trigger() is a little
ambiguous, since by definition these methods create dynamic sensitivity, yet
with empty argument lists they set the static sensitivity. Moreover, in the
OSCI simulator, in the specific case of immediate notification of an event,
wait() behaves like static sensitivity, and wait(e) behaves like static
sensitivity if the process is statically sensitive to sc_event e. For example:

    SC_THREAD(T);
    sensitive << e;

    void T()
    {
        e.notify();
        wait(e);  // PROCESS IS RESUMED - EFFECTIVELY STATICALLY SENSITIVE TO e
    }

[JA] 4.3.4.1 The LRM says nothing about the namespace of sc_main. The implicit
assumption is that it should be declared in the global namespace, the same as
the C++ main function.

[BB] 4.3.4.2 IEEE 1666 defines sc_start(0) to execute ONE delta cycle (Section
4.3.4.2). The refsim implementation has always implemented sc_start(0) to
execute ALL delta cycles at the current time - i.e. sc_start(0) is the
equivalent of the deprecated sc_cycle(0). Excerpts from 1666 and systemc-2.1v1
RELEASENOTES are given below, along with a simple program and its output
obtained from the Systemc-2.2.05jun06_beta release. If the sc_start(0) call is
replaced with sc_cycle(0) in the test, the same output is obtained with
additional deprecation messages for sc_cycle. Which definition is right in
this case? Whichever one is, we have a hole in expressing the other one, and
it seems users may legitimately want to do both in their designs.

[JA] 4.4.1 The LRM does not explicitly say that set_sensitivity() must use an
event finder before the end of elaboration, although 4.4.1 says you should not
make interface method calls (IMCs) in before_end_of_elaboration(). Also see
5.11.8.2.

[BB,LWG] 4.5 Add the global function

    bool sc_pending_activity_at_current_time();

that returns true if either the number of runnable processes, the number of
update requests or the number of pending delta notifications is greater than
zero. There are no specific restrictions on where this function may be called
from, i.e. it may be called from a running process or from sc_start. This
function was added to v2.2 just before the final release.

[JA] 4.5.2 It is unclear whether sc_set_stop_mode() may be called during
simulation. The LRM says nothing. The implementation reports a warning if it
is called during simulation.

[AF] 4.5.2 The LRM does not specify the default stop mode.

[JA] 4.5.4 is ambiguous about exactly when during the delta cycle the delta
count is incremented - before, during or after the evaluation phase or the
update phase? 4.2.2 says a delta cycle = eval + update + delta notification
phases, and that the initialization phase is not a delta cycle. One could
infer that sc_delta_count() should be 0 during initialization, but it is not
clear whether it should return 0 or 1 when called during the first delta cycle
following initialization.
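A minimal probe of this ambiguity (the module name probe_delta and the
three-run limit are illustrative; nothing below asserts a particular answer):

    #include <systemc.h>

    SC_MODULE(probe_delta) {
        int runs;
        SC_CTOR(probe_delta) : runs(0) {
            SC_METHOD(report);               // runs once during initialization
        }
        void report() {
            std::cout << "run " << runs << ": sc_delta_count() = "
                      << sc_delta_count() << std::endl;
            if (++runs < 3)
                next_trigger(SC_ZERO_TIME);  // re-trigger in the next delta cycle
        }
    };

    int sc_main(int, char*[]) {
        probe_delta p("p");
        sc_start();
        // Run 0 executes in the initialization phase; runs 1 and 2 execute in
        // the first two delta cycles. Should run 1 report 0 or 1?
        return 0;
    }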
[JA] 4.5.5 is ambiguous about the value of sc_is_running() when called from
sc_main between or after calls to sc_start().

[MV] 5.2.9 The LRM does not make it clear whether or not a trailing semicolon
is required or permitted after the invocation of macros SC_METHOD, SC_THREAD,
SC_HAS_PROCESS etc., although a trailing semicolon is shown in every example.

[DVC] 5.2.12 There is no sc_signal. If clocking can be specified via an
sc_signal or sc_signal, the reset should be specifiable in the same terms. A
counter-argument is that sc_clock is derived from sc_signal, so using for
resets is consistent.

[BB] 5.2.12 The LRM specifies 2 signatures for reset_signal_is:

    void reset_signal_is( const sc_in<bool>& , bool );
    void reset_signal_is( const sc_signal<bool>& , bool );

The 4feb06 kit implements the following 2 signatures:

    void reset_signal_is( const sc_in<bool>& port, bool level );
    void reset_signal_is( const sc_signal_in_if<bool>& iface, bool level );  <-- this is a superset of what the LRM says

Should the LRM be modified (in the future) to allow the superset that the
4feb06 kit allows?

[BB] 5.2.12 IEEE 1666 specifies reset_signal_is for an SC_CTHREAD. It does not
explicitly say that reset_signal_is can only be specified once for a cthread
process. BB and AG concluded that reset_signal_is shall be specified no more
than once, and the LRM should be clarified to this effect.

[MV] 5.2.16 It would probably be good to clarify the part about "returning
from a method without setting a dynamic sensitivity is equivalent to calling
next_trigger() and results in the static sensitivity being active". I don't
think I've seen anything anywhere, but soon I'll have to go over the document
line-by-line again and I may find it.

[VV] 5.2.17 The LRM does not place any restrictions on the arguments to
wait(), next_trigger(), and notify() to say that they cannot be negative. 4.2
says that simulation time increases monotonically. 5.10.1 says that sc_time is
internally unsigned. The LRM should forbid negative time arguments to wait(),
next_trigger() and notify().

[BB] 5.5.6 "I was going through Section 5.5.6 in the 22 September LRM titled
"sc_spawn" and noticed an error in the example. It's not really an error that
the compiler will catch, but it will most certainly cause a crash at run time.
The integer "ret" is a local variable in function T, and the address of ret is
being passed to sc_spawn as the return value storage location. T() just spawns
a bunch of processes and returns, so by the time the spawned processes run and
try to store the return value in the address of ret, that memory location has
become garbage. The "int ret" declaration should move up and become a member
field of MyMod, or ret should be newed from the heap.

In fact, the paragraph that talks about return values should caution the user
about this common mistake:

"If present, the argument of type T::result_type* shall pass a pointer to a
memory location that shall receive the value returned from the function
associated with the process instance. In this case, the argument of type T
shall be a function object of a class that exposes a nested type named
result_type. Furthermore, operator() of the function object shall have the
return type result_type. See the example below. The memory location must be
kept valid until the spawned function completes, at which point the returned
value will be stored in that memory location."

"I agree that it's not worth a re-ballot; however, it is important to document
it as an erratum. This is the single most common mistake in our experience
that people run up against in this scenario - I've done it myself countless
times."
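A sketch of the pitfall and the suggested fix (the names MyMod, T and f are
taken from or modelled on the discussion above; f and its delay are
illustrative):

    #include <systemc.h>

    SC_MODULE(MyMod) {
        int ret;                 // correct: storage lives as long as the module

        SC_CTOR(MyMod) { SC_THREAD(T); }

        int f() { wait(10, SC_NS); return 42; }

        void T() {
            // int ret;          // the mistake: automatic storage local to T()
            sc_spawn(&ret, sc_bind(&MyMod::f, this), "spawned_f");
            // T() returns here; &ret must remain valid until f() completes,
            // which is why ret is a data member rather than a local variable.
        }
    };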
[JA] 5.5.7 Although we agreed on the wording describing SC_FORK and SC_JOIN,
the sentence "There shall be no other characters other than white space
separating SC_FORK, the function calls, the commas, and SC_JOIN." remains
somewhat ambiguous. The agreed intent was that an implementation is required
to support only naked calls to function sc_spawn. In particular, it is not
crystal clear whether the following is legal:

    sc_process_handle h1, h2;
    SC_FORK
        h1 = sc_spawn(...),
        h2 = sc_spawn(...)
    SC_JOIN

Is the expression h1 = sc_spawn(...) to be regarded as a function call in this
context? The intent was that an implementation is not required to support
this, but it works fine in the ref sim.

[JA] 5.6.5 Note that for a process spawned from another spawned process,
get_parent_object could return a null pointer if the parent process has
already terminated.

[JA] 5.6.5 The LRM says that the name() method should return an empty string
for an invalid process handle. It is not totally clear whether this means a
null pointer or a pointer to a null string.

[JA] 5.6.6 says that in the before_end_of_elaboration or end_of_elaboration
callbacks, sc_get_current_process_handle may return either a handle to the
most recently created process or an invalid handle. I think this statement was
written BEFORE we decided to explicitly allow process macros in callbacks? I
do not see why sc_get_current_process_handle should not return the proper
handle if a process is created within the callback.

[BB] 5.6.6 The LRM should say that when called from sc_main,
sc_get_current_process_handle shall return an invalid handle.

[DL] 5.10.1 states that time shall be represented internally by an unsigned
integer of at least 64 bits. However, it does not provide any method to create
an sc_time object that corresponds to the largest representable time (the
sc_time constructor that takes a numeric value has double and sc_time_unit
arguments). One way to represent the maximum time would be to declare a
"constant" sc_time object (e.g. SC_MAX_TIME) that took account of the global
time resolution that applies to the current simulation.
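A small illustration of why the double-based constructor cannot express every
64-bit time value (the 1 fs resolution and the specific constant are chosen
purely for demonstration):

    #include <systemc.h>

    int sc_main(int, char*[]) {
        sc_set_time_resolution(1, SC_FS);
        // 2^53 + 1 femtoseconds fits comfortably in the >= 64-bit internal
        // representation, but a double mantissa has only 53 bits, so the
        // constructor argument silently rounds down to 2^53.
        double ticks = 9007199254740993.0;    // 2^53 + 1, rounded to 2^53 as a double
        sc_time t(ticks, SC_FS);
        std::cout << t.value() << std::endl;  // prints 9007199254740992
        return 0;
    }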
[GM, DL, JA] 5.11 Within the sc_module::before_end_of_elaboration() callback
there is no way to reliably determine whether or not a port has been bound by
the user. An implementation is permitted to execute the port binding at any
time from the call to the port binding operator through to immediately before
the call to end_of_elaboration. It would be useful to be able to determine
whether or not a port has been bound in order to perform further elaboration,
e.g. to tie off unbound ports.

It is interesting to note that the SystemC LRM has the following statement in
section 5.11.4, which suggests this approach: "... it is convenient to bind an
otherwise unbound port from the before_end_of_elaboration() callback of the
port instance itself"

A problem with this approach arises with delay-bound ports. Notwithstanding
section 5.11.4 cited above, in section 5.11.8.1 the SystemC LRM allows the
value returned by the sc_port::size() method to be implementation-defined
during the before_end_of_elaboration() callback (in fact, the OSCI proof of
concept implementation will return 0 at this time for a delay-bound port). The
full statement in 5.11.8.1 says:

"Member function size shall return the number of channel instances to which
the port instance for which it is called has been bound. If member function
size is called during elaboration and before the callback end_of_elaboration,
the value returned is implementation-defined because the time at which port
binding is completed is implementation defined. NOTE: The value returned by
size will be 1 for a typical port but may be 0 if the port is unbound or
greater than 1 for a multiport."

It would be nice to think that we could do our binding of unused interfaces in
the end_of_elaboration callback, but in 4.4.2 under end_of_elaboration the LRM
says:

"The following constructs shall not be used directly or indirectly within
callback end_of_elaboration:
a) The instantiation of objects of class sc_module, sc_port, sc_export,
   sc_prim_channel
b) Port binding
c) Export binding"

So we are in a bit of a Catch-22 on this. Of course, we could require the user
to specify to the processor module constructor which ports he will be binding
to; however, this is not ideal from a user convenience point of view (in fact,
given the number of potential interfaces, it is practically impossible).

We have an inelegant workaround to this problem, but providing a new
introspection routine - e.g. sc_port::eventual_size() or something like that -
would be helpful. It could be defined as:

"Member function eventual_size shall return the number of channel instances to
which the port instance for which it is called has been bound, or will
eventually be bound (due to delayed binding), as is known to SystemC at the
time the member function is called. If member function eventual_size is called
during elaboration and before the callback end_of_elaboration, the value
returned will be the total of the number of channel instances to which the
port instance for which it is called has been bound, plus the number of
channel instances to which the port instance will be bound due to delayed
binding, as far as they have been registered up to this point."
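For reference, a sketch of the 5.11.4 approach quoted above (the names
tie_off_out and stub are illustrative). It also shows exactly where the
delayed-binding problem bites: the size() test is implementation-defined at
this point, which is what motivates the eventual_size() suggestion.

    #include <systemc.h>

    SC_MODULE(tie_off_out) {
        sc_out<bool> out;
        sc_signal<bool>* stub;               // created only if needed

        SC_CTOR(tie_off_out) : out("out"), stub(0) {}

        virtual void before_end_of_elaboration() {
            if (out.size() == 0) {           // implementation-defined here;
                                             // a delay-bound port may also read 0
                stub = new sc_signal<bool>("stub");
                out.bind(*stub);
            }
        }

        ~tie_off_out() { delete stub; }
    };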
[SS]
- Want to be able to access interfaces that have already been bound in
  before_end_of_elaboration, so that you can call functions to retrieve event
  references, tie off unbound ports, query the number of interfaces bound so
  far, etc. The current LRM does not guarantee that binding is complete in
  before_end_of_elaboration(), but port access in before_end_of_elaboration
  could be useful.
- Want the order of interfaces that are bound to be consistent across all
  implementations. And want to remove the current rule in the LRM that states
  that duplicate interfaces are not allowed to be bound to a multiport.

These two goals are somewhat in conflict, however, since ports can still be
bound in before_end_of_elab. If ports can be accessed via operator[] in
before_end_of_elab, then it becomes very difficult/impossible to ensure that
interfaces are maintained at the same index after elaboration is completed
(via complete_binding).

Solution: sc_port::operator[] and sc_port::size() should be explicitly
disallowed prior to end_of_elaboration. The order of interfaces returned by
operator[] should be specified in the LRM to match current refsim behavior,
and duplicate bindings should now be allowed. Also, a new interface iterator
should be added to sc_port, something like

    interface_iterator sc_port::interface_iterator();

This iterator IS usable in before_end_of_elab, and provides unordered access
to all interfaces that can possibly be seen at that stage.

(JA) My initial thought is that we agreed a multiport should be regarded as a
set, not an ordered list, and hence should be unordered and with no
duplicates. Use array ports if you want an ordered list.

[MV] 5.11.2 The LRM shows:

    template <...int N...>
    class sc_port
    ...
        void operator() ( sc_port<IF,N>& );

The two N's should be independent, i.e. the bind methods should be

    template <int M>
    void operator() ( sc_port<IF,M>& );    // NOTE: M need not equal N

[JA] 5.15.1 The LRM does not define what kind() should return for the
implementation-defined classes of process objects.

[JA] 5.15.4 The second assertion below fails. sc_start() creates two top-level
processes named "method_p_0" and "thread_p_0". This contradicts IEEE 1666
5.15.4: "An implementation shall not add any names to this namespace other
than the hierarchical names of sc_objects explicitly constructed by an
application."

    #include <systemc.h>
    #include <cassert>

    int sc_main(int argc, char* argv[])
    {
        assert(sc_get_top_level_objects().size() == 0);
        sc_start();
        assert(sc_get_top_level_objects().size() == 0);
        return 0;
    }

This is an artifact of how the runnable queues are maintained. The head of the
queue in both cases is a process that is never executed, but because there is
always a head there is no need to check pointers when adding processes to the
queue. The two head processes used to be invisible because we did not register
processes whose name was a null pointer, or whose first character was null.
But at some point it was decided to place every process in the object
registry, so they now appear.

[AG] The way we fixed this in our simulator is by adding another parameter to
the sc_object named constructor:

    sc_object(const char* nm, bool register_if_nm_zero = true);

Implementation objects (like the processes you describe below) invoke the
sc_object ctor with arg values of "0" and "true" respectively. Recall that we
had a long debate in the LWG on whether objects created via the sc_object
default ctor and the named ctor with a 0 argument should be registered or not
- the refsim did not use to register these objects; it was decided that the
LRM would mandate all objects be registered. I had raised the issue of
implementation objects that want to stay hidden, and the consensus was that
implementations were free to figure out a way to hide these objects. I believe
the above implementation is a superset of what the LRM dictates, and hence is
still LRM-compliant.

[BB] [AG] 5.15.7 The LRM does not say what sc_object::get_parent_object()
should return if the parent object has been deleted. This can happen if the
parent is a spawned process. sc_process_handle::get_parent_object would return
a null pointer in this case.

[JA] 6.3.2 The LRM definition of class sc_signal_write_if is incorrect. It
should be:

    class sc_signal_write_if : virtual public sc_interface { ...

[BB] 6.4.4 The LRM states that it is an error for multiple processes to write
to an sc_signal. The LRM also says it is ok to write to an sc_signal during
elaboration in order to initialize it.
The category of writes that the LRM does not specify are mainly those coming
from sc_main and from other unusual places, like phase callbacks (e.g.
start_of_simulation), a channel's update phase, etc. This can all be
generalized under the umbrella of writing to a signal from a non-process
context (the current process is 0). We think, extrapolating what the LRM says,
all such writes to a signal can be considered ok, and do not contribute as a
driver. It seems reasonable to have a consistent rule like: sc_main
(generally, any non-process context) never counts as a writer when writing to
a signal.

[JA] My take on it is that the LRM is explicit that sc_signal::write() calls
request_update(), and that update() changes the value and notifies an event.
Moreover, request_update() can be called during elaboration (it is explicitly
allowed in the callbacks). When multiple writes() are made in the same phase,
the most recent write() wins. Writes from multiple process instances are
forbidden, the (implicit, unstated) reason being that the behaviour would be
non-deterministic. Writes from outside processes are permitted, the (implicit,
unstated) reason being that the behavior would be deterministic (because
elaboration is single-threaded). I think that hangs together? However, I don't
think there is a strong reason to change the OSCI simulator. Old code should
still work (because the OSCI sim supports many deprecated features). New code
should comply with 1666 (whether or not a tool implements the checks
properly). In time, the OSCI sim may get changed to be less tolerant of legacy
code.

[AG] The existing simulator ignores initializations by sc_main if they occur
before any process writes, but reports an error if sc_main tries to change a
value that has been written by a process. I think I should just fix things to
ignore writes out of the simulator's purview, e.g. by sc_main.

[BB/JA] Need an LRM clarification to that effect.

[BB] 6.4.4 The LRM clearly states that having multiple processes write to an
sc_signal is an error. In the refsim, the implementation is somewhat flaky.
The multiple writer check is performed only if the DEBUG_SYSTEMC macro is
defined, and is liable to break for pre-defined signals like sc_signal<bool>
etc. if the systemc library is not compiled with DEBUG_SYSTEMC defined. It
would be nice to have a cleaner solution.

From user experience, it turns out that people hardly ever have DEBUG_SYSTEMC
on, and are often surprised in the case that DEBUG_SYSTEMC is on and the check
reports an error. It is not uncommon for a model to use two different
processes that write to the same signal in a mutually exclusive fashion, e.g.
at the posedge and negedge of a clock, respectively. People expect such models
to work and are surprised that the check rejects this.

In view of all of the above, in the past Stuart has proposed that this fact be
made explicit in the signal declaration by adding a second template parameter
to sc_signal that specifies whether multiple writers are ok or not, with the
default value being not ok. I'm trying out an implementation at my end that
explores this option.

    enum sc_multiple_sig_writers_policy {
        SC_MULTIPLE_WRITERS_NOT_OK,    // default
        SC_MULTIPLE_WRITERS_OK
    };

    template <class T,
              sc_multiple_sig_writers_policy POL = SC_MULTIPLE_WRITERS_NOT_OK>
    class sc_signal { ... };

If the template parameter value is SC_MULTIPLE_WRITERS_NOT_OK, then
sc_signal::write() checks for multiple writers and reports an error if more
than one process writes to it. This check always happens irrespective of the
DEBUG_SYSTEMC flag.
If the template parameter value is SC_MULTIPLE_WRITERS_OK, then
sc_signal::write() allows multiple writers and the last one wins.

This is the basic proposal for sc_signal. Typically, users won't bother with
the second template parameter, and by default it would perform the check, thus
being LRM compliant. In the cases where users do want the convenience of
multiple writers, they can specify the second template parameter value to be
SC_MULTIPLE_WRITERS_OK.

A related topic is for IP vendors writing a module with output ports - these
output ports will be bound to signals externally, and this is not controlled
by the IP vendor. However, the IP vendor may also choose to have the
convenience of using multiple processes to write to an output port, in which
case it will be an error if the sc_signal bound to that port does not allow
multiple writers. There should then be a way for an IP vendor to specify that
you can only connect an sc_signal that allows multiple writers to this output
port. This is similar to the concept of the sc_out_resolved port, which can
only be bound to an sc_signal_resolved. The port proposal is:

    template <class T>
    class sc_out_mw : public sc_out<T> { ... };

    template <class T>
    class sc_inout_mw : public sc_inout<T> { ... };

- Such a port cannot be bound to a resolved signal (sc_signal_resolved or
  sc_signal_rv) because it is inconsistent for the port to say "I have
  multiple processes writing to me and the last one wins" and at the same time
  ask for resolution.
- Such a port cannot be bound to an sc_signal with the default
  SC_MULTIPLE_WRITERS_NOT_OK policy because it will generate a run-time error
  due to multiple processes writing to the signal via the port.
- Such a port can only be bound to an sc_signal with the
  SC_MULTIPLE_WRITERS_OK policy.
- In its end_of_elaboration, this port type will check that it is bound to the
  correct type of sc_signal.

[AF] 6.10.5 Typo in the prototype of function sc_trace. The second argument
should have type const sc_inout&, not const sc_in&.

[JA] 6.27 and 6.29 sc_mutex and sc_semaphore. Each of these two classes is
derived from sc_prim_channel, implying that they cannot be instantiated during
simulation. This is an unnecessary restriction which prevents the dynamic
creation of mutexes and semaphores for use in synchronizing dynamic processes.

[DB] 6.27 sc_mutex. I sent patches in for this sc_mutex/sc_semaphore bug...
Basically, if a process locks an sc_mutex and then subsequently locks it
again, the simulation will lock up. The problem is that the corresponding
software mutex does not have this behavior, and software folks would not
expect this, even though technically this seems like a violation of the mutex
API (calling lock twice). It was suggested that this should succeed, and
simply issue an SC_WARNING.
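A minimal sketch of the lock-up described above (module and process names are
illustrative):

    #include <systemc.h>

    SC_MODULE(double_lock) {
        sc_mutex m;

        SC_CTOR(double_lock) : m("m") { SC_THREAD(run); }

        void run() {
            m.lock();
            m.lock();     // same process locks again: it suspends waiting for
                          // itself to unlock, so the simulation hangs here
            m.unlock();
        }
    };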
[BB] 6.29 sc_semaphore. The term "number of keys" should be defined up front
as the permitted number of concurrent accesses to the semaphore, specified
when constructing the semaphore.

post(): If the semaphore value is equal to the number of keys (i.e. the
semaphore is unlocked), member function post() shall return the value -1. If
the semaphore was not locked by the calling process (via a wait() or trywait()
call), member function post() shall return the value -1. If the semaphore was
locked by the calling process, member function post shall increment the
semaphore value and return the value 0. If processes exist that are suspended
and are waiting for the semaphore value to be incremented, exactly one of
these processes shall be permitted to decrement the semaphore value (the
choice of process instance being nondeterministic) while the remaining
processes shall suspend again. This shall be accomplished within a single
evaluation phase; that is, an implementation shall use immediate notification
to signal the act of incrementing the semaphore value to any waiting
processes.

[JA] 6.29.4 Typo in the description of "post". Says "the act incrementing".
Should say "the act of incrementing".

[AF] 6.30.4 The default constructor is illegal, since according to 5.3.3 a
constructor for a class derived from sc_module is required to have exactly one
parameter of type sc_module_name. There is a simple solution for fixing
sc_event_queue. Collapse the two signatures:

    sc_event_queue();
    sc_event_queue( sc_module_name name_ );

onto the single signature:

    sc_event_queue( sc_module_name name_ = sc_gen_unique_name( "event_queue" ) );

[DG] 7.2.1 Typo. "principal" should be "principle".

[DL] 7.2.3 The LRM is correct, but I think it should recommend that
applications call member function end for any deferred contexts that have been
activated by calling their begin member function. The problem is due to the
order in which the context destructors are called when the contexts go out of
scope. This will be the order in which the contexts were created. Since the
destructor attempts to restore the previously active context, the order in
which they are called is important.

[AF] 7.3 The OSCI reference simulator accepts case-insensitive string literal
prefixes, i.e. "0b", "0B", "0d", "0D" and so on. However the LRM seems to
allow only lower case.

[BP] 7.4 We recently noticed that the signatures of the IEEE 1666-documented
functions sc_value_base::concat_get_data() and
sc_value_base::concat_get_ctrl() have changed in OSCI 2.2 after 64-bit
porting. These used to take an "unsigned long*" argument; now they take an
"sc_digit*" argument. Should this be documented anywhere (like in the
RELEASENOTES Incompatibilities section, or in the IEEE Errata)? Do you know if
there are other places that are affected by the introduction of the sc_digit
type?

[AG] Anywhere we passed an unsigned long* in a signature now uses sc_digit*,
which is currently an unsigned int. This was necessary for 64-bit support
because long tracks the size of a pointer in C/C++, so on a 64-bit machine a
long or unsigned long is 64 bits, not 32 bits. The files listed below contain
instances of the sc_digit type:

    sysc/datatypes/bit/sc_bit_proxies.h
    sysc/datatypes/bit/sc_bv_base.cpp
    sysc/datatypes/bit/sc_bv_base.h
    sysc/datatypes/bit/sc_lv_base.cpp
    sysc/datatypes/bit/sc_lv_base.h
    sysc/datatypes/bit/sc_proxy.h
    sysc/datatypes/int/sc_int_base.cpp
    sysc/datatypes/int/sc_int_base.h
    sysc/datatypes/int/sc_nbcommon.inc
    sysc/datatypes/int/sc_nbdefs.h
    sysc/datatypes/int/sc_nbexterns.cpp
    sysc/datatypes/int/sc_nbexterns.h
    sysc/datatypes/int/sc_nbfriends.inc
    sysc/datatypes/int/sc_nbutils.cpp
    sysc/datatypes/int/sc_nbutils.h
    sysc/datatypes/int/sc_signed.cpp
    sysc/datatypes/int/sc_signed.h
    sysc/datatypes/int/sc_signed_subref.inc
    sysc/datatypes/int/sc_uint_base.cpp
    sysc/datatypes/int/sc_uint_base.h
    sysc/datatypes/int/sc_unsigned.cpp
    sysc/datatypes/int/sc_unsigned.h
    sysc/datatypes/int/sc_unsigned_subref.inc
    sysc/datatypes/misc/sc_concatref.h
    sysc/datatypes/misc/sc_value_base.cpp
    sysc/datatypes/misc/sc_value_base.h
    systemc.h

[BP] My point was, users with code that uses concat_get_data() and
concat_get_ctrl() and passes in an "unsigned long*" as the first argument will
no longer compile with the latest 2.2 kit. They have to change their code to
pass in an "sc_digit*" instead. The chances of users running into this are
likely minor.
It's kind of implied in your 64-BIT LINUX section below; I think it would be
nice to make this explicit by adding a couple of lines to the section below -
like "For example, concat_get_data() ...". WDYT?

    - SUPPORT FOR 64-BIT LINUX

    64-bit native mode for Linux is now a SystemC platform. The implementation
    uses Posix threads for thread support. The major code change is the use of
    the sc_digit data type to represent values that were previously unsigned
    long. These occurred in the concatenation code and the internal
    representations for sc_signed, sc_unsigned, bit classes, and some fixed
    point classes.

[DL] sc_value_base is in the LRM, but as a daggered class. I suspect this is a
detail that will go completely unnoticed by the majority of SystemC users, so
my initial response is that it is an area where the OSCI simulator is ahead of
the LRM (there was no support for 64-bit Linux platforms at the time the LRM
was written). I would suggest adding an erratum to the effect that the
unsigned long* arguments to concat_get_data and concat_get_ctrl have been
replaced by the implementation-dependent type sc_digit in order to provide
support for platforms where the length of an unsigned long variable may be
greater than 32 bits. I would regard the removal from the LRM of references to
"standard" C++ types that have different sizes on different platforms as an
enhancement for a future revision - for the time being I think an erratum
would be fine.

[DVC, correction by DL] 7.6.4.7 The text in 7.6.4.7 is incorrect - it should
be similar to the description of the sc_signed bitwise shift operators in
7.6.3.7. I think the two paragraphs before Table 13 should be replaced by:

"The left shift operator shall return a result with a word length that is
equal to the word length of its sc_unsigned operand plus the right (integer)
operand. Bits added on the right-hand side of the result shall be set to zero.
The right shift operator shall return a result with a word length that is
equal to the word length of its sc_unsigned operand. Bits added on the
left-hand side of the result shall be set to zero. The behavior of a shift
operator is undefined if the right operand is negative."

[SS] 7.9 In the IEEE LRM, sc_lv<> and sc_lv_base don't have the conversion
function to_uint64(), but these are in the ref sim and are also mentioned in
the "common characteristics" section on the data types. Perhaps there are
other explicit conversion functions that were inadvertently omitted?

[AF] 7.9 In the data types section, there are inconsistencies in the
conversion functions to_uint64 and to_int64. Each of the integer or vector
types says something like "See section 7.2.9...". However, they then list the
conversions under each data type, but not always consistently with 7.2.9. See
for instance page 246, section 7.9.3.2 - to_int64 and to_uint64 are missing,
but on page 248, section 7.9.3.6, it says "Member functions that return the
integer equivalent of the bit representation shall be provided to satisfy the
requirements of 7.2.9". I think this affects sc_bv_base and sc_lv_base at
least.

[DL] to_int64() and to_uint64() are missing from all of the classes in section
7.9. These operators should be added for the following classes:

    sc_bv_base
    sc_lv_base
    sc_subref_r
    sc_concref_r

In the OSCI simulator these methods are provided by the sc_proxy class.
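For illustration, the kind of code at issue (this should compile against the
OSCI kit, where sc_proxy supplies the conversions, even though the 7.9 class
definitions in the LRM do not list them; the vector widths and values are
arbitrary):

    #include <systemc.h>

    int sc_main(int, char*[]) {
        sc_dt::sc_bv<40> bv("110010");
        sc_dt::sc_lv<40> lv("011001");
        sc_dt::int64  i = bv.to_int64();    // provided via sc_proxy in the OSCI kit
        sc_dt::uint64 u = lv.to_uint64();   // likewise, but missing from 7.9 in the LRM
        std::cout << i << " " << u << std::endl;
        return 0;
    }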
[AF] 7.12.2 The meaning of the enum literal SC_NOBASE is not defined anywhere
in the LRM.

[BB] 8.1 Trace Files.

-------------------------------------------------------------------------------

The trace_cycle(true) call within crunch() is for recording glitches at the
same time, but the trace_cycle(false) call after calling crunch() in
sc_simcontext::simulate() is for recording the final value of the signal at
that time. IEEE 1666 does not document the nuances here, so we are left to the
implementation's interpretation.

trace_cycle(false) should only be called before advancing time, not only for
performance but for semantic correctness too - only then is it guaranteed to
have recorded the final value at that time. Even before the semantics change
of sc_start(0) to do one delta instead of all deltas, the trace_cycle(false)
call after crunch() was, strictly speaking, not correct. The old code is given
below. sc_start(0) would do all the deltas at the current time and then go
back to sc_main; at this point we don't know that there will be no more
activity at the current time, hence trace_cycle(false) should not be called.
The invocation of trace_cycle(false) further below, just before advancing
time, is correct.

    sc_simcontext::simulate()
    {
        ....
        if ( duration == SC_ZERO_TIME ) {
            ......
            crunch();
            .....
            if( m_something_to_trace )
                trace_cycle( /* delta cycle? */ false );   <--- incorrect
            .....
        }

        // NON-ZERO DURATION: EXECUTE UP TO THAT TIME:
        do {
            ......
            crunch();
            ......
            if( m_something_to_trace ) {
                trace_cycle( false );                      <--- correct
            }
            .......
            do {
                t = next_time();                           <--- advance time
                ...
            }
            ........
        }
        .........
    }

With the semantic change of sc_start(0) to do one delta cycle, calling
trace_cycle(false) after every eval+update is clearly not right. If you have a
series of sc_start(0) calls in sc_main interspersed with signal writes, you
will get multiple "final values" dumped to the trace file. However, the
problem is that if we don't call trace_cycle(false) before returning to
sc_main from an sc_start(0), there is the possibility that sc_main() may close
the trace file and exit, and then we never get the chance to write out the
final value to the trace file via trace_cycle(false).

This has some similarities with the "when to do end_of_simulation callbacks"
issue. In that case we had decided the end_of_simulation callbacks will only
be invoked if an sc_stop was hit. We can say something similar here - for
sc_start(0) calls, final values for traced objects will not be dumped out
before the call returns; rather, the final value will be dumped out at the
next sc_start() call or sc_stop() call. We need to discuss this further.

I researched this extensively today and ran into another inaccuracy. The first
call to trace_cycle(false) causes initial values to be written out ($dumpvars
in vcd files). This first call does not happen exactly at the initial time but
a little later, after simulation has started and deltas have run - after the
first crunch. This makes it miss some transitions at time 0. A good place to
insert the initial call is in prepare_to_simulate() after the update call. The
test /vobs/lima_test/test/automatic/reference_2.1/tracing/vcd_trace/test13
demonstrates the difference.
    // update phase
    m_execution_phase = phase_update;
    m_prim_channel_registry->perform_update();
    if (m_signal_registry) {
        sc_signal_registry_base::perform_all_updates(m_signal_registry);
    }
    m_execution_phase = phase_notify;

    // dump initial values of traced objects
    if( m_something_to_trace ) {
        trace_cycle( false );
    }

-------------------------------------------------------------------------------

[AF] 8.1.6 There is a mistake in the 4 overloadings of sc_trace for
sc_dt::int64&, int64*, uint64&, and uint64*. The final argument should include
the expression sizeof(sc_dt::int64) etc., not sizeof(long).

[SS,BB] 8.3 Page 390, very bottom. SC_DEFAULT_ERROR_ACTIONS needs SC_DISPLAY
added in. This is a bug also in the current 2.1v1 simulator -- it needs to be
added into sc_report.h.

I've toyed with this before; it's not exactly a bug, although apparently
that's how it appears. SC_DEFAULT_ERROR_ACTIONS has SC_THROW in it, and the
error message is embedded in the exception thrown. The practice is to catch
the exception and at that point print out the message:

    catch( const sc_report& ex ) {
        cout << "\n" << ex.what() << "\n";
    }

This is how the kernel always handles errors, and the user too. It is not
entirely unreasonable, because a catch block HAS to be there for the program
not to crash, and at that point printing out the message seems ok.

Should the LRM obligate ALL handlers that catch sc_reports to do a display
(both the implementation and those in the application)? And if so, shouldn't
there be a way to ensure that the SC_DISPLAY action and
sc_report_handler::handler_func is actually performed, not just a stream to
cout? Should the LRM explicitly recommend that SC_THROW and SC_DISPLAY are
never used together? Should it recommend that SC_THROW is used in isolation
and that everything else (e.g. SC_LOG, SC_DISPLAY) is handled by code catching
the report?

-------------------------------------------------------------------------------
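For reference, a sketch of the catch-and-display practice described in the 8.3
item above, as it typically appears in an application's sc_main:

    #include <systemc.h>

    int sc_main(int argc, char* argv[])
    {
        try {
            sc_start();
        }
        catch( const sc_report& ex ) {
            std::cout << "\n" << ex.what() << "\n";   // the "display" happens here
            return 1;
        }
        return 0;
    }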