C++, volatile member functions, and threads


Post by Eric M. Hopper » Sun, 29 Jun 1997 04:00:00



After realizing that you could make member functions volatile as
well as const, it occurred to me that 'thread-safe' member functions
ought to be marked as volatile, and that objects shared between several
threads ought to be declared as volatile.

        Perhaps the ability to declare member functions volatile is
limited to IBM's C++/Set for OS/2, or g++ 2.7.2.1, but I would guess not.

        My reasoning on this is that if you have a member function that
can be called by multiple threads, it should treat the member variables
as volatile (because multiple threads might be accessing them) until it
obtains a mutex or lock of some sort on the member variables.

        Declaring objects as volatile that are accessed by multiple
threads is a simple extension of declaring variables of basic types to
be volatile when they are accessed by multiple threads.

        This doesn't seem to be current practice.  Is there a good
reason for this that I'm missing, or is it simply an oversight?

Have fun (if at all possible),
--
This space for rent.  Prospective tenants must be short, bright, and
witty.  No pets allowed.  If you wish to live here, or know someone who
does, you may contact me (Eric Hopper) via E-Mail at

    -- Eric Hopper, owner and caretaker.

 
 
 

C++, volatile member functions, and threads

Post by Bryan O'Sullivan » Sun, 29 Jun 1997 04:00:00


e> After realizing that you could make member functions volatile as
e> well as const, it occurred to me that 'thread-safe' member functions
e> ought to be marked as volatile, and that objects shared between
e> several threads ought to be declared as volatile.

No and no.

e> My reasoning on this is that if you have a member function that can
e> be called by multiple threads, it should treat the member variables
e> as volatile (because multiple threads might be accessing them)
e> until it obtains a mutex or lock of some sort on the member
e> variables.

Your code should not be inspecting or changing the values of variables
in any way unless you have obtained mutexes on those variables.  The
semantics of mutexes under POSIX threads, at least, guarantee that the
values of your variables will be sane once you obtain appropriate
mutexes (this is a very short gloss over what really goes on, but it's
the right basic idea).

Declaring your variables volatile will have no useful effect, and will
simply cause your code to run a *lot* slower when you turn on
optimisation in your compiler.

        <b

--
Let us pray:




 
 
 

C++, volatile member functions, and threads

Post by Eric M. Hopper » Mon, 30 Jun 1997 04:00:00




>> After realizing that you could make member functions volatile as
>> well as const, it occurred to me that 'thread-safe' member functions
>> ought to be marked as volatile, and that objects shared between
>> several threads ought to be declared as volatile.

> No and no.

        *chuckle* I see.  I disagree, but will state my reasons below.

>> My reasoning on this is that if you have a member function that can
>> be called by multiple threads, it should treat the member variables
>> as volatile (because multiple threads might be accessing them)
>> until it obtains a mutex or lock of some sort on the member
>> variables.

> Your code should not be inspecting or changing the values of variables
> in any way unless you have obtained mutexes on those variables.  The
> semantics of mutexes under POSIX threads, at least, guarantee that the
> values of your variables will be sane once you obtain appropriate
> mutexes (this is a very short gloss over what really goes on, but it's
> the right basic idea).

        Let me provide an example.  It, perhaps, isn't the best example,
but I think it illustrates my point:

-------------------------
class TS_RefCounter {
 public:
   // The following must be a type that the compiler can generate a
   // single instruction fetch for.
   typedef unsigned int count_t;

   inline void AddReference() volatile;
   inline void DelReference() volatile;
   inline count_t NumReferences() volatile;

 private:
   count_t _refct;
   mutex_t _mutex;

};

inline void TS_RefCounter::AddReference() volatile
{
   if (NumReferences() <= 1) {
      // Reference count <= 1, so only owned by one thread.
      _refct++;
   } else {
      mutex_lock(&_mutex);
      _refct++;
      mutex_unlock(&_mutex);
   }

}

inline void TS_RefCounter::DelReference() volatile
{
   count_t local_refct = _refct;

   if (local_refct <= 1) {
      // Reference count <= 1, so only owned by one thread.
      if (local_refct > 0) {
         _refct--;
      }
   } else {
      mutex_lock(&_mutex);
      _refct--;
      mutex_unlock(&_mutex);
   }

}

// No mutex lock needed.  Only reading value with atomic instruction.
inline TS_RefCounter::count_t TS_RefCounter::NumReferences() volatile
{
   return(_refct);
}

-------------------------

        This use can be attacked for one major reason.  Relying on
unsigned int to be a type that is read atomically by the CPU is a shaky
assumption that happens to be true on a large number of platforms.  The
code has the possibility of being brittle for this reason.  Also, the
trick of not obtaining a mutex when the reference count indicates that
more than one thread 'can't' own the object is also possibly somewhat
error-prone unless the reference count is strictly maintained.

        Another case in which you might want to do this is when you want
to have the same class work in both threaded and non-threaded contexts.
You could overload the public and protected member functions on whether
or not they were volatile.  The volatile ones could just obtain a mutex
and call the non-volatile version after a const_cast to get rid of the
volatile.
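
        For instance, a rough sketch of the idea (the class and names here
are made up, and I've used POSIX pthread_mutex_t rather than the mutex_t
calls from my example above):

-------------------------
#include <pthread.h>

class Counter {
 public:
   Counter() : _count(0) { pthread_mutex_init(&_mutex, 0); }

   // Non-volatile version: assumes the caller already has exclusive
   // access (single-threaded use, or the lock is already held).
   void Increment() { ++_count; }

   // Volatile version: grabs the mutex, then uses const_cast to strip
   // the volatile qualifier and calls the unsynchronized version.
   void Increment() volatile
   {
      pthread_mutex_lock(const_cast<pthread_mutex_t *>(&_mutex));
      const_cast<Counter *>(this)->Increment();
      pthread_mutex_unlock(const_cast<pthread_mutex_t *>(&_mutex));
   }

 private:
   unsigned int _count;
   pthread_mutex_t _mutex;
};
-------------------------

        A 'volatile Counter' shared between threads can then only call the
volatile (locking) overload, while a plain 'Counter' used by a single
thread calls the cheap one.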

> Declaring your variables volatile will have no useful effect, and will
> simply cause your code to run a *lot* slower when you turn on
> optimisation in your compiler.

        *nod* That's why you should use const_cast, and call
non-volatile versions after you've obtained a mutex.

Have fun (if at all possible),
--
This space for rent.  Prospective tenants must be short, bright, and
witty.  No pets allowed.  If you wish to live here, or know someone who
does, you may contact me (Eric Hopper) via E-Mail at

    -- Eric Hopper, owner and caretaker.

 
 
 

C++, volatile member functions, and threads

Post by Dave Butenhof » Tue, 01 Jul 1997 04:00:00





> >> After realizing that you could make member functions volatile as
> >> well as const, it occurred to me that 'thread-safe' member functions
> >> ought to be marked as volatile, and that objects shared between
> >> several threads ought to be declared as volatile.

> > No and no.

>         *chuckle* I see.  I disagree, but will state my reasons below.

Sorry, but Bryan's right.

>         Let me provide an example.  It, perhaps, isn't the best example,
> but I think it illustrates my point:

> inline void TS_RefCounter::AddReference() volatile
> {
>    if (NumReferences() <= 1) {
>       // Reference count <= 1, so only owned by one thread.
>       _refct++;
>    } else {
>       mutex_lock(&_mutex);
>       _refct++;
>       mutex_unlock(&_mutex);
>    }
> }

>         This use can be attacked for one major reason.  Relying on
> unsigned int to be a type that is read atomically by the CPU is a shaky
> assumption that happens to be true on a large number of platforms.  The
> code has the possibility of being brittle for this reason.  Also, the
> trick of not obtaining a mutex when the reference count indicates that
> more than one thread 'can't' own the object is also possibly somewhat
> error-prone unless the reference count is strictly maintained.

OK, "brittle". But you've underestimated how "brittle" the code is. It's
not just a matter of assuming that the read is atomic. You're assuming
that the compiler will generate SMP-atomic fetch/increment/store
sequences. Not likely! Plus, you've actually got a fetch/test
(NumReferences) followed by a separate fetch/increment/store -- the
entire sequence would have to be atomic for this transition from
"unthreaded" to "threaded" to work correctly (and, if it was, you
wouldn't need the mutex anyway). Your final caveat is absolutely
correct, and makes the entire adventure pointless -- "somewhat
error-prone unless the reference count is strictly maintained".
Absolutely. But you can only "strictly maintain" the reference count by
using a mutex, every time.

Two threads may both call AddReference simultaneously, both see _refct
<= 1, both increment _refct, and both store a new (incremented) value.
Your refct is then 2 (for example) when it should be 3. You'll now
switch into mutexed mode for the next reference -- but your reference
count is already wrong. (And you'll incorrectly switch back out of
mutexed mode after the next dereference.) This is possible on a
uniprocessor, not just on a multiprocessor, because your thread could be
timesliced between the fetch and the store.

Making the variable "volatile" doesn't help, but it does prevent the
compiler from doing its job.
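
For what it's worth, here's a rough sketch of what I mean by using the
mutex every time (the names are illustrative, and I've used POSIX pthread
calls rather than the mutex_t calls in your example):

   #include <pthread.h>

   class RefCounter {
    public:
      RefCounter() : _refct(0) { pthread_mutex_init(&_mutex, 0); }

      void AddReference()
      {
         pthread_mutex_lock(&_mutex);
         ++_refct;
         pthread_mutex_unlock(&_mutex);
      }

      void DelReference()
      {
         pthread_mutex_lock(&_mutex);
         if (_refct > 0)
            --_refct;
         pthread_mutex_unlock(&_mutex);
      }

      unsigned int NumReferences()
      {
         pthread_mutex_lock(&_mutex);   // even a "plain read" takes the lock
         unsigned int n = _refct;
         pthread_mutex_unlock(&_mutex);
         return n;
      }

    private:
      unsigned int _refct;
      pthread_mutex_t _mutex;
   };

No volatile anywhere, no guessing about how many threads own the object,
and the compiler remains free to optimize everything outside the locked
regions.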

>         Another case in which you might want to do this is when you want
> to have the same class work in both threaded and non-threaded contexts.
> You could overload the public and protected member functions on whether
> or not they were volatile.  The volatile ones could just obtain a mutex
> and call the non-volatile version after a const_cast to get rid of the
> volatile.

If you're writing an APPLICATION, (not a library), and you want to
choose whether to use threads at runtime, then, sure, you can use
non-thread-safe versions for non-threaded runs. But the non-threaded
versions don't need volatile and volatile won't help the threaded
versions.

Don't try this with a LIBRARY, though, (or an application that supports
callbacks of some sort from a library), because in this case your caller
is in charge of whether you're threaded, and you can't necessarily even
tell.

Note that Digital UNIX and Solaris (at least) provide ways to help you
write thread-safe libraries that minimize the overhead of
synchronization when running in a non-threaded process. Solaris provides
thread entry points in libc that are "stubs" -- just returning
immediately. The entry point symbols are preempted by the real thread
functions when the thread libraries are activated. (At least if they're
activated in the right order.) Digital UNIX takes a different route,
with a "TIS" (thread-independent services) API in libc, which has
non-thread stubs that are revectored dynamically into the thread library
when it initializes. (This works even for dynamic activation, and a
mutex, for example, that's locked by the stub tis_mutex_lock will still
be locked after activating the thread library, and can then be unlocked
successfully by the initial thread.)

/---------------------------[ Dave Butenhof ]--------------------------\

| 110 Spit Brook Rd ZKO2-3/Q18       http://members.aol.com/drbutenhof |
| Nashua NH 03062-2698       http://www.awl.com/cp/butenhof/posix.html |
\-----------------[ Better Living Through Concurrency ]----------------/

 
 
 

C++, volatile member functions, and threads

Post by Patrick TJ McPhee » Wed, 02 Jul 1997 04:00:00





% >

% > >

% > >
% > >> After realizing that you could make member functions volatile as
% > >> well as const, it occurred to me that 'thread-safe' member functions
% > >> ought to be marked as volatile, and that objects shared between
% > >> several threads ought to be declared as volatile.
% > >
% > > No and no.
% >
% >         *chuckle* I see.  I disagree, but will state my reasons below.
%
% Sorry, but Bryan's right.

Just to throw in my hundreds if not thousands of dollars worth, Eric's
idea has some merit, and Bryan did not give any particular reason
for disagreeing. All he said was `no, use mutexes, and volatile just
stops the compiler from optimising', which really isn't a correct
answer to Eric's suggestion. So, Bryan is wrong.

Now, it seems to me that Eric gave an example, which I've deleted but
it's probably still on your news server if you care to look, in which
he tried to demonstrate what he was talking about, but failed because
his example was not `thread safe'. Dave, in the message I'm replying to,
got caught up in pointing out that the example wasn't very good, and
ignored the original question.

Eric's idea will work if you design the class correctly. It cannot have
any public member variables, otherwise they could be accessed in a
non-thread-safe way. Any member functions you mark as volatile
have got to really be thread-safe. This means they have to be either
static or wrapped in mutexes.

I guess the advantage of this would be that, if you have some class in
which there are non-thread-safe methods, but there are also thread-safe
methods which would be useful in a global instance of the class, you
wouldn't have to worry about putting extra mutexes around calls to the
class, since the compiler will stop you from using the unsafe methods.

The disadvantage would be that you couldn't use any of the non-thread-safe
methods at all, and if there's some place where you use the class a lot,
you'd be going down on the mutex, then up, then down, then up, then
down and down and up and down and down and up and up and up, then down,
up, down, down, up up... I'm lost now, but that has to be more expensive
than going down on the mutex once, using the class a bunch, then going up
again.

Furthermore, if your actual global variable is really a pointer, this
won't give you anything -- you'll still have to wrap mutexes around
every access to the pointer.

I think Eric's idea is not bad, but probably won't buy you much, and
could easily lull you into a false sense of security. On the other
hand, since I've now suggested that everyone else who's contributed
to this thread is wrong in one way or another and I want to avoid
suggesting that I think _I'm_ right, what the hell is the volatile
keyword for if not for this situation? Isn't it to prevent a compiler
from doing something like this:

 myhappyclass x;

 void myhappyfunction()
 {
    int y;
    x.down();
    y = x.y;
    x.up();

    /* while this is going on, another thread changes x.y */
    blahblahblah(y);

    x.down();
    y = x.y;            /* optimiser drops this as unnecessary */
    x.up();

    blahblahblahblahblah(y);
 }

if x isn't marked as volatile, then it seems to me an optimiser might
drop the second assignment to y, and blahblahblahblahblah() will be
called with the wrong argument. So maybe Eric is right after all.
--

Patrick TJ McPhee
East York  Canada

 
 
 

C++, volatile member functions, and threads

Post by Bryan O'Sullivan » Wed, 02 Jul 1997 04:00:00


p> Just to throw in my hundreds if not thousands of dollars worth,
p> Eric's idea has some merit, and Bryan did not give any particular
p> reason for disagreeing.

I discussed my reasons with Eric in email.

p> All he said was `no, use mutexes, and volatile just stops the
p> compiler from optimising', which really isn't a correct answer to
p> Eric's suggestion.  So, Bryan is wrong.

No, Bryan is right.

[Much blather deleted.]

        <b

--
Let us pray:



 
 
 

C++, volatile member functions, and threads

Post by Dave Butenhof » Thu, 03 Jul 1997 04:00:00



> p> All he said was `no, use mutexes, and volatile just stops the
> p> compiler from optimising', which really isn't a correct answer to
> p> Eric's suggestion.  So, Bryan is wrong.

> No, Bryan is right.

> [Much blather deleted.]

As Bryan so eloquently stated, "No, Bryan is right". He did give the
correct answer, and anyone ignoring the advice does so at their own
(substantial) peril.

Look, I'm sorry to come down on someone, and it's noble of you to stand
up against us naysayers. But the dangerous misinformation was
distributed as fact and as a "cool optimization." And if that
misinformation were not corrected a lot of innocent people would get
hurt. Being "noble" doesn't make you, or Eric, right.


> Dave, in the message I'm replying to,
> got caught up in pointing out that the example wasn't very good, and
> ignored the original question.

No, I "got caught up" in trying to explain what was wrong with the
concept, which WAS the original "question" (though it was more of an
assertion than a question).

You cannot avoid synchronization when you think there may be only one
thread using the object, because TWO threads (or more) may do that at
the exact same time. The result is no synchronization at all, and this
is bad. You can only KNOW there is only one thread using the object if
you've made that decision using appropriate synchronization.

It's true that "synchronization" doesn't always mean a mutex. If your
situation is sufficiently constrained and well-understood, and if you're
very careful, there are some cases where you can rely on subtle
implications of the POSIX memory rules to avoid a mutex. Such cases are
few and far between, are not generally or widely applicable, and are far
more error-prone and complicated than some might be led to believe by
looking at simplistic checks for "reference count greater than one".

Yes, the code as posted might work, if that class is used only in
carefully controlled circumstances. The necessary restrictions clearly
were not envisioned by the author, because they invalidate the apparent
generality of the "reference count" model. It would work only if the
reference count were incremented beyond 1 (forcing thread safe behavior)
by the main program BEFORE any threads were created. This would ensure
that all threads see a value greater than 1 and use the mutex as they
should. (It'll also work if the application architecture ensures that an
object will only be used in one thread, but that's not very
interesting.) This code will never work (except by sheer accident) if
the class is in a library, because the library can't know whether more
than one thread exists or may choose to use the object.

That's why the only real answer is "don't do that". If you "know" it'll
work in your case, fine. It's your code. But, in my professional and
expert opinion, the risk far outweighs any possible benefit. Because
even if you get it right, someone else who doesn't understand all the
details will apply the code in an inappropriate way later and break the
program.

/---------------------------[ Dave Butenhof ]--------------------------\

| 110 Spit Brook Rd ZKO2-3/Q18       http://members.aol.com/drbutenhof |
| Nashua NH 03062-2698       http://www.awl.com/cp/butenhof/posix.html |
\-----------------[ Better Living Through Concurrency ]----------------/

 
 
 

C++, volatile member functions, and threads

Post by Bryan O'Sullivan » Thu, 03 Jul 1997 04:00:00


p> [this is Eric now:]
p> % After realizing that you could make member functions volatile as
p> % well as const, it occurred to me that 'thread-safe' member functions
p> % ought to be marked as volatile, and that objects shared between several
p> % threads ought to be declared as volatile.

p> Why shouldn't these objects be considered volatile?

Because POSIX memory semantics make it unnecessary to declare them as
volatile, and because declaring variables as volatile will inhibit
several compiler optimisations.  You gain nothing in safety and lose
in performance.

        <b

--
Let us pray:



 
 
 

C++, volatile member functions, and threads

Post by Patrick TJ McPhee » Fri, 04 Jul 1997 04:00:00




[...]


% >
% > Dave, in the message I'm replying to,
% > got caught up in pointing out that the example wasn't very good, and
% > ignored the original question.
%
% No, I "got caught up" in trying to explain what was wrong with the
% concept, which WAS the original "question" (though it was more of an
% assertion than a question).

The original question, and I struggled with that word, was

[this is Eric now:]
% After realizing that you could make member functions volatile as
% well as const, it occurred to me that 'thread-safe' member functions
% ought to be marked as volatile, and that objects shared between several
% threads ought to be declared as volatile.

Later, he introduced his non-thread-safe member function as an example
of this, but that's nothing to do with the original concept. So far
as I can see, the original posting didn't mention optimisation, and
the idea has nothing to do with optimisation.

I've deleted the rest of Dave's post, but I'll say again that it
doesn't address either the subject line of this thread or
the point of Eric's paragraph quoted above.

So sure, let's all agree that we shouldn't assume certain operations
will be done in a single instruction and are therefore safe. Let's
assume that you have to provide for thread synchronisation if you're
going to update shared objects. Now what about the original question.
Why shouldn't these objects be considered volatile?

--

Patrick TJ McPhee
East York  Canada

 
 
 

C++, volatile member functions, and threads

Post by Dave Butenhof » Fri, 04 Jul 1997 04:00:00


I really should ignore this deepening rat-hole. But, OK, just one more
try...


> Later, he introduced his non-thread-safe member function as an example
> of this, but that's nothing to do with the original concept. So far
> as I can see, the original posting didn't mention optimisation, and
> the idea has nothing to do with optimisation.

They exhibit the same essential flaw, the same misunderstanding of "what
it all means". This is of course no surprise, since the EXAMPLE was
indeed an example of the CONCEPT. It was substantially easier, and more
direct, to explain why the example wouldn't behave correctly than to
work with the generic concept. But they're the same thing, and I don't
see how you could consider comments on the example "irrelevant" to the
concept it demonstrates.

And as for "optimization"... if one is not interested in "optimizing",
then one has no motive whatsoever to risk the ill will of memory systems
by avoiding a mutex where it's appropriate to use a mutex. Therefore,
the only conceivable motivation for Eric's CONCEPT and EXAMPLE is a
desire to optimize. Thus, while he may or may not have used the word
"optimize", it would have been pointless to ignore that aspect of the
post.

> So sure, let's all agree that we shouldn't assume certain operations
> will be done in a single instruction and are therefore safe. Let's
> assume that you have to provide for thread synchronisation if you're
> going to update shared objects. Now what about the original question.
> Why shouldn't these objects be considered volatile?

The use of "volatile" is not sufficient to ensure proper memory
visibility or synchronization between threads. The use of a mutex is
sufficient, and, except by resorting to various non-portable machine
code alternatives, (or more subtle implications of the POSIX memory
rules that are much more difficult to apply generally, as explained in
my previous post), a mutex is NECESSARY.

Therefore, as Bryan explained, the use of volatile accomplishes nothing
but to prevent the compiler from making useful and desirable
optimizations, providing no help whatsoever in making code "thread
safe". You're welcome, of course, to declare anything you want as
"volatile" -- it's a legal ANSI C storage attribute, after all. Just
don't expect it to solve any thread synchronization problems for you.

Because of this flaw in reasoning, Eric's EXAMPLE of his CONCEPT was
neither correct nor an optimization.

I'd like to stop beating this to death. It's not fair to Eric, who
merely had the misfortune to be someone (like probably 95% of everyone
else) who didn't understand the intricacies of SMP memory systems and
thread synchronization. He proposed a shortcut, he was corrected, and I
suspect he (and certainly I) would like to move on to other matters and
stop dragging this (and him) through the dust. Please?

/---------------------------[ Dave Butenhof ]--------------------------\

| 110 Spit Brook Rd ZKO2-3/Q18       http://members.aol.com/drbutenhof |
| Nashua NH 03062-2698       http://www.awl.com/cp/butenhof/posix.html |
\-----------------[ Better Living Through Concurrency ]----------------/

 
 
 

C++, volatile member functions, and threads

Post by Tom Payne » Fri, 04 Jul 1997 04:00:00



[...]
: So sure, let's all agree that we shouldn't assume certain operations
: will be done in a single instruction and are therefore safe. Let's
: assume that you have to provide for thread synchronisation if you're
: going to update shared objects. Now what about the original question.
: Why shouldn't these objects be considered volatile?

When an object is declared to be volatile:
  * its values at sequence points become part of the program's behavior
    (which necessitates a lot of storing)
  * all register resident copies of its value become stale after a
    sequence point (which necessitates a lot of loading).  
The basic idea is that the variable might be an I/O register
controlling an external device and/or subject to asynchronous changes
by an external device.
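For instance (illustrative only), with

    volatile int flag;

    while (flag == 0)
        ;   /* must re-load flag on every iteration */

the compiler is required to perform the load of 'flag' each time around
the loop, whereas with a plain 'int' it would be entitled to read the
value once into a register and spin forever on the copy.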

Variables shared among uncoordinated threads suffer from exactly the
same problem, but, as I understand things, it is part of the POSIX
standard that, after acquiring a mutex, a thread will see only the
latest values of all shared variables.  This requirement might pose
difficulties with intermodule global register allocation.  With
standard compilation technology, all variables get flushed from
registers at a call to any function in another module.  
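
Roughly (again, names made up), the POSIX guarantee is about patterns
like this one:

    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    int payload;
    int ready;

    void producer(void)
    {
        pthread_mutex_lock(&m);
        payload = 42;                /* written before the unlock ... */
        ready = 1;
        pthread_mutex_unlock(&m);
    }

    int consumer(void)
    {
        int value = -1;
        pthread_mutex_lock(&m);      /* ... so visible after this lock */
        if (ready)
            value = payload;
        pthread_mutex_unlock(&m);
        return value;
    }

No volatile is needed on 'payload' or 'ready'; the lock/unlock pair is
what provides the visibility.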

Tom Payne

 
 
 

C++, volatile member functions, and threads

Post by Bryan O'Sullivan » Tue, 08 Jul 1997 04:00:00


e> Why should I pay the overhead of mutex locking for a local object
e> declared on the stack whose reference isn't passed to any other
e> functions just because I happen to have more than one thread in my
e> program?

You don't, nor did anyone state implicitly or explicitly that you do.

        <b

--
Let us pray:



 
 
 

volatile -- what does it mean in relation to member functions?

Hi,

'volatile' has an effect similar to 'const' for constructed types. Through a
'volatile' object, you can only access 'volatile' members (except for member
variables of native type; see below).

Because volatile qualifier doesn't apply the same way on native types and on
constructed types. On native types (such as 'int'), it simply tells the
compiler not to optimize the variable access. On constructed types, it acts
just like const - restrict access to qualifed members.

const_cast can be used to get rid of the volatile qualifier as well.
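
A small illustration (names made up):

    struct Widget {
        int n;
        void Touch() {}              // non-volatile member function
        void Inspect() volatile {}   // volatile member function
    };

    void demo(volatile Widget &w)
    {
        w.Inspect();                      // OK: volatile member, volatile object
        // w.Touch();                     // error: would discard 'volatile'
        const_cast<Widget &>(w).Touch();  // OK once volatile is cast away
        w.n = 3;                          // OK: data member access (treated as
                                          // a volatile access to an int)
    }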

I suggest the following article:
http://www.cuj.com/experts/1902/alexandr.htm?topic=experts

--Bertin

