Effective Concurrency: Prefer Using Active Objects Instead of Naked Threads

This month’s Effective Concurrency column, “Prefer Using Active Objects Instead of Naked Threads,” is now live on DDJ’s website.

From the article:

… Active objects dramatically improve our ability to reason about our thread’s code and operation by giving us higher-level abstractions and idioms that raise the semantic level of our program and let us express our intent more directly. As with all good patterns, we also get better vocabulary to talk about our design. Note that active objects aren’t a novelty: UML and various libraries have provided support for active classes. Some actor-based languages already have variations of this pattern baked into the language itself; but fortunately, we aren’t limited to using only such languages to get the benefits of active objects.

This article will show how to implement the pattern, including a reusable helper to automate the common parts, in any of the popular mainstream languages and threading environments, including C++, C#/.NET, Java, and C/Pthreads.

I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns:

1 The Pillars of Concurrency (Aug 2007)

2 How Much Scalability Do You Have or Need? (Sep 2007)

3 Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)

4 Apply Critical Sections Consistently (Nov 2007)

5 Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)

6 Use Lock Hierarchies to Avoid Deadlock (Jan 2008)

7 Break Amdahl’s Law! (Feb 2008)

8 Going Superlinear (Mar 2008)

9 Super Linearity and the Bigger Machine (Apr 2008)

10 Interrupt Politely (May 2008)

11 Maximize Locality, Minimize Contention (Jun 2008)

12 Choose Concurrency-Friendly Data Structures (Jul 2008)

13 The Many Faces of Deadlock (Aug 2008)

14 Lock-Free Code: A False Sense of Security (Sep 2008)

15 Writing Lock-Free Code: A Corrected Queue (Oct 2008)

16 Writing a Generalized Concurrent Queue (Nov 2008)

17 Understanding Parallel Performance (Dec 2008)

18 Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)

19 volatile vs. volatile (Feb 2009)

20 Sharing Is the Root of All Contention (Mar 2009)

21 Use Threads Correctly = Isolation + Asynchronous Messages (Apr 2009)

22 Use Thread Pools Correctly: Keep Tasks Short and Nonblocking (Apr 2009)

23 Eliminate False Sharing (May 2009)

24 Break Up and Interleave Work to Keep Threads Responsive (Jun 2009)

25 The Power of “In Progress” (Jul 2009)

26 Design for Manycore Systems (Aug 2009)

27 Avoid Exposing Concurrency – Hide It Inside Synchronous Methods (Oct 2009)

28 Prefer structured lifetimes – local, nested, bounded, deterministic (Nov 2009)

29 Prefer Futures to Baked-In “Async APIs” (Jan 2010)

30 Associate Mutexes with Data to Prevent Races (May 2010)

31 Prefer Using Active Objects Instead of Naked Threads (June 2010)

8 thoughts on “Effective Concurrency: Prefer Using Active Objects Instead of Naked Threads”

  2. Thanks for yet another great article Herb.

    I think that instead of using Message objects (for pre-C++0x developers) it could be beneficial to follow the idiom of generic function callbacks (your GotW #83 et al.).

    This way you can make the Active object and Backgrounder focus on their core logic without message objects, with cleaner code as a result.

    Inspired by your Effective Concurrency Europe 2009 seminar, I ended up with something like this :)

    I.e. (cut-down & pseudo-ish version):
    class Active {
    private:
      ….
      void run() {
        while (!done) {
          smart_ptr msg = mq.pop();
          msg(); // executes the message
        }
      }

    public:
      ….
      ~Active() {
        send(bind(&Active::DoDone, this));
        thd->join();
      }
      ….
    };

    class Background {
    public:
      void Save(smart_ptr data) {
        active->send(bind(&Background::DoSave, this, data));
      }
    };

    Standard pre-C++0x, but still with a clean look thanks to a template touch :)

    Cheers

  3. @Thomas: Yes, function follows the rule for all function objects: they are intended to be cheap to copy. So if you want to avoid copying state, hold it by pointer/shared_ptr.

  4. Very nice and detailed article. So nice that I went ahead and tried to use it in one of my programs.
    From this experience, I have some concerns about the C++0x way. While I really like its terseness, it seems a bit more error-prone and can become inefficient if you don’t pay close attention.

    In my case I wanted to pass medium-sized data to an active object. By medium-sized I mean a piece of data big enough that you can afford to copy it once for the purpose of thread isolation, but NOT many times (for example, a buffer of char[4096]).

    It worked very easily with an OO-style message.
    struct MMediumSized
    {
      MediumSizedData data;
      MMediumSized(const MediumSizedData& data) : data(data) {}
      void Execute() { … }
    };

    void foo(const MediumSizedData& data)
    {
      a.Send(std::unique_ptr<MMediumSized>(new MMediumSized(data)));
    }

    The Send() function forced me to do the right thing, thanks to taking a unique_ptr as its parameter. The MediumSizedData is copied *once* into the message (in the constructor of MMediumSized); then the unique_ptr to the message is moved into the message queue, moved out, executed, and automatically destroyed.

    I hit many more problems when moving to lambdas and std::function. My first impulse was to do:

    void foo(const MediumSizedData& data)
    {
    a.Send( [&data](){ …} );
    }
    I didn’t pay enough attention to the fact that the data was captured by reference, so there was no copy at all, which caused a data race in the program.
    Then I changed it to:

    void foo(const MediumSizedData& data)
    {
    a.Send( [data]() {…} );
    }

    The program was correct, but way slower than the OO version. I put some traces into the MediumSizedData copy constructor and realized that every time I called foo() and the message was executed, MediumSizedData was in fact copied 5 times! It seems that std::function really loves to make copies. I cannot figure out exactly what happens in the internals of std::function, but it seems that my lambda gets copied a LOT inside.

    So I finally realized that you have to use a shared_ptr to get the right semantics:
    void foo(const MediumSizedData& data)
    {
      std::shared_ptr<MediumSizedData> data_copy(new MediumSizedData(data));
      a.Send( [data_copy]() {…} );
    }
    I would prefer a unique_ptr, but unfortunately it seems that std::function always makes a copy of its internals and never a move.

    TL;DR:
    In the OO style it’s very clear that you dynamically allocate a message and then only manipulate and pass a *pointer* around, whereas the std::function way looks more mysterious. What really happens when I construct a std::function from a lambda? What really happens when I push a std::function into the message queue?
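The copy behavior this comment describes can be made visible with a small copy-counting payload (hypothetical names, not from the article or the commenter's program): capturing by value lets the payload be copied again whenever the lambda or the std::function holding it is copied, while capturing a shared_ptr copies the payload exactly once, up front.

```cpp
#include <functional>
#include <memory>

// Hypothetical payload that counts its own copies, standing in for the
// commenter's MediumSizedData.
struct Payload {
    static int copies;
    Payload() {}
    Payload(const Payload&) { ++copies; }
};
int Payload::copies = 0;

std::function<void()> make_by_value(const Payload& p) {
    // p is copied into the lambda, and copied again whenever the
    // std::function holding the lambda is copied.
    return [p]() { /* use p */ };
}

std::function<void()> make_by_shared(const Payload& p) {
    // One deliberate copy up front; afterwards only the shared_ptr
    // (cheap to copy) travels with the lambda and the std::function.
    std::shared_ptr<Payload> sp(new Payload(p));
    return [sp]() { /* use *sp */ };
}
```

Copying the std::function returned by make_by_value copies the Payload each time; copying the one returned by make_by_shared only bumps a reference count.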

  5. I’ve been hoping for some focus on actor-model-style concurrency and more focus on futures/messages, etc.
    Thanks Herb =)

  6. Hi,

    Excellent read.
    Are you planning to compile all the columns into a book?

    Thanks

Comments are closed.