
I’ve been thinking a lot about continuations and directed graphs over the last couple of days, and about different ways to implement them.  I’ve also been looking more closely at std::thread, std::future and std::async to ensure I fully understand them.  So I decided I’d make a (small) multi-part series out of this and show some simple sketches of continuation implementations using the Parallel Pattern Library, std::future/std::async, and the Agents Library.

This is part 1 which will cover a simple implementation of ‘run_when’ and ‘wait_for_all’ using the PPL. 

First a simple dependency / continuation

Here’s an incredibly simple dependency, task 2 depends on task 1.

void ContinueableTask(int in){
    printf("building project: %d\n", in);
}

void SimpleContinuation(){
    ContinueableTask(1);

    //task 2 depends on task 1
    ContinueableTask(2);
}

 

There’s not a lot of concurrency here with only two tasks, yet we can still make this execution ‘parallel’ with a task_group as follows:
void SimpleContinuationTasks(){
    //task 1
    task_group task1;
    task1.run([](){ContinueableTask(1);});

    //task 2 depends on task 1
    task_group task2;
    task2.run([&](){
        task1.wait();
        ContinueableTask(2);
    });

    //wait for task 2
    task2.wait();
}

 

Making it look a bit cleaner

 
The code here isn’t too bad, but is a little awkward.  It would sure be nice to write something like this instead:
void SimpleContinuation(){
    auto task1 = run_task([](){ContinueableTask(1);});
    //task 2 depends on task 1
    auto task2 = run_when(task1, [](){ContinueableTask(2);});
    wait_for_all(task1, task2);
}

 

Implementing this is straightforward in the PPL (in part 2 you’ll see that it’s also straightforward with std::future and std::async, and that run_task looks *a lot* like std::async).

typedef shared_ptr<task_group> shared_task_ptr;

template<typename Func>
inline shared_task_ptr run_task(const Func& fn){
    shared_task_ptr tasks = make_shared<task_group>();
    tasks->run(fn);
    return tasks;
}

 

I’m using a shared_ptr here because task_group is non-copyable, and this allows me to return it by value without doing anything unnatural in the interface (like creating one and then passing it in by reference).

run_when is also pretty straightforward, here is a template implementation:

template<typename Func>
inline shared_task_ptr run_when(shared_task_ptr tasks, Func fn){
    tasks->wait();
    return run_task(fn);
}
 
and finally wait_for_all:
 
inline void wait_for_all(shared_task_ptr t1, shared_task_ptr t2){
    t1->wait();
    t2->wait();
}

Expanding this is straightforward

 
It should be pretty easy to see that you can expand run_when and wait_for_all with overloads for more parameters, and that writing an example with more complex dependencies is straightforward:
void MoreComplexContinuations(){
    auto p1 = run_task([](){ContinueableTask(1);});
    auto p2 = run_task([](){ContinueableTask(2);});
    auto p3 = run_task([](){ContinueableTask(3);});
    auto p4 = run_when(p1, [](){ContinueableTask(4);});
    auto p5 = run_when(p2, p3, [](){ContinueableTask(5);});
    auto p6 = run_when(p3, p4, [](){ContinueableTask(6);});
    wait_for_all(p1, p2, p3, p4, p5, p6);
}

 

Next time, part 2: std::async and std::future

In the next few days, I’ll post part 2 and show how easy it is to implement this pattern with std::async and std::future.


At the PDC I showed a demo of the new Parallel Stacks window in Visual Studio 2010, and how task view and method view can make navigating a multi-threaded program in the debugger more intuitive compared to the existing Threads and Call Stack windows (though these got better too).  I thought I’d take a moment to share the key points and the code I used for this.

The code is a simple deadlock of 5 tasks waiting on an event that is never set.  I don’t expect that you’ll be able to read the text in many of these pictures, but I believe the point will still be clear (and you can always try this in VS 2010 yourself).

Call Stack & Thread Windows

This is the way we’ve been debugging multi-threaded programs: via the Call Stack & Threads windows.  If you’ve been here before, you know it can be difficult to orient yourself and find your location when you have more than one thread.  It’s often a matter of poking through each thread in the Threads window and checking the Call Stack window.

This is one of the reasons why in VS2010 you can now expand the call stack inline and search in the Threads window.  Inline expansion and search can help significantly if you know what you’re looking for in a particularly deep stack, but for this particular issue it’s not the most efficient way of seeing what’s going on…

[screenshot: the Call Stack & Threads windows]

The Parallel Stacks Window

The Parallel Stacks window shows you all the stacks for all the threads at once.  To cope with the immense amount of screen real estate this requires, it trims the fully qualified call stack names (you can still see them in a tooltip) and groups related stacks, so you can start seeing the relationships and, in this case, hopefully why this app is deadlocked.

[screenshot: the Parallel Stacks window]

The Parallel Stacks Window: task view

Since this is an application built on tasks, we can set the filter to show only the running tasks, and now the picture starts getting clearer (and the text is almost readable).   This is because the task view filters out the runtime portions of the call stack and shows only the stack frames that are part of the task.  You can see that we have 4 tasks running, and it looks like they are waiting in event::wait.

[screenshot: the Parallel Stacks window, task view]

The Parallel Stacks Window: method view

Switching the focus to event::wait and toggling ‘method view’, which shows only the callers and callees of a particular method, makes this incredibly obvious.

[screenshot: the Parallel Stacks window, method view]

So here’s the code:

This was implemented with template recursion (thanks go to twoscomplement, whose Colossal Cave inspired tweet, “You are in a maze of twisty template overloads, all alike”, inspired this particular code example).

#include <memory>
#include <ppl.h>
#include <iostream>
using namespace ::std;
using namespace ::Concurrency;
event e;

template <int N>
void youre_in_a_maze()
{
    printf("%dth task\n", N);
    e.wait();
}

template <int N>
void of_template_overloads()
{
    youre_in_a_maze<N>();
}

template <int N>
void all_alike()
{
    of_template_overloads<N>();
}

template <int N>
struct colossal_task
{
    shared_ptr<colossal_task<N - 1>> task;
    task_group& m_tasks;
    colossal_task(task_group& tasks):m_tasks(tasks),task(new colossal_task<N - 1>(tasks))
    {
        m_tasks.run(*task);
    }
    void operator ()() const
    {
        all_alike<N>();
    }
};

template <>
struct colossal_task <0>
{
    task_group& m_tasks;
    colossal_task(task_group& tasks):m_tasks(tasks)
    {
    }
    void operator ()() const
    {
        printf("0th task\n");
    }
};

int main()
{
    task_group tasks;
    colossal_task<5> t(tasks);
    tasks.wait();
    return 0;
}

Recently I had an opportunity to do some refactoring in a raytracer that we use for demos at work.  I don’t know how you feel about refactoring, but I personally find it an immensely calming and rewarding experience because it allows me to make improvements to quality and design in a very incremental manner, even in old or large code bases.

I also really enjoy generic programming, and I wanted to share this refactoring because it is a simple example of how relying on a compile-time interface can reduce overall code.  It also doesn’t depend on techniques that can make generic programming inaccessible, like compile-time polymorphism, typelists or the curiously recurring template pattern.

It’s borderline too simple to write down, but I’m doing it anyway, so here it is…

Two similar classes: float3 and color3

Before the refactoring, the raytracer had 2 classes that were used all over the place in the code: float3 and color3.  Both are simple structs with 3 data members.  Their definitions are incredibly similar and looked like this:

struct color3{
   float r;
   float g;
   float b;
   color3(){}
   color3(float r_, float g_, float b_):r(r_),g(g_),b(b_){}
};
struct float3{
   float x;
   float y;
   float z;
   float3(){}
   float3(float x_, float y_, float z_):x(x_),y(y_),z(z_){}
};

Sadly, 2 sets of methods to operate on the classes

Unfortunately, both of these classes also initially had strongly typed supporting functions like ‘add’ which were incredibly similar, only the float3 version used x,y and z while the color version used r, g, b:

color3 add(color3& c1, color3& c2)
{
   return color3(c1.r + c2.r, c1.g + c2.g, c1.b + c2.b);
}
float3 add(float3& v1, float3& v2)
{
   return float3(v1.x + v2.x, v1.y + v2.y, v1.z + v2.z);
}

Unfortunately, while incredibly straightforward and seemingly simple, this isn’t very desirable: there were 4 or 5 functions like this that were essentially identical, with the only difference being the types and the member variable names.

These methods can be made generic

What I wanted was a simple way to write the add function once, without adding any size or instruction overhead to the structs.  Both of these structs are used frequently inside a very tight loop, so even small changes have a noticeable performance impact.  For example, one of the refactorings I did removed a single call to the copy constructor, and that increased performance by about 50ms per frame, ~12.5% (seriously): the scene I was looking at went from ~4 fps to ~5 fps on my quad core.

Here’s what I decided to do; I’ll explain in a moment.  First, I created a template function ‘add’:

template <class vec>
inline vec add(vec& v1, vec& v2)
{
    return vec(v1.first()  + v2.first(),
               v1.second() + v2.second(),
               v1.third()  + v2.third());
}

 
This template function works with two references of type ‘vec’, and it relies on the member functions first, second and third being present in vec because it uses them.
Implementing each of these methods is straightforward:

// implementation of first for float3
inline float first()
{
    return x;
}

// implementation of first for color3
inline float first()
{
    return r;
}

 
This may seem inefficient, but it turns out not to be. 
 
Because I’m relying on a compile-time interface rather than a virtual function, there is no additional overhead.  Here the return value is a simple built-in type, but even for larger classes, when the value is constructed as part of the return statement a compiler may optimize away the copy construction, and most do.  This is known as the return value optimization, and the C++ spec explicitly allows for it (section 12.8).
 

Did it make a difference?

As far as code reduction goes, I removed 7 member functions from one class, turned the other 7 into free functions, and added a total of six member functions for first, second and third.   So this was a minor net change from a lines-of-code perspective, but there are no longer 2 sets of functions with similar names and functionality.
 
I also made it possible to provide mock classes to those free functions for testing; before, they were strongly typed and this just wasn’t possible.  Now, if I want, I can test ‘add’ without relying on float3 or color3.
 
Additionally, if new functions are needed that work on these types, I no longer have to add them to both classes: I can simply write one function that relies on first, second and third, and it can instantly be used with both types.
 
Finally, if I only need read access to the properties, there is now no reason to expose the member variables publicly, and this is another big win because I can protect against potentially non-thread-safe data writes.
 
-Rick
