Which parallel programming APIs do you use?

Time: 2021-08-31 13:49:10

I'm trying to get a grip on how people actually write parallel code today, given the immense importance of multicore and multiprocessor hardware. To me, it looks like the dominant paradigm is pthreads (POSIX threads), which is native on Linux and available on Windows. HPC people tend to use OpenMP or MPI, but there do not seem to be many of them here on *. Or do you rely on Java threading, Windows threading APIs, etc. rather than the portable standards? What, in your opinion, is the recommended way to do parallel programming?

Or are you using more exotic things like Erlang, CUDA, RapidMind, CodePlay, Oz, or even dear old Occam?

Clarification: I am looking for solutions that are genuinely portable and applicable to platforms such as Linux and the various Unixes, on various host architectures. Windows is a rare case that is nice to support. So C# and .NET are really too narrow here; the CLR is a cool piece of technology, but could they PLEASE release it for Linux hosts so that it would be as prevalent as, say, the JVM, Python, Erlang, or any other portable language.

C++ or JVM-based: probably C++, since JVMs tend to hide performance.

MPI: I would agree that even the HPC people see it as a hard-to-use tool -- but for running on 128,000 processors, it is the only scalable solution for the problems where map/reduce does not apply. Message passing has great elegance, though, as it is the only programming style that seems to scale really well to local-memory/AMP, shared-memory/SMP, and distributed run-time environments.

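To make the message-passing point concrete, here is a minimal sketch in Python (the language is only a stand-in; the same structure maps onto MPI ranks with MPI_Send/MPI_Recv, or onto machines on a network): two processes share no memory and communicate only through an explicit channel.

```python
# Minimal message-passing sketch: no shared state, only explicit messages.
from multiprocessing import Process, Pipe

def worker(conn):
    data = conn.recv()       # blocking receive, like MPI_Recv
    conn.send(sum(data))     # send the result back, like MPI_Send
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send([1, 2, 3, 4])   # explicit message, no shared memory
    print(parent.recv())        # 10
    p.join()
```

Because all communication is explicit, the same program shape works whether the two endpoints are cores, sockets, or separate machines, which is exactly why the style scales across AMP, SMP, and distributed environments.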
An interesting new contender is MCAPI, but I do not think anyone has had time to gain any practical experience with it yet.

So overall, the situation seems to be that there are a lot of interesting Microsoft projects that I did not know about, and that Windows API or pthreads are the most common implementations in practice.

20 solutions

#1


10  

MPI isn't as hard as most people make it seem. Nowadays I think a multi-paradigm approach is best suited to parallel and distributed applications: use MPI for node-to-node communication and synchronization, and either OpenMP or pthreads for the more granular parallelization. Think MPI for each machine, and OpenMP or pthreads for each core. For the near future, this should scale a little better than spawning a new MPI process for each core.

Perhaps for dual- or quad-core machines right now, spawning a process for each core won't have that much overhead, but as we approach more and more cores per machine, where the cache and on-die memory aren't scaling as much, it will be more appropriate to use a shared-memory model.

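The hybrid layout described above can be sketched roughly like this; Python's multiprocessing and threading only stand in for the structure (in a real HPC code the outer layer would be MPI ranks and the inner layer OpenMP or pthreads):

```python
# Hybrid sketch: one process per "node" (coarse-grained), several threads
# inside each process for the finer-grained work.
from multiprocessing import Pool
from threading import Thread

def node_task(chunk):
    # Inner, fine-grained parallelism: one thread per element of the chunk.
    results = [0] * len(chunk)
    def work(idx, x):
        results[idx] = x * x
    threads = [Thread(target=work, args=(i, x)) for i, x in enumerate(chunk)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

if __name__ == "__main__":
    chunks = [[1, 2], [3, 4], [5, 6]]      # one chunk per "node"
    with Pool(len(chunks)) as pool:         # outer, coarse-grained layer
        print(sum(pool.map(node_task, chunks)))   # 91
```

The point is the shape, not the library: message-passing between heavyweight workers at the top, shared-memory threads underneath.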
#2


6  

I'd recommend OpenMP. Microsoft has put it into the Visual C++ 2005 compiler, so it's well supported, and you don't need to do anything other than compile with the /openmp switch.

It's simple to use, though obviously it doesn't do everything for you; then again, nothing does. I use it for running parallel for loops, generally without any hassle; for more complex things I tend to roll my own (e.g. I have code from ages ago that I cut, paste, and modify).

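For readers who haven't seen the parallel-for pattern, here is the idea sketched in Python rather than OpenMP: each iteration of the loop body is independent, so it can be mapped over a pool of workers. (In C/C++ this whole example collapses to a single `#pragma omp parallel for reduction(+:sum)` above an ordinary loop.)

```python
# Data-parallel loop: an independent loop body mapped over a worker pool.
from multiprocessing import Pool

def body(i):
    # the work done by one loop iteration
    return i * i

if __name__ == "__main__":
    with Pool() as pool:                    # one worker per core by default
        results = pool.map(body, range(10)) # the "parallel for"
    print(sum(results))                     # 285
```

This is exactly the class of code OpenMP targets: the loop was already correct serially, and the annotation (here, the pool) only changes how it is scheduled.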
You could try Cilk++ which looks good, and has an e-book "How to Survive the Multicore Software Revolution".

Both these kinds of systems try to parallelize serial code, i.e. take a for loop and run it on all the cores simultaneously in as easy a way as possible. They don't tend to be general-purpose thread libraries. (E.g. a research paper (PDF) described the performance of different types of thread pools implemented in OpenMP and suggested two new operations be added to it: yield and sleep. I think they're missing the point of OpenMP a little there.)

Since you mentioned OpenMP, I assume you're talking about native C++, not C# or .NET.

Also, if the HPC people (who I assume are experts in this kind of domain) seem to be using OpenMP or MPI, then this is what you should be using, not what the readership of SO is!

#3


4  

We've started looking at the Parallel Extensions from Microsoft; they're not released yet, but they're certainly showing potential.

#4


3  

I've used ACE to allow developers to use POSIX (or Windows) style threading on any platform.

#5


2  

Parallel FX Library (PFX) - a managed concurrency library being developed by a collaboration between Microsoft Research and the CLR team at Microsoft for inclusion with a future revision of the .NET Framework. It is composed of two parts: Parallel LINQ (PLINQ) and Task Parallel Library (TPL). It also consists of a set of Coordination Data Structures (CDS) - a set of data structures used to synchronize and co-ordinate the execution of concurrent tasks. The library was released as a CTP on November 29, 2007 and refreshed again in December 2007 and June 2008.

Not very much experience though...

#6


2  

Please be aware that the answers here are not going to be a statistically representative answer to "actually using". Already I see a number of "X is nice" answers.

I've personally used Windows threads on many a project. The other API I have seen in wide use is pthreads. On the HPC front, MPI is still taken seriously by the people using it. <subjective> I don't -- it combines all the elegance of C++ with the performance of JavaScript. It survives because there is no decent alternative. It will lose to tightly coupled NUMA machines on one side and Google-style map-reduce on the other. </subjective>

#7


2  

More Data Parallel Haskell would be nice, but even without it, GHC>6.6 has some impressive ability to parallelize algorithms easily, via Control.Parallel.Strategies.

#8


1  

For .NET I have used Retlang with great success. For the JVM, Scala is great.

#9


1  

How about OpenCL?

#10


1  

Very much depends on your environment.

For plain old C, nothing beats POSIX threads.

For C++ there is a very good free threading library from boost.org.

For Java, just use native Java threading.

You may also look at ways other than threading to achieve parallelism, like dividing your application into client and server processes and using asynchronous messaging to communicate. Done properly, this can scale up to thousands of users on dozens of servers.

It's also worth remembering that if you are using a Windows MFC, GNOME, or Qt windowing environment, you are automatically in a multithreaded environment. If you are using Apache, IIS, or J2EE, your application is already running inside a multi-threaded, multi-process environment.

#11


1  

Most of the concurrent programs I have written were in Ada, which supports parallelism natively in the language. One of the nice benefits of this is that your parallel code is portable to any system with an Ada compiler. No special library required.

#12


0  

+1 for PLINQ

Win32 Threads, Threadpool and Fibers, Sync Objects

#13


0  

I maintain a concurrency link blog that has covered a bunch of these over time (and will continue to do so):

http://concurrency.tumblr.com

#14


0  

I only know Java so far; its multithreading support has worked well for me.

#15


0  

I've used OpenMP a lot, mainly due to its simplicity, portability, and flexibility. It supports multiple languages, even almighty C++/CLI :)

#16


0  

I use MPI and like it very much. It does force you to think about the memory hierarchy, but in my experience, thinking about such things is important for high performance anyway. In many cases, MPI can largely be hidden behind domain-specific parallel objects (e.g. PETSc for solving linear and nonlinear equations).

#17


0  

PyCUDA... nothing like 25,000 active threads :) [warp-scheduled with scoreboarding]. CUDA 2 has stream support, so I'm not sure what StreamIt would bring. The CUDA MATLAB extensions look neat, as do PLUTO and the upcoming PetaBricks from MIT.

As for the others: Python's threading is lacking; MPI etc. are complicated, and I don't have a cluster, but I suppose they achieve what they were built for. I stopped C# programming before I got to threading apartments (probably a good thing).

#18


0  

It's not parallel per se and does not have a distributed model, but you can write highly concurrent code on the JVM using Clojure, and you get the plethora of Java libraries available to you. You would have to implement your own parallel algorithms on top of Clojure, but that should be relatively easy. I repeat: it does not yet have a distributed model.

#19


0  

GThreads from the GLib library (http://library.gnome.org/devel/glib/stable/glib-Threads.html) compile down to pthreads, so you don't lose any performance. GLib also gives you very powerful thread pools and message queues between threads. I have used them successfully several times and have been very happy with the available features.

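The thread pool plus inter-thread message queue pattern has close analogues in most language standard libraries; here is a rough Python sketch of the same structure (this is not the GLib API itself, just the pattern it provides):

```python
# Thread pool + message queue: a pooled worker pushes results onto a shared
# queue; the consumer drains it until a sentinel arrives.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def produce(out_q, items):
    for item in items:
        out_q.put(item * 2)   # push a message onto the shared queue
    out_q.put(None)           # sentinel: no more messages

if __name__ == "__main__":
    q = Queue()
    with ThreadPoolExecutor(max_workers=4) as pool:
        pool.submit(produce, q, [1, 2, 3])
        received = []
        while (msg := q.get()) is not None:   # blocks until a message arrives
            received.append(msg)
    print(received)           # [2, 4, 6]
```

The queue does the synchronization, so neither side needs explicit locks; that is the main convenience the GLib facilities give you in C.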
#20


0  

I use OpenCL. I think it is easier to use compared to MPI. I have also used MPI before, as a requirement for my parallel and distributed computing course, but I think you have to do too much manual labor. I am going to start working in CUDA in a few days. CUDA is very similar to OpenCL, but the issue is that CUDA only works on NVIDIA products.
