With InfiniBand-enabled Windows Azure A8 nodes, how do I send N bytes from one node and receive those N bytes on another node?

Date: 2021-07-16 22:47:32

I like InfiniBand's promise of a 40 Gbit/s network. My workload does not map onto the MPI model of one master node plus slaves, and if possible I would prefer not to use MPI at all. I need a simple connect/send/receive/close API (or its async variants). Yet reading the MS Azure docs and the Microsoft HPC Pack docs, I can't find any API for C/C++ or .NET that would allow using InfiniBand as a transport for my application. So my question is simple: how do I use InfiniBand to connect to other nodes, send data packets to them, and receive on the other end? (Something like a socket API.)


A connect/send/receive/close tutorial for ND-SPI on Azure or DAPL-ND on Azure is what I am looking for.


2 Answers

#1


1  

I agree with Hristo's comment that it will be MUCH easier to use the higher-level API that MPI provides, rather than a "native" IB library.
And just to clarify: MPI does not impose a master/slave structure. Once all the processes are up and share a communicator, you have all the flexibility in the world. Anybody can send data to anybody. And with MPI-2 you have one-sided communication, where one worker can essentially reach into another's memory.

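To make the "anybody can send data to anybody" point concrete, here is a minimal MPI sketch: rank 1 sends a buffer of bytes directly to rank 0 with plain point-to-point calls, no master process involved. It assumes an MPI implementation (e.g. MS-MPI on Azure, or MPICH/Open MPI elsewhere) is installed, and would be launched with something like `mpiexec -n 2`:

```c
/* Point-to-point MPI: rank 1 sends N bytes to rank 0.
 * No master/slave structure; any rank may target any other rank.
 * Build/run (assuming an MPI install): mpicc demo.c -o demo && mpiexec -n 2 ./demo
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    int rank;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        strcpy(buf, "hello from rank 1");
        /* send strlen+1 bytes to rank 0, message tag 0 */
        MPI_Send(buf, (int)strlen(buf) + 1, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        /* blocking receive from rank 1, tag 0 */
        MPI_Recv(buf, sizeof buf, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 got: %s\n", buf);
    }

    MPI_Finalize();
    return 0;
}
```

On the Azure A8 nodes, the MPI stacks Microsoft ships are set up to carry this traffic over InfiniBand (via Network Direct) transparently, which is much of the argument for taking the MPI route rather than programming the fabric directly.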

#2


0  

...I can't find any API for C/C++ or .NET that would allow using InfiniBand as a transport for my application. So my question is simple: how do I use InfiniBand to connect to other nodes, send data packets to them, and receive on the other end?


The C API for direct access to InfiniBand is known as 'verbs'.


Among the numerous resources online to introduce this topic, I found http://blog.zhaw.ch/icclab/infiniband-an-introduction-simple-ib-verbs-program-with-rdma-write/ to be relatively approachable.

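To give a flavor of what programming verbs directly involves, here is a heavily abbreviated sketch of the usual libibverbs setup sequence (Linux, link with `-libverbs`). It only illustrates the resource-allocation calls; the queue-pair state transitions and the out-of-band exchange of addressing information needed to actually connect two nodes are elided, and real code needs error handling on every call:

```c
/* Sketch of the libibverbs "verbs" setup sequence.
 * Illustrative only: QP state transitions (INIT->RTR->RTS) and the
 * out-of-band exchange of QP numbers/LIDs with the peer are omitted.
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);              /* protection domain */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    char *buf = malloc(4096);
    /* register memory so the HCA is allowed to DMA into/out of it */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);

    struct ibv_qp_init_attr qpa;
    memset(&qpa, 0, sizeof qpa);
    qpa.send_cq = cq;
    qpa.recv_cq = cq;
    qpa.qp_type = IBV_QPT_RC;            /* reliable connected, TCP-like */
    qpa.cap.max_send_wr  = 16;
    qpa.cap.max_recv_wr  = 16;
    qpa.cap.max_send_sge = 1;
    qpa.cap.max_recv_sge = 1;
    struct ibv_qp *qp = ibv_create_qp(pd, &qpa);

    /* ...transition the QP to RTS, exchange QP numbers with the peer,
       then ibv_post_send()/ibv_post_recv() and reap completions
       with ibv_poll_cq()... */

    ibv_destroy_qp(qp);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

As the amount of boilerplate suggests, this is a much lower-level programming model than sockets; on Windows the analogous native interface is ND-SPI rather than libibverbs.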

The ultimate authority on InfiniBand software is OpenFabrics; the OFED website links to documentation and downloads.


I noticed under "OFS for Windows" that there is a link to Overview of Network Direct Kernel Provider Interface (NDKPI), which might meet your needs, but I have never used it because I do not use Windows.

