In my lab, we have several servers used for simulation programs, but they work independently. Now I want to combine them into a cluster, using MPICH so they can communicate. There is a problem, though: the servers run different operating systems. Some of them are Red Hat and some are Ubuntu. On the MPICH homepage I saw that the downloads for these two operating systems are different, so is it possible to set up a cluster with different operating systems? And if so, how?
The reason I don't want to reinstall these servers is that there is too much data on them, and they are in use as I write this question.
1 Answer
It is not feasible to get this working properly. You should be able to get the same version of an MPI implementation manually installed on the different distributions, and the nodes might even talk to each other properly. But as soon as you try to run actual applications with dynamic libraries, you will get into trouble with different versions of shared libraries, glibc, and so on. You will be tempted to link everything statically or to build separate binaries for the different distributions. At the end of the day, you will just be chasing one issue after another.
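If you decide to try it anyway despite the caveats above, a reasonable first sanity check is a hello-world that reports which host each rank landed on. The sketch below uses only standard MPI calls; it assumes you have built the same MPICH version from source on every node, and the hostfile name `machinefile` is just an example:

```c
/* hello_mpi.c -- verify that ranks start on every node.
 *
 * Build and run (assuming identical MPICH versions on all nodes):
 *   mpicc -o hello_mpi hello_mpi.c
 *   mpiexec -f machinefile -n 4 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);    /* hostname of this node */

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

If even this fails on some nodes with shared-library or glibc errors, that is exactly the class of problem described above: the binary was built against libraries that the other distribution does not provide in compatible versions.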
As a side note, combining some servers with MPI does not make a High Performance Computing cluster. An HPC system has, for instance, sophisticated high-performance interconnects and a high-performance parallel file system.
Also note that a typical HPC application will run poorly on heterogeneous hardware (that is, where each node has a different CPU/memory configuration).