Fine. Give me more details about performance. Reducing the number of switches is expected to reduce communication delay, which benefits MPI. To what extent will the smaller delay improve performance when running a CFD application? Is this another advantage of a ring-topology InfiniBand network?
The performance improvement due to the reduced communication delay in a ring topology compared to a star topology depends largely on the specifics of your CFD application.
With InfiniBand, the primary contributor to latency is usually not the time a packet spends traversing the physical network (typically less than a few microseconds end to end), but the time it spends in the host-side software stack. The overhead of the MPI library and driver layers in sending and receiving messages can be a significant component of the total latency.
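One way to see what that end-to-end latency actually is on your system, software stack included, is a small ping-pong microbenchmark. The sketch below is purely illustrative: the 8-byte payload, iteration count, and use of ranks 0 and 1 are arbitrary choices, not anything specific to your cluster.

```c
/* Minimal MPI ping-pong sketch to estimate end-to-end small-message
 * latency between rank 0 and rank 1 (software stack included).
 * Run with at least 2 ranks; message size and iteration count are
 * arbitrary illustrative choices. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char buf[8] = {0};                 /* 8-byte payload: latency-bound */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("one-way latency ~ %.2f us\n", (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```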
MPI-based CFD applications often involve large-scale, tightly synchronized computation. If your application frequently exchanges small amounts of data between nodes (such as boundary values or synchronization barriers), small per-message latency improvements can accumulate into a noticeable overall speedup; a typical example of that pattern is the halo exchange sketched below.
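As a concrete (and deliberately simplified) illustration, here is what a 1-D boundary-value ("halo") exchange often looks like. The periodic layout, buffer sizes, and function name are assumptions for the sketch, not your code.

```c
/* Sketch of a 1-D halo (boundary-value) exchange around a logical ring of
 * ranks -- the kind of frequent small-message pattern where per-message
 * latency adds up. Buffer layout and periodic neighbors are illustrative. */
#include <mpi.h>

void exchange_halos(double *u, int n, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int left  = (rank - 1 + size) % size;   /* periodic neighbors */
    int right = (rank + 1) % size;

    /* u[0] and u[n-1] are ghost cells; u[1] and u[n-2] are owned boundary values. */
    MPI_Sendrecv(&u[n - 2], 1, MPI_DOUBLE, right, 0,
                 &u[0],     1, MPI_DOUBLE, left,  0,
                 comm, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  1,
                 &u[n - 1], 1, MPI_DOUBLE, right, 1,
                 comm, MPI_STATUS_IGNORE);
}
```

Each call moves only a handful of bytes, so its cost is almost entirely latency; when the exchange happens every time step, shaving microseconds per message is what accumulates.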
However, if your CFD application spends most of its time in computation and communicates relatively infrequently (typical of methods where each node can work on its own part of the domain for long stretches), the improvement may be marginal.
Furthermore, if your CFD application uses collective operations such as MPI_Bcast or MPI_Reduce, the MPI library's collective algorithms may already be tuned to the network topology; and if it uses non-blocking sends/receives, it can overlap communication with computation and hide much of the latency, as in the sketch below.
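A minimal sketch of that overlap pattern, assuming the same 1-D halo layout as above; compute_interior and compute_boundary are placeholder names for your own kernels, not real routines.

```c
/* Sketch of overlapping halo communication with interior computation
 * using non-blocking MPI calls. Placeholder kernels stand in for the
 * application's own update routines. */
#include <mpi.h>

void compute_interior(double *u, int n);   /* placeholder: cells not touching halos */
void compute_boundary(double *u, int n);   /* placeholder: cells needing fresh halos */

void step_with_overlap(double *u, int n, int left, int right, MPI_Comm comm)
{
    MPI_Request reqs[4];

    /* Post receives and sends for the ghost cells first... */
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, comm, &reqs[0]);
    MPI_Irecv(&u[n - 1], 1, MPI_DOUBLE, right, 1, comm, &reqs[1]);
    MPI_Isend(&u[n - 2], 1, MPI_DOUBLE, right, 0, comm, &reqs[2]);
    MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, comm, &reqs[3]);

    /* ...then update interior cells that do not depend on the halos,
     * hiding much of the communication latency behind computation. */
    compute_interior(u, n);

    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    /* Finally update the cells that needed the freshly received halos. */
    compute_boundary(u, n);
}
```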
So, lower latency can potentially be an advantage of a ring topology, but the magnitude of that advantage depends heavily on the specifics of your CFD code. Benchmarking on your own workload is the only reliable way to quantify the benefit; a simple starting point is sketched below.
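For example, you could wrap the communication phase of your existing solver loop with MPI_Wtime to see how much of the runtime is actually latency-sensitive. halo_exchange and update below are placeholder names for your application's own routines.

```c
/* Sketch of instrumenting the communication phase of a solver loop with
 * MPI_Wtime, to see what fraction of runtime depends on message latency.
 * halo_exchange and update are placeholders for the app's own routines. */
#include <mpi.h>
#include <stdio.h>

void halo_exchange(void);   /* placeholder: the application's exchange routine */
void update(void);          /* placeholder: the local compute kernel */

void run_steps(int nsteps, MPI_Comm comm)
{
    double comm_time = 0.0, total = -MPI_Wtime();

    for (int step = 0; step < nsteps; ++step) {
        double t = -MPI_Wtime();
        halo_exchange();                 /* communication phase */
        comm_time += t + MPI_Wtime();

        update();                        /* computation phase */
    }

    total += MPI_Wtime();

    int rank;
    MPI_Comm_rank(comm, &rank);
    if (rank == 0)
        printf("communication: %.1f%% of runtime\n", 100.0 * comm_time / total);
}
```

If communication turns out to be only a few percent of the runtime, the topology's latency advantage will barely show; if it is a large fraction and dominated by small messages, the reduced delay is much more likely to pay off.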