Here at TotalSim, we have over 1400 cores, in a range of cluster configurations, crunching numbers 24/7. Choosing the right CFD hardware is therefore essential, and the solution must be cost effective. For us, reducing the cost of CFD hardware allows us to pass the saving on to our customers; for others in a competitive environment, lower hardware costs mean more computational resource within a given budget. That allows more simulations, or higher fidelity simulations, to be carried out, and ultimately puts better products on the market. Coupled with open source codes, low cost hardware is bringing computational modelling capability to smaller firms who may traditionally have been priced out of this area.
Selecting components for a build can be difficult, which is why at TotalSim we carefully hand-pick and design our recipe with efficiency in mind. To make sure we aren't wasting a penny, we ensure that no component is over- or under-spec'd relative to the others. This reduces bottlenecks and lets us optimise the recipe for CFD: as in many situations, performance is limited by the weakest link, and determining which link needs improvement is key to a successful CFD hardware recipe. There are many considerations when designing a cluster recipe, to name a few:
- Memory (Clock speed, ECC/Non ECC, number of DIMMs)
- CPU (Clock speed, number of cores, number of memory channels)
- Interconnect (Gigabit Ethernet? 10 Gigabit Ethernet? InfiniBand?)
Through experience, we have found that memory can be a very large bottleneck on performance. The main cause is the number of memory channels available to the CPU, which limits memory bandwidth. From testing, we have found it becomes harder to balance memory utilisation and prevent a bottleneck when there are more than 6 cores on a motherboard.
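As a rough illustration of why channel count matters, the theoretical per-core bandwidth can be sketched as below. The figures use peak DDR3 transfer rates and ignore real-world overheads, so they are illustrative assumptions rather than measured numbers:

```python
# Sketch of theoretical per-core memory bandwidth (peak DDR3 rates,
# ignoring real-world overheads -- illustrative, not measured).

def per_core_bandwidth_gb_s(mem_rate_mt_s, channels, cores):
    """Peak bandwidth per core in GB/s: MT/s x 8 bytes per transfer x channels."""
    channel_gb_s = mem_rate_mt_s * 8 / 1000.0  # e.g. DDR3-1600 -> 12.8 GB/s
    return channels * channel_gb_s / cores

# Four channels of DDR3-1600 shared by increasing core counts:
print(per_core_bandwidth_gb_s(1600, 4, 6))   # ~8.53 GB/s per core
print(per_core_bandwidth_gb_s(1600, 4, 12))  # ~4.27 GB/s per core
```

Doubling the core count on the same board halves the bandwidth available to each core, which is why adding cores without adding memory channels quickly creates a bottleneck.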
Another way we reduce the cost of our CFD hardware is by assembling our systems ourselves. Although time consuming, this helps us get to know our hardware recipe better and allows us to guarantee the quality of our cluster builds.
To illustrate the performance of a bespoke CFD solution versus a generic HPC system, we had the opportunity to benchmark one of our clusters against a system provided by HP. The specifications of the two clusters are shown below:
|                        | TotalSim Cluster      | HP Cluster                |
|------------------------|-----------------------|---------------------------|
| CPU                    | Intel Xeon E5-1650 v2 | 2 x Intel Xeon E5-2680 v2 |
| Memory                 | 4 x 8GB 1600MHz       | 8 x 16GB 1866MHz          |
| Number of Nodes / 48 Cores | 8                 | 4                         |
Note: a range of core/node distributions was tested on the HP cluster, from all cores per node (16 cores/node, 3 nodes) down to the best-performing configuration (12 cores/node, 4 nodes).
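The tested distributions line up with a simple bandwidth estimate. Assuming four DDR3 memory channels per socket (Intel's published figure for both CPUs) and peak transfer rates, the per-core bandwidth works out roughly as follows. This is an illustrative calculation, not a measurement:

```python
# Theoretical per-core memory bandwidth for the two benchmarked configurations.
# Assumes 4 DDR3 channels per socket (Intel's spec for both CPUs) and peak
# transfer rates; real sustained bandwidth will be lower.

def node_bandwidth_gb_s(mem_rate_mt_s, channels):
    return channels * mem_rate_mt_s * 8 / 1000.0  # MT/s x 8 bytes -> GB/s

totalsim_node = node_bandwidth_gb_s(1600, 4)  # single-socket E5-1650 v2
hp_node = node_bandwidth_gb_s(1866, 8)        # dual-socket E5-2680 v2

print(totalsim_node / 6)   # TotalSim, 6 cores/node    -> ~8.5 GB/s per core
print(hp_node / 16)        # HP, 16 cores/node         -> ~7.5 GB/s per core
print(hp_node / 12)        # HP, 12 cores/node (best)  -> ~9.95 GB/s per core
```

Dropping to 12 cores/node lifts the HP cluster's per-core bandwidth above the TotalSim figure, which is consistent with that being the best-performing HP configuration.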
On paper, you would expect the HP solution to perform significantly better: it has more, faster memory, a faster interconnect and can make more use of on-board communication. Benchmarking a 62 million cell case on 48 cores gave the following results:
[Benchmark results chart: Mesh Time (secs), Solve Time (secs) and Job Duration (hours) for each cluster]
The HP cluster performed comparatively well during the solve stage, but its meshing stage took significantly longer than on the TotalSim cluster. Overall, the total job time was shorter on the TotalSim cluster, despite its apparently lower specification. Given that the best HP performance was achieved by reducing the number of cores per board by a quarter, the importance of designing the system as a whole to remove bottlenecks is clear.
The TotalSim cluster has double the nodes of the HP cluster; however, there are the same number of CPUs across the 8 TotalSim nodes and the 4 HP nodes, because the HP cluster uses dual-socket motherboards. The HP CPU (E5-2680 v2) costs almost three times as much as the TotalSim CPU (E5-1650 v2)*, which highlights the potential savings that a bespoke, tailored CFD hardware solution can provide.
* Prices at the time of writing, based on Intel ARK tray costs converted from USD to GBP: the E5-1650 v2 comes to £378.96 and the E5-2680 v2 to £1119.99.
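To put the price difference in context, a quick back-of-the-envelope calculation of the CPU cost to reach 48 cores with each recipe, using the tray prices quoted above (CPUs only, excluding memory, boards and interconnect):

```python
# CPU cost to reach 48 cores with each recipe, using the quoted tray prices
# (GBP). Node counts are from the benchmark: 8 single-socket TotalSim nodes
# vs 4 dual-socket HP nodes.

E5_1650_V2 = 378.96   # TotalSim CPU, 1 per node
E5_2680_V2 = 1119.99  # HP CPU, 2 per node

totalsim_cpu_cost = 8 * 1 * E5_1650_V2
hp_cpu_cost = 4 * 2 * E5_2680_V2

print(f"TotalSim: £{totalsim_cpu_cost:.2f}")  # £3031.68
print(f"HP:       £{hp_cpu_cost:.2f}")        # £8959.92
```

For the same 48 cores, the HP recipe spends nearly three times as much on CPUs alone, before any other component is considered.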