This should hopefully be a pretty self-explanatory post. What would be the best desktop setup to run PLAXIS 3D as fast as possible without being wasteful?
Currently I'm thinking:
AMD Ryzen 9 7900X, 12 cores (my understanding is that PLAXIS 3D can't really make use of more than 8 cores; is this correct?)
Corsair RAM, 2 x 16 GB, 6000 MHz CL30 (is there any benefit to faster RAM?)
For storage I was going to go with a 1 TB or 2 TB NVMe SSD. Is there any benefit to going with PCIe Gen 5 over a cheaper Gen 4?
From my own experience with larger 3D modelling in Plaxis and other software:
- CPU: More than 8 cores is a good choice, but the maximum clock speed per core also matters. As far as I know, Plaxis has no problem utilizing more than 8 cores during calculations, and it is good to have more if you want to run more than one phase in parallel (e.g. when branching out with different scenarios or ULS checks). I am not sure whether there is any significant difference between AMD and Intel CPUs. However, some calculation software tends to be optimized for Intel Xeon processors; I wonder myself how it is with Plaxis.
Note: Calculations are one thing, and Plaxis seems to handle multiple cores well there; however, tasks like creating geometry for staged construction, meshing, and the calculations "in between" the main calculations (e.g. pore pressures based on phreatic levels) do not make good use of stronger multicore processors. I had a case of a larger 3D model where preparing for calculations between phases took about 10 min (with the CPU barely utilized), while the main calculations took about 3 min per phase (which is where the CPU matters). Plaxis is far from well optimized for large models.
- RAM: For larger models, 32 GB is the minimum in my opinion. The system and background processes will probably use up about 10 GB. A model in the 200k-300k range, if you switch to the Pardiso solver, will use about 15-20 GB. Unfortunately, Plaxis does not allow running multiple instances to calculate several things at once, but with other software I used in the past I needed as much as 128 GB to run calculations smoothly with two models going at the same time.
- Disk space: Larger models with results can take up 50-100 GB for a single model (unpacked). You will rarely run calculations just once without needing to back up older files or alternative solutions. A 1 TB SSD is fine for storing the calculations you are currently working on, but for backups and archiving you might want an additional drive.
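To sanity-check the RAM sizing above, here is a rough back-of-envelope sketch in Python. The per-100k figure is inferred from the 15-20 GB for a 200k-300k model quoted above; it is my own assumption for illustration, not an official Bentley number:

```python
# Back-of-envelope workstation RAM estimate for a Plaxis 3D model.
# gb_per_100k is assumed (~15-20 GB at 200k-300k -> roughly 7.5 GB
# per 100k with the Pardiso solver); not a measured or official value.

def estimate_ram_gb(model_size: int,
                    gb_per_100k: float = 7.5,
                    system_overhead_gb: float = 10.0) -> float:
    """Total RAM ~= OS/background overhead + solver working set."""
    return system_overhead_gb + gb_per_100k * (model_size / 100_000)

print(round(estimate_ram_gb(250_000)))  # ~29 GB, so 32 GB really is the practical floor
```

Under these assumptions a 250k model lands just under 32 GB, which matches the "32 GB minimum" recommendation; a second concurrent model would clearly push past it.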
Many years ago we ran a benchmark and did not see much of an increase in performance above 8 cores. The difference was bigger when going from a few cores to more, but from 16 to 32 cores I would not expect the same improvement.
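The diminishing returns above 8 cores are consistent with Amdahl's law: if part of each run is serial (meshing, phase preparation, I/O), extra cores only speed up the parallel fraction. A small illustration, where the 90% parallel fraction is an assumed figure, not a measured Plaxis value:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
# p is the fraction of the run that parallelizes; 0.9 is an
# assumption for illustration, not a measured Plaxis number.

def speedup(cores: int, parallel_fraction: float = 0.9) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for n in (4, 8, 16, 32):
    print(n, round(speedup(n), 2))
# 4 -> 3.08, 8 -> 4.71, 16 -> 6.4, 32 -> 7.8
```

With p = 0.9, doubling from 16 to 32 cores gains only about 22%, while 4 to 8 cores gains over 50%, which is the same qualitative pattern the benchmark showed.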
We do not have any benchmarks and we most certainly cannot test all the CPUs that exist out there.
If you can provide us with some detailed statistics about your computation times, it would be a big help to other users as well: the type of problem you solved (number of nodes and elements, model used, total number of phases), computer configuration details, and the cost involved. We can publish this data on our Community page (with attribution, of course) if you consent.
No one is asking Bentley to test all CPUs; you just need to test the latest mainstream desktop CPUs. You could easily offload that to a third-party company to run the benchmarks for you. It is a very disappointing situation to have a software developer charging tens of thousands of dollars for a software package while providing zero benchmarking data. Has Bentley ever looked into hiring a company that specializes in benchmarking software performance on various hardware configurations?
Why should we, the users, provide Bentley with benchmarking data for free? Producing that data takes time and effort from our employees, which should be spent on billable work.