Best workstation specs for PLAXIS 3D

This should hopefully be a pretty self-explanatory post. What would be the best desktop setup to run PLAXIS 3D as fast as possible without being wasteful?

Currently I'm thinking:

AMD Ryzen 9 7900X, 12 cores (my understanding is that PLAXIS 3D can't really make use of more than 8 cores; is this correct?)

Corsair RAM, 2 × 16 GB, 6000 MHz CL30 (is there any benefit to faster RAM?)

For storage I was going to go with a 1 TB or 2 TB NVMe SSD. Is there any benefit to going with PCIe Gen 5 over a cheaper Gen 4?

  • From my own experience with larger 3D modelling in Plaxis and other software:

    - CPU: More than 8 cores is a good choice, but the maximum GHz you can get from each core also matters. As far as I know, Plaxis has no problem utilizing more than 8 cores during calculations, and it is good to have more if you want to run more than one phase in parallel (e.g. when branching out with different scenarios or ULS checks). I am not sure if there is any significant difference between AMD and Intel CPUs. However, some calculation software tends to be optimized for Intel Xeon processors; I wonder myself how it is with Plaxis.

    Note: Calculations are one thing, and Plaxis seems to handle multiple cores well for those; however, things like creating geometry for staged construction, meshing, and the calculations "in between" the main calculations (e.g. pore pressures based on phreatic levels) do not utilize stronger multicore processors that well. I had a case of a larger 3D model where preparing for calculations between phases took about 10 min (with the CPU barely utilized) and the main calculations took about 3 min per phase (that is where the CPU matters). Plaxis is far from well optimized for large models.

    - RAM: For larger models, 32 GB is a minimum in my opinion. The system and background processes will probably use up about 10 GB. A model with 200k-300k elements, if you switch to the Pardiso solver, will use up about 15-20 GB. Unfortunately, Plaxis does not allow running multiple instances to calculate a few things at a time, but with other software I used in the past, I needed as much as 128 GB to run the calculations smoothly with two models running at the same time.
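    The RAM figures above can be turned into a rough sizing rule. The sketch below is a back-of-envelope estimate only; it assumes roughly linear scaling of solver memory with element count around the quoted 15-20 GB for 200k-300k elements with Pardiso, plus the ~10 GB system overhead mentioned above. Real usage depends on the solver, element type, and model details.

```python
# Back-of-envelope RAM sizing for a Plaxis 3D model, based on the figures
# quoted in this thread: ~15-20 GB for 200k-300k elements with Pardiso,
# plus ~10 GB for the OS and background processes.
# Assumes roughly linear scaling with element count (a simplification).

GB_PER_100K_ELEMENTS = 7.0   # midpoint of the quoted range (~17.5 GB / 250k)
SYSTEM_OVERHEAD_GB = 10.0

def estimated_ram_gb(n_elements: int) -> float:
    """Rough total RAM estimate (GB) for a model with n_elements elements."""
    solver_gb = GB_PER_100K_ELEMENTS * n_elements / 100_000
    return SYSTEM_OVERHEAD_GB + solver_gb

for n in (100_000, 250_000, 500_000):
    print(f"{n:>7} elements -> ~{estimated_ram_gb(n):.0f} GB total")
```

    By this estimate, a 500k-element model would already push past 32 GB, which supports treating 32 GB as a floor rather than a target.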

    - Disk space: Larger models with results can take up 50-100 GB for a single model (unpacked). You will rarely run the calculations only once without needing to back up older files or alternative solutions. A 1 TB SSD is good for storing the calculations you are currently working on, but for backups and archiving you might want an additional drive.
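    To see how quickly a drive fills up at those sizes, here is a quick sketch. The 50-100 GB per-model figure comes from above; the assumption that you keep about three versions of each model (current plus backups/alternatives) and leave ~10% headroom is mine.

```python
# Rough storage budget based on the figures above: a single large model
# with results can occupy ~50-100 GB unpacked, and you typically keep
# several backup/alternative versions of it (assumed: 3 copies each).

MODEL_SIZE_GB = 75   # midpoint of the quoted 50-100 GB range

def drive_capacity_models(drive_tb: float, copies_per_model: int = 3) -> int:
    """How many distinct models (each kept in copies_per_model versions)
    fit on a drive of drive_tb terabytes, leaving ~10% headroom."""
    usable_gb = drive_tb * 1000 * 0.9
    return int(usable_gb // (MODEL_SIZE_GB * copies_per_model))

print(drive_capacity_models(1.0))   # 1 TB drive
print(drive_capacity_models(2.0))   # 2 TB drive
```

    Under those assumptions a 1 TB drive holds only about four large projects with their backups, which is why a separate archive drive is worth budgeting for.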

  • Many years ago we ran a benchmark and did not see that much of an increase in performance above 8 cores. The difference was bigger when going from a few cores to more, but from 16 cores to 32 cores I would not expect the same improvement in performance.

    We do not have any benchmarks and we most certainly cannot test all the CPUs that exist out there.

    If you can provide us with some detailed statistics about your computational time, it would be of great help to other users as well: the type of problem you solved (with number of nodes and elements, model used, total number of phases), computer configuration details, and cost involved. We can publish this data on our Community page (with credit to you, of course) if you consent.
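    For anyone willing to share such statistics, the hardware side of the report can be gathered automatically. This is a minimal sketch using only the Python standard library; the model statistics (node/element counts, number of phases, timings) would still have to be filled in from Plaxis itself, and the exact fields Bentley would want are my assumption.

```python
# Minimal sketch of collecting basic hardware details for a benchmark
# report, using only the Python standard library. Plaxis-specific model
# statistics are not collected here and must be added by hand.
import os
import platform

def hardware_report() -> dict:
    """Gather basic machine details to attach to a benchmark report."""
    return {
        "machine": platform.machine(),
        "processor": platform.processor(),
        "logical_cores": os.cpu_count(),
        "os": f"{platform.system()} {platform.release()}",
        "python": platform.python_version(),
    }

for key, value in hardware_report().items():
    print(f"{key}: {value}")
```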

  • No one is asking Bentley to test all CPUs; you just need to test the latest mainstream desktop CPUs. You could easily offload that to a third-party company that runs the benchmarks for you. It is a very disappointing situation to have a software developer charging tens of thousands of dollars for a software package while providing zero benchmarking data. Has Bentley ever looked into hiring a company that specializes in benchmarking software performance on various hardware configurations?

    Why should we, the users, provide Bentley with benchmarking data for free? It takes time and effort from our employees to produce that data, time which should be spent on billable work.

  • It is understandable that not all CPUs can be tested, but it seems that as the number of elements in the model increases, the software becomes "slower" in responsiveness and in the progress of calculations (those outside the actual iterative process), and in the end it does not matter how good the machine is. In my case, I am running calculations on a Dell Precision laptop with an 8-core Intel Xeon W-11955M.

    At this point I cannot provide many details, as the project is still ongoing (and confidential) and I am still working on some updates to the model. The latest version had over 200k elements (a lot of effort went into refinement to speed up calculation time), but with a few thousand volumes (sequential excavation with a slow advance rate and some layer intersections) and over a thousand rock bolts and plates (after intersecting).

    The intersecting and generating of geometry for staged construction alone takes about 3 h, and meshing up to 1 h if enhanced meshing is used. The number of phases in staged construction is now over 200. It takes about 15 min per phase (up to 3 days overall), of which about 10-12 min goes to those kinds of 'in between' calculations, and only about 3-5 min to actual iterations utilizing the full power of the laptop. The long duration of those 'in between' activities seems to be caused by the number of volumes and reinforcing elements. Surprisingly, a lot of time is spent on generating pore pressures (it does not matter whether they are based on phreatic levels or on previous stages, or even if everything in the model is set to 'dry'). Even cutting 20-30 s from each phase could speed up the total calculation time by a few hours.
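    To see why shaving even seconds per phase matters at this scale, the numbers above can be run through a quick calculation. All figures (200+ phases, ~15 min per phase, 10-12 min of overhead, a 20-30 s potential saving) are taken from this post; the arithmetic is just a back-of-envelope check.

```python
# Quick arithmetic on the timings quoted above: ~200 phases at ~15 min
# each, of which ~10-12 min is 'in between' overhead. Shows total
# wall-clock time and the saving from trimming 20-30 s off each phase.

N_PHASES = 200
MIN_PER_PHASE = 15.0
overhead_fraction = 11.0 / 15.0          # midpoint of 10-12 min per 15 min

total_hours = N_PHASES * MIN_PER_PHASE / 60
overhead_hours = total_hours * overhead_fraction
print(f"total: ~{total_hours:.0f} h, of which ~{overhead_hours:.0f} h is overhead")

for saving_s in (20, 30):
    saved_hours = N_PHASES * saving_s / 3600
    print(f"saving {saving_s} s/phase -> ~{saved_hours:.1f} h off the total")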

    Personally, I still like to use Plaxis and I do not see many alternative options (although some colleagues recommend RS and FLAC to me), but as the models we need in practice get larger and more complex, if the software cannot handle them, it does not matter what we personally prefer. We need to use the tools that allow us to get results.

    By the way, since I am already complaining about the unnecessary time needed to get results: please consider adding to the Tunnel Designer an option to 'insert' steps, instead of just the add/delete options. For complex sequencing, it can take hours to update the sequence if even a single change in the middle requires deleting half of the existing steps and creating them from scratch.
