MicroStation CONNECT Update 4 recently introduced an updated graphics engine, QuickVision 5 (QVision5), which brings improved hardware acceleration on modern video cards and a generally more optimised display pipeline.
Here are some questions that some of you have been asking, with their respective answers. We will keep this wiki growing with more details as we receive more questions from you all.
The GPU tessellation that QVision5 performs is only supported on cards that are DirectX 11.0 compatible or higher. Since the user's card is DirectX 10, it unfortunately does not support our tessellation shaders. A quick way to check which feature level a card supports is shown in the sketch below.
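For readers who want to verify this on their own hardware, here is a minimal sketch (not MicroStation code) that asks the Direct3D 11 runtime whether the installed card reaches feature level 11_0, the minimum required for hull/domain (tessellation) shaders:

```cpp
// Minimal sketch: detect whether the GPU supports D3D feature level 11_0,
// the minimum for hardware tessellation shaders.
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main()
{
    const D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_11_0 };
    D3D_FEATURE_LEVEL achieved{};

    // Passing nullptr for the device/context outputs asks the runtime to
    // verify support without fully creating a device.
    HRESULT hr = D3D11CreateDevice(
        nullptr,                    // default adapter
        D3D_DRIVER_TYPE_HARDWARE,   // hardware acceleration only
        nullptr, 0,
        requested, 1,
        D3D11_SDK_VERSION,
        nullptr, &achieved, nullptr);

    if (SUCCEEDED(hr) && achieved >= D3D_FEATURE_LEVEL_11_0)
        std::printf("Feature level 11_0 supported: tessellation shaders available.\n");
    else
        std::printf("DX10-class GPU: tessellation must fall back to the CPU.\n");
    return 0;
}
```

On a DX10-class card the requested feature level cannot be satisfied, so the call fails and the CPU fallback path described above applies.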
QVision5 does, however, still have improvements in the area of spline surfaces that are stroked on the CPU. Our overall stroking algorithm for these surfaces has been improved and multi-threaded, so the software (CPU) implementation is still faster than it was with QVision3.
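MicroStation's actual stroking code is not public, but the general idea behind a multi-threaded CPU stroker can be illustrated with a small sketch. Everything here (the Point3d struct, the evaluate() stand-in for a real B-spline evaluator, the grid size) is hypothetical; the point is simply that rows of the tessellation grid are independent and can be evaluated in parallel:

```cpp
// Illustrative only: each row of the stroking grid is independent, so rows
// can be evaluated on separate threads via C++17 parallel algorithms.
#include <vector>
#include <cmath>
#include <algorithm>
#include <execution>   // std::execution::par
#include <numeric>

struct Point3d { double x, y, z; };

// Hypothetical surface evaluator: maps (u, v) in [0,1]^2 to a 3D point.
// A real implementation would evaluate the B-spline basis functions here.
static Point3d evaluate(double u, double v)
{
    return { u, v, std::sin(6.283185 * u) * std::cos(6.283185 * v) };
}

// Stroke the surface into an n x n grid, one row per parallel task.
std::vector<Point3d> strokeSurface(int n)
{
    std::vector<Point3d> grid(static_cast<size_t>(n) * n);
    std::vector<int> rows(n);
    std::iota(rows.begin(), rows.end(), 0);

    std::for_each(std::execution::par, rows.begin(), rows.end(),
        [&](int i)
        {
            double v = double(i) / (n - 1);
            for (int j = 0; j < n; ++j)
            {
                double u = double(j) / (n - 1);
                grid[static_cast<size_t>(i) * n + j] = evaluate(u, v);
            }
        });
    return grid;
}
```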
Just about any card currently produced that is DX11 compliant should be an improvement in performance over an old DX10 graphics card. Even a $50 consumer graphics card a generation or two old has specs superior to those of a professional DX10 card that is 6-9 years old. Matching this with a CPU of the same generation means that any system one or two generations old would deliver an improvement in performance.
We are indeed looking into putting more primitive stroking on the GPU wherever it makes sense, and we are also making improvements to the edged display modes. We are looking at other ways to further improve hardware acceleration performance as well, but it is too early to say anything definite.
Currently MicroStation does not benefit from DX12, and there is currently no speed benefit from running on Windows 10 versus Windows 7. When we do release a DX12 implementation, though, it will almost certainly run faster than our DX11 implementation (we can't give a date yet), and it will only run on Windows 10 since Windows 7 does not support DX12. There are also other intangible benefits to running on Windows 10, as we think that is where graphics card manufacturers are putting the majority of their driver efforts. As for Vega, we are assuming that the user is talking about AMD's upcoming Vega GPU. It's too early to tell whether anything it touts will be of specific benefit to us or not. If you are thinking about an upgrade and the intention is to keep the new system for several years, we would consider a DX12-capable configuration (a simple way to test DX12 capability is sketched below).
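As a hedged illustration (not MicroStation code, and it assumes Windows 10 with d3d12.dll present), an application can ask the D3D12 runtime whether a device could be created on the current system without actually creating one:

```cpp
// Minimal sketch: test DX12 capability. Passing nullptr for the device
// output asks D3D12CreateDevice to verify support only.
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main()
{
    // Feature level 11_0 is the documented minimum for the D3D12 runtime.
    HRESULT hr = D3D12CreateDevice(
        nullptr,                     // default adapter
        D3D_FEATURE_LEVEL_11_0,
        __uuidof(ID3D12Device),
        nullptr);                    // nullptr output: capability test only

    std::printf(SUCCEEDED(hr)
        ? "This system can create a DX12 device.\n"
        : "No DX12 support on this system.\n");
    return 0;
}
```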
As far as the load goes, it all depends on the balance of the GPU to the CPU and what types of geometry are being drawn. Most of our older implementations were designed to run on just about any type of hardware, so if you have a decent GPU paired with a so-so CPU and you're not doing any tessellation on the GPU, then yes, GPU usage is going to be pretty minimal, and what there is of it will not be exercised much. Even without GPU tessellation, though, if you're drawing large meshes of data, you'll get more use out of the GPU. If you are drawing lots of little things, there's more overhead per object, so a beefier GPU isn't going to do a lot for you. On newer hardware, where we can do tessellation on the GPU, the beefier the better. A model that is doing a lot of GPU tessellation can give you the opposite effect and cause the GPU, not the CPU, to become the bottleneck. Ideally we want the CPU and GPU to be equally busy, but with so many possible hardware configurations it is not possible to tune things for every situation.
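One concrete way to tell whether the GPU side is the bottleneck on a given machine is the standard D3D11 timestamp-query pattern. The sketch below is illustrative rather than MicroStation code; the "workload" is just repeated render-target clears standing in for real draw calls:

```cpp
// Minimal sketch: time GPU work with D3D11 timestamp queries. If the
// measured GPU time dominates the frame, the GPU is the bottleneck.
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main()
{
    ID3D11Device* dev = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION, &dev, nullptr, &ctx)))
        return 1;

    // The disjoint query supplies the timestamp frequency; two timestamp
    // queries bracket the GPU work being measured.
    D3D11_QUERY_DESC qd = { D3D11_QUERY_TIMESTAMP_DISJOINT, 0 };
    ID3D11Query *disjoint, *tsBegin, *tsEnd;
    dev->CreateQuery(&qd, &disjoint);
    qd.Query = D3D11_QUERY_TIMESTAMP;
    dev->CreateQuery(&qd, &tsBegin);
    dev->CreateQuery(&qd, &tsEnd);

    // A throwaway render target to give the GPU something to do.
    D3D11_TEXTURE2D_DESC td = {};
    td.Width = td.Height = 1024;
    td.MipLevels = td.ArraySize = 1;
    td.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    td.SampleDesc.Count = 1;
    td.BindFlags = D3D11_BIND_RENDER_TARGET;
    ID3D11Texture2D* tex;            dev->CreateTexture2D(&td, nullptr, &tex);
    ID3D11RenderTargetView* rtv;     dev->CreateRenderTargetView(tex, nullptr, &rtv);

    ctx->Begin(disjoint);
    ctx->End(tsBegin);
    const float black[4] = {};
    for (int i = 0; i < 1000; ++i)   // stand-in for real draw calls
        ctx->ClearRenderTargetView(rtv, black);
    ctx->End(tsEnd);
    ctx->End(disjoint);

    // Spin until results are ready, then convert GPU ticks to milliseconds.
    D3D11_QUERY_DATA_TIMESTAMP_DISJOINT dj;
    while (ctx->GetData(disjoint, &dj, sizeof dj, 0) == S_FALSE) {}
    UINT64 t0, t1;
    while (ctx->GetData(tsBegin, &t0, sizeof t0, 0) == S_FALSE) {}
    while (ctx->GetData(tsEnd,   &t1, sizeof t1, 0) == S_FALSE) {}
    if (!dj.Disjoint)
        std::printf("GPU time: %.3f ms\n", double(t1 - t0) * 1000.0 / dj.Frequency);
    return 0;   // COM cleanup omitted for brevity
}
```

Comparing that GPU time against the CPU time spent per frame (measured with an ordinary CPU timer around the submit loop) shows which side is idle and which is saturated, which is exactly the balance discussed above.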