Performance Enhancement - File Opening Time for MicroStation CONNECT Edition Update 13 Vs V8i SS4

I know that this has been briefly discussed in another post, but @KarishmaAnthony, please can you carry out the same comparison tests on a Win7, i7 PC (3-4 years old) with 24 GB RAM and a standard HDD, using a 1 MB file? The original tests are all in your blog at https://communities.bentley.com/products/microstation/b/microstation_blog/posts/performance-enhancement---file-opening-time-for-microstation-connect-edition-update-13-vs-v8i-ss4

I would suggest that this is a more realistic test. I look forward to your revised tests.

  • Hi Stuart,

    you were faster, I was thinking about a similar post :-)

    I would suggest that this is a more realistic test.

    I agree completely. What was posted is not a bad "first try", but it does not reflect the reality of MicroStation users and contains some shortcomings. It would be nice to move the blogs from marketing-like optimistic formulations to an exact engineering style.

    Look forward to your revised tests.

    There are some suggestions on how to make future performance blogs more realistic and better. They are valid both for the Performance Enhancement - File Opening Time for MicroStation CONNECT Edition Update 13 Vs V8i SS4 blog and for the Performance Enhancement - DWG File Opening Time blog:

    • Do not create "marketing crap like" texts. We are engineers, working on construction and geospatial projects, where exact measurement and realistic view are required. Formulations like "Better performance is one of the key areas of focus for MicroStation CONNECT Edition." will just make people angry and will create negative feeling from any following information regardless whether they are correct or not. We all knows it's not true, Bentley ignored code quality and reported performance problems for years and the first version that tried to correct some mistakes was Update 11 (when after 2 years after release and let's say 6 years of development nobody used MicroStation because of number of bugs and overall slowness). It seems there is still a huge technical debt because of maybe underfinancing development, limited allocated resources with appropriate skills and prefering new features over correct implementation of existing ones. Until MicroStation CONNECT Edition does not provide the same performance (speed, GUI reaction etc.) as V8i does, it's not about "making performance better" but "we are fixing our own imperfections and bugs". Marketing people treat such situation like a disgrace, but engineering appreciate realistic identification of problems (because only then they can be solved).
    • Specify the hardware better: Some data critical for performance evaluation are missing from your specs, especially the disk specification. What type (HDD, SSD)? What interface (SATA, M.2, ...)? What brand and model (there are huge differences)? Also, to say "Windows 10 Enterprise" is not enough; many Windows 10 builds have been released.
    • Use realistic hardware, not "perfect one": Only a fraction of users has Xeon on their desks. As Stuart wrote, use something with i5 or i7, 4 cores at max. Do the measurement on notebook makes also sense. 8 - 16 GB is average memory size I guess, I see anything over 24 GB only at users doing visualization or with very new HW.
    • Use realistic test data: As people in this thread mentioned, focusing on files of "hundreds of MB and over a GB" is nice for highlighting the changes in U13, but it does not reflect the benefit for 99% of users. We typically use files < 1 MB; as one user wrote, his median is even 50 kB, with the largest file under 1 MB. Another problem with your measurement is that there is no information at all about references. The benchmarking provided elsewhere in the discussion was more valuable because references were used.
    • Specify the benchmark better: Designing a benchmark properly, so that it is repeatable and provides valid results, is a complex task (benchmarking in software development and testing is a seriously complicated area). From this perspective, what does "We repeated the test 5-6 times for individual datasets" mean? For any professional benchmark, I expect a script with defined start and end conditions that is run, e.g., 10 times, with the best and the worst results discarded. When the results differ too much (e.g., the standard error of the mean is too high), the benchmark has to be repeated. Because the absolute numbers are not interesting here, only the differences between versions (V8i vs. the CE version), I guess it makes no big difference whether the arithmetic mean or the median is used. BTW, the start condition should be "MicroStation is in an idle state", the benchmark being started by opening the file, and the end condition "MicroStation returns to the idle state".
    • Always compare against V8i: In your blog about DWG, a comparison with V8i is missing. Why? Honestly, nobody is interested in increased performance compared with a previous CE version. We need the same speed as V8i offers.
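
    As a rough illustration of the "realistic test data" point, a short script like the one below (just a sketch; the folder path and the `*.dgn` pattern are placeholders) can report how many design files a real project contains and what their median and maximum sizes are, so the test files can be chosen to match:

    ```python
    # Survey design-file sizes in a project tree, to check whether
    # "hundreds of MB" benchmark files reflect the real data.
    import statistics
    from pathlib import Path

    def size_stats(root: str, pattern: str = "*.dgn"):
        """Return (count, median_bytes, max_bytes) for matching files."""
        sizes = [p.stat().st_size for p in Path(root).rglob(pattern)]
        if not sizes:
            return 0, 0, 0
        return len(sizes), statistics.median(sizes), max(sizes)
    ```

    Run against a typical project folder, this quickly shows whether the median is closer to 50 kB than to 500 MB.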
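
    The benchmark protocol from the "Specify the benchmark better" point could look like this in outline. This is only a sketch: the `measure` callable is hypothetical and would in practice drive the application from idle state, through opening the file, back to idle state, returning the elapsed time:

    ```python
    # Run N timed trials, discard the best and the worst, and report the
    # mean plus the standard error of the mean (SEM). If the relative SEM
    # is too high, the whole benchmark should be repeated.
    import statistics

    def benchmark(measure, runs: int = 10, max_rel_sem: float = 0.05):
        """`measure` is a zero-argument callable returning seconds.

        Returns (mean, sem, valid); `valid` is False when the spread is
        too large and the benchmark should be rerun.
        """
        times = sorted(measure() for _ in range(runs))
        trimmed = times[1:-1]                    # drop best and worst
        mean = statistics.mean(trimmed)
        sem = statistics.stdev(trimmed) / len(trimmed) ** 0.5
        return mean, sem, (sem / mean) <= max_rel_sem
    ```

    Because only the V8i-vs-CE difference matters, the same harness run on both versions with the same file gives a directly comparable pair of means, each with an error estimate attached.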

    With regards,

      Jan
