Using Bentley Map V8i SS4 we had to move our XML file parsing and element creation code from C# to VBA due to very slow execution. Now we are migrating to OpenCities Map CONNECT Edition, and the same VBA code runs about 1.7x slower for the very same input, configuration, and machine. Why? Can we expect any performance tuning sooner or later? Thanks!
It seems to me there are two questions in your post:
I will separate the XML and element creation benchmarking soon to determine whether it is an overall VBA performance problem under OCMCE or only a Bentley API issue. I cannot provide the whole huge code, but here are some key fragments:
' Fragment 1: load the XML file (late-bound MSXML DOM)
Set xmldoc = CreateObject("Microsoft.XMLDOM")
XMLFileName = strArgs(0)

' Fragment 2: collect the points of a line_string node
Set subnodes = TitleNodes(i).SelectNodes("line_string")
Set detnodes = subnodes(0).SelectNodes("point")
ReDim linePoints(detnodes.Length - 1)
For k = 0 To detnodes.Length - 1
    linePoints(k).X = CDbl(SetCommaStr(detnodes(k).getAttribute("x")))
    linePoints(k).Y = CDbl(SetCommaStr(detnodes(k).getAttribute("y")))
Next k

' Fragment 3: create the line element and attach XData
Set newLine = CreateLineElement1(Nothing, linePoints)
Set newXData = newLine.GetXData1("xxx")
newXData.AppendXDatum msdXDatumTypeInt16, a
newXData.AppendXDatum msdXDatumTypeInt32, b
newLine.SetXData1 "xxx", newXData

' Fragment 4: apply symbology
Set newLine.Level = GetItemlevel(newlevel)
Set newLine.LineStyle = GetItemlinestyle(newlinestyle)
newLine.Color = newcolor
newLine.LineWeight = newwidth
newLine.GraphicGroup = groupid
The code creates about 50,000 elements (lines, shapes, ellipses, shared cells, texts) while parsing the XML file. For several element types the performance drop is barely noticeable.
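As a side note on the parsing half: the legacy `Microsoft.XMLDOM` ProgID resolves to a very old MSXML generation. A hedged sketch, assuming MSXML 6.0 is installed (it ships with all supported Windows versions), of switching to the newer ProgID before comparing anything else:

```vb
' Sketch only: use the MSXML 6.0 ProgID instead of the legacy
' Microsoft.XMLDOM, which resolves to MSXML 3.0-era behaviour.
' Property names below are standard MSXML DOM API; error handling
' is simplified for illustration.
Dim xmldoc As Object
Set xmldoc = CreateObject("MSXML2.DOMDocument.6.0")
xmldoc.async = False                            ' load synchronously
xmldoc.validateOnParse = False                  ' skip validation for speed
xmldoc.setProperty "SelectionLanguage", "XPath" ' explicit XPath for SelectNodes
If Not xmldoc.Load(XMLFileName) Then
    MsgBox "XML load failed: " & xmldoc.parseError.reason
End If
```

Whether this changes the V8i-versus-CE ratio is an open question, but it removes one uncontrolled variable (the MSXML version actually bound at run time) from the comparison.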
Could you tell me who the VBA specialist at Bentley is? Thanks!
FlexiTon ADT said: I will separate the XML and element creation benchmarking soon
It would be great, because I see (at least) three different sets of operations and it's not clear whether (A) everything is slower or (B) one operation causes e.g. 90% of all the slowness. In fact, for this reason (performance benchmarks and available analysis tools), VBA is a very bad tool for implementing anything bigger, especially when performance is a priority.
Also, the benchmarking should be done using both OCM and plain MicroStation, because nothing in your code is OCM-specific API, but OCM can be slower for the same functions because of extra functionality running in the background.
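To split the measurement along the lines suggested above, a minimal sketch using the VBA `Timer` function; `ParseXmlOnly` and `CreateElementsOnly` are hypothetical placeholders for the two halves of the existing code:

```vb
' Rough wall-clock benchmarking with Timer (seconds since midnight,
' roughly 10 ms resolution; good enough for runs lasting many seconds).
Dim t0 As Double, tParse As Double, tCreate As Double

t0 = Timer
ParseXmlOnly          ' hypothetical: DOM load + SelectNodes, no Bentley calls
tParse = Timer - t0

t0 = Timer
CreateElementsOnly    ' hypothetical: element creation + XData, no XML work
tCreate = Timer - t0

Debug.Print "Parse: " & Format(tParse, "0.00") & " s, " & _
            "Create: " & Format(tCreate, "0.00") & " s"
```

Running the same two-phase measurement in BM V8i, OCM CE and plain MicroStation CE would show immediately which phase, and which product, contributes the 1.7x.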
FlexiTon ADT said: The code creates about 50,000 elements (lines, shapes, ellipses, shared cells, texts) while parsing the XML file.
It seems like a task that should never be done using VBA.
I do not know the context, but my feeling is that rewriting it against the new .NET API (I think some support for XData exists in MStnPlatformNET) could be faster than analyzing the issue and finding a solution / workaround.
FlexiTon ADT said: Could you tell me who the VBA specialist at Bentley is?
I have no idea, but if this issue is so important to you, use the standard way and create a Service Ticket. You can share the ticket number here to link the official (service ticket) and community (this forum) worlds. Artur Goldsweer usually answers issues related to MicroStation VBA. Because your code seems to use the MicroStation API and not the OCM API, maybe he will be able to help you.
Thank you for the fast reply and valuable comments. In the meantime I measured XML parsing and processing without any Bentley API calls, and even this is significantly (1.3x) slower in OCMCE than in BMv8i. So it seems to be a general OCMCE VBA (or pure VBA) problem. In BMv8i C# we added all elements at once to the model, of course, because it is much faster. However, when I checked OCMCE C# performance I got an exception for that method... Anyway, it is not so important for a while, because even the slower OCMCE VBA is still faster than any C# in this case. XData: as far as I remember, in C# it was the fastest way to add custom data. We still use C# as the main code; just some parts were reimplemented in VBA due to slow performance compared to the old mjava implementation.
I know very well that the real solution for performance problems is native C++ code or the new .NET API, but we need more time to migrate.
Thank you again!
FlexiTon ADT said: So it seems to be a general OCMCE VBA (or pure VBA) problem.
Speed is always an issue in VBA because of COM overhead. How big it is depends on many things, but I would expect no substantial difference between 32-bit and 64-bit COM or VBA. But I am not an expert in this area, and fortunately I have been able to stay away from these technologies in most of my projects even in the V8 era (using native C/C++, C# for managed code, and C++/CLI to join these two worlds).
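One thing that sometimes reduces COM overhead in VBA is early binding: declaring objects with their concrete types instead of `As Object` avoids the per-call `IDispatch` name lookup that late binding incurs. A hedged sketch, assuming a project reference to "Microsoft XML, v6.0" has been added via Tools > References:

```vb
' Early-bound declaration: method calls resolve at compile time
' instead of going through IDispatch::GetIDsOfNames / Invoke on
' every property access, which matters in tight loops over 50,000
' elements. Requires the MSXML 6.0 type library reference.
Dim xmldoc As MSXML2.DOMDocument60
Set xmldoc = New MSXML2.DOMDocument60
xmldoc.async = False
```

Whether this explains the V8i-versus-CE difference is doubtful, but it is a cheap optimization to try before rewriting anything.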
FlexiTon ADT said: In BMv8i C# we added all elements at once to the model, of course, because it is much faster.
I have not done any benchmarking, but it makes sense, because adding many elements at once is still one transaction. It would be interesting to also measure the difference between plain MicroStation and Bentley Map, because I assume BM V8 / OCM CE adds extra overhead.
FlexiTon ADT said: However, when I checked OCMCE C# performance I got an exception for that method
For what method? When you receive an exception from a method that worked in V8i, you should report it as a bug, plus optionally discuss it in this forum (or in the MicroStation Programming forum when the problem exists in plain MicroStation too).
It's not clear from your description how much of the OCM API you use, or whether you just run under OCM but use the standard MicroStation API without OCM features, so it's hard to make any recommendation. But you should be aware that anything like "V8 C#", which is based on Interop, can still be used and is fully supported, while there is a much better, faster and richer .NET API available.
FlexiTon ADT said: Anyway, it is not so important for a while because the slower OCMCE VBA is even faster than any C# in this case.
I do not believe it, but it's not clear what parts of the APIs and what code you used for such an evaluation.
There is a huge number of XML parsers available for .NET, providing different sets of features. Even the standard ones delivered with the .NET Framework (e.g. XmlReader) are not bad.
Also, I do not believe that the new .NET API, which is typically implemented as a "fast wrapper around the native API", can be slower than VBA.
FlexiTon ADT said: XData: as far as I remember, in C# it was the fastest way to add custom data.
I do not know your application context and its history, but XData was originally designed to be used solely for AutoCAD compatibility. I found this note from Don H. Fu: "MicroStation users may only need it for DWG round-trip purposes. On the rare occasions when someone needs it, he/she presumably already knows XDATA, perhaps from using it in apps on AutoCAD, and therefore does not need much assistance to make it work on MicroStation."
Bentley published an XData API, so technically it's fine to use it even in DGN, but strategically it is in my opinion quite a bad idea to use it anywhere other than for AutoCAD compatibility. XData are special animals, maintained because it's required, but they do not follow Bentley concepts and priorities (XAttributes and EC data in MicroStation, or XFM in BM).
Using something just because "it's the fastest way" can easily become a painful nightmare in the future.
MicroStation has long offered a wide selection of ways to add custom data to elements: XAttributes are probably even older than XData, and the EC Framework API has been available since, I guess, V8i SS3 (so for many years already).
When C# alone is not enough (which was the reality in V8, but is not anymore in CE), standard P/Invoke can be used to call the native API (the way recommended by both Microsoft and Bentley to cover anything missing in the managed API). Or C++/CLI can be used to implement your own wrapper (more complex, but very efficient).
FlexiTon ADT said: I know very well that the real solution for performance problems is native C++ code or the new .NET API, but we need more time to migrate.
I both agree and disagree ;-)
Learning a new API requires a lot of time, but after 3 years of CE's existence there is a good knowledge base available (in the form of SDK examples, not too many blogs, and plenty of discussions) covering typical operations and the most common issues, so the learning effort has decreased a bit.
Because the new (.NET) and old (Interop) APIs can coexist in the same code base (although directly sharing data between them is not usually possible), I know of companies that replace any code that does not work (often because of a bug in the product or API) with newly written code. The results are not pretty, and maintaining both can be more demanding until everything old is removed, but they often found it was faster / cheaper than trying to find workarounds and analyzing / reporting / waiting for fixes in the old APIs.
But every case and every migration is different, with its own priorities and issues.