
Fix for WaterGEMS SCADA Historian query after upgrade

This is a post to hopefully help someone else if they face a similar problem.

When I upgraded to WaterGEMS Connect 10.02.01.06, I had a problem where my SCADA signals were no longer being recognized. Symptoms varied depending on how I tried to troubleshoot (sometimes no data would show, sometimes the data that appeared was clearly wrong).

After investigating the 'View SCADA Logs' results in the SCADA Signals menu, I discovered that my old query was not being interpreted the same way. Changing the query fixed the issue.

The old query (with linebreaks added for ease of viewing):

set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData
where (@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime)
and Quality = "Good NonSpecific"

The new query:

set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where
(Tagname=@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime)
and Quality = "Good NonSpecific"

The problem seems to be that the @requestedsignals variable used to expand to include the field name (Tagname=tagname), but now expands to just the tag name. Adding 'Tagname=' fixed the issue.
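
To illustrate (using MY_TAG as a purely hypothetical tag name), the WHERE clause in the old version would, as best I can tell, have expanded to something like:

where (Tagname=MY_TAG) and ...

whereas after the upgrade it apparently expands to just:

where (MY_TAG) and ...

which is presumably why the bare (@requestedsignals) form stopped matching my signals.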

Note that if @requestedsignals were expected to return multiple values (it looks like it can, though I'm not sure under what circumstances it would), I would experiment with using "IN" rather than "=" in the query, and probably move the comparison outside the parentheses; see the sketch below.
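
For anyone who needs to try that, here is an untested sketch of what such a query might look like, assuming @requestedsignals expands to a comma-separated list of tag names (I have not verified that assumption, or the IN syntax, against Historian):

set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData
where Tagname in (@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime)
and Quality = "Good NonSpecific"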

More details on how I've set up my SCADA signals to work with Historian can be found in a previous thread. Adding the 'set rowcount=5000' to limit the results fixed the issue I was experiencing in that thread.


communities.bentley.com/.../494394



  • Hi Stephen,

    Our lead developer Wayne Hartell took a look at this and was not able to reproduce the problem. In one of your previous forum threads, Wayne had suggested using the "where Tagname=(@requestedsignals)" approach. See post from Wayne on December 11th, 2018 here.

    We tried the following three methods with versions 10.02.01.06 and 10.02.00.43, and they all worked without a problem (data was retrieved and displayed in the SCADA Signals editor):

    Case #1: (Leaving @requestedsignals out of the query).

    set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where (TimeStamp>=@startdatetime("dd-MMM-yyyy HH:mm:ss")) and (TimeStamp<=@enddatetime("dd-MMM-yyyy HH:mm:ss")) and Quality = "Good NonSpecific"

    Case #2: (Tagname=(@requestedsignals)).

    set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where Tagname=(@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime) and Quality = "Good NonSpecific"

    Case #3: ((@requestedsignals)).

    set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where (@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime) and Quality = "Good NonSpecific"


    Can you confirm which version you're currently using, and that we did not miss anything in the tests done above?


    Regards,

    Jesse Dringoli
    Technical Support Manager, OpenFlows
    Bentley Communities Site Administrator
    Bentley Systems, Inc.

  • Jesse,

    I'm now using WaterGEMS Update 2: 10.02.01.06 32-bit.

    For testing purposes, I'm using an empty model and simply setting up the SCADA signals. See screenshots of the results below.

    Case #2 works for me with no issues; it is very fast and returns about 34 rows in my test. If I do nothing but switch the SQL query to either Case #1 or Case #3, then when I press Refresh there is a long delay (about a minute if it is set to 1 day of data, longer if the date range is wider), after which it returns very obviously wrong data that hits the 5000-row limit. The results appear to be the same for Case #1 and Case #3.

    The Date/Timestamp column has about 30 or so rows on the hour (12:00:00), then one row for each minute up to the next hour, then another 30 or so rows on that hour (1:00:00), and so on. The Signal column has random-looking data with values up to 20,000,000.

    The SCADA logs have this message:

    2019-05-28 16:00:00.05: [INF]: Setting HistoricalSelectStatement for 'SCADA Datasource - 1' to 'set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where (@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime) and Quality = "Good NonSpecific"'.

    2019-05-28 16:00:39.01: [INF]: GetColumnDataType obtained ColumnDataType::Unusable for signal: 'Value'.

    EDIT: The logs may be a red herring, because I get the same 'Unusable for signal' warning when using Case #2.

    And to be clear, this is not a production issue for me, because Case #2 is working fine, so I've simply switched my queries over to that format. 

    Case #2: [screenshot of results]

    Case #1 & Case #3: [screenshot of results]

  • Hi Stephen,

    Thanks for providing this information. I took an additional look and am still unable to reproduce the behavior you are seeing, but the good thing is that you only need one of the query formats to work, so I take your point that this is not a production issue.

    Still, it's good for us to understand (if possible) why there are differences, so we can correct any problems on our end if they are found to exist. At the moment my best guess is that there is some difference between the GE Historian versions we are each using; our test setup is likely a bit old (version 5.5.x, circa 2013), though I'm not sure if that is the reason. I'll see if it's possible for us to get hold of a newer version so we can increase the breadth of our test coverage.

    Regarding the log output line:

    2019-05-28 16:00:39.01: [INF]: GetColumnDataType obtained ColumnDataType::Unusable for signal: 'Value'.

    I can confirm that this is a red herring. What is going on here is that we query the database and ask it to report the data type of each field. For some reason GE tells us that the type of the "Value" field is "object", even though it seems to be numeric. We log that to the log file, but we don't actually prevent a user from selecting that field as the "Value Field", just in case it really does hold the right kind of data, as it does here. Internally we then try to coerce the data into the right type when we read it, and everything works out fine if the field really does hold the right kind of data.

    We learned the hard way over the years that expecting various databases to report their data types correctly could result in usable fields being prevented (by our user interface) from being selected; that's when we decided to just log potential issues and allow users to select any fields they want. Hopefully that makes some kind of sense!

    Kind Regards,

    Wayne.



  • Thanks for looking into this! Our Historian version is older than yours (4.0.0.176) because we have some legacy dependencies, so that could actually be the culprit. We've got an upgrade in the works, but that's probably a ways out still.