Advanced gINT Tips


The following are helpful tips for advanced gINT users.

Project Data Import and Export: Overwrite Options

OVERWRITE OPTIONS
The Overwrite Option field of the Merge and Import for Text File command controls how the program will deal with data that are in both the current database and the external file. The options are:

  1. Never
  2. Empty fields
  3. Named fields
  4. Records
  5. Data sets

These are listed in order of increasing amounts of existing data that can be overwritten.

  1. Never (the default) will only add records whose Keys do not exist in the current database. Existing records will not be changed in any way. All options share this behavior for new records.

    For existing records, i.e. those whose keys match key values supplied in data rows in the source database or text file, the following possibilities are supported:
  2. Empty fields will write data from the source only if the corresponding field in the target is empty.
  3. Named fields will write data from a field named in the source wherever a matching field exists in the target, overwriting any contents of the target field.
  4. Records will also overwrite matching fields in existing records, and will additionally erase the contents of all fields in the target that are not named in the source.
  5. Data sets will replace entire sets of data; the resulting data set will contain only data from the source file. A "Data Set" is defined by the key set of the parent table. For example, in a table with the PointID, Depth key set, a Data Set is all records with a particular PointID value, while for a PointID, Depth, Reading table a Data Set is all records that share a PointID and Depth.

Notes:

  1. If the target table allows duplicate keys, all source records will be added. The only option that will replace existing data is "Data sets."
  2. The distinction between the "Records" and "Data sets" options is illustrated by this example: say the SAMPLE table of the current database holds data for two boreholes, B-1 and B-2, each with numerous samples. The external file also has two boreholes of data in its SAMPLE table, B-2 and B-3. With either option, all the data for both B-2 and B-3 will be merged from the external file. With the "Records" option, any sample records that were in the current database but not in the external file will still remain after the merge. With the "Data sets" option, B-2 will contain only the records that were in the external file. That is, the "Data sets" option clears the entire data set before writing its data, while the "Records" option only overwrites the records whose Keys correspond to its data.
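The first four options act record by record and can be sketched in a few lines of Python. This is a hypothetical illustration of the rules above, not gINT's actual code; "Data sets" is excluded because it operates on whole key groups rather than single records:

```python
# Hypothetical sketch (not gINT code) of how the overwrite options treat a
# source record whose key already exists in the target database.
def merge_record(target, source, option):
    """Merge one source record (dict of field -> value) into its matching
    target record under the given overwrite option. Keys are assumed equal."""
    if option == "Never":
        return target  # existing records are left untouched
    if option == "Empty fields":
        # source data fills only fields that are empty in the target
        return {f: target.get(f) or source.get(f) for f in set(target) | set(source)}
    if option == "Named fields":
        # every field named in the source overwrites the target field
        merged = dict(target)
        merged.update(source)
        return merged
    if option == "Records":
        # fields not named in the source are erased as well
        return {f: source.get(f) for f in set(target) | set(source)}
    raise ValueError(option)  # "Data sets" works on whole key groups, not single records

existing = {"Description": "CLAY", "Graphic": ""}
incoming = {"Graphic": "CL"}
print(merge_record(existing, incoming, "Empty fields"))
# Graphic is filled in; the existing Description is preserved
```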

See Also:
Search for Import or Export in Help

Project Data Import and Export: Correspondence Files (2)

Correspondence Files: If you are splitting data from a source table into two or more target tables, you can specify which source records go to a particular target table: in the PointID row of the target table, give the source as a HasData expression specifying the acceptable condition.

For example, the source has a single table "LITHOLOGY" with all Lithology-related data. Your target database has a "SOIL_DESCRIPTION" table which should receive only those records where there is a Description or a Graphic. The correspondence file would look like this:

SOIL_DESCRIPTION,LITHOLOGY

PointID,"<<HasData(<<LITHOLOGY.Description>><<LITHOLOGY.Graphic>>,<<LITHOLOGY.PointID>>)>>"
Depth,
Bottom,
Description,
Graphic,

When gINT finds PointID (the main Key) missing during the import, it will check for the presence of an IIf( ), IsNumeric( ), or HasData( ) in the source expression and thereby know that the record should be omitted and that the absence of a PointID is not a reportable error.
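The effect of this filter can be mimicked in a short Python sketch. This is a hypothetical illustration only; the has_data helper merely stands in for the two-argument HasData behavior described above:

```python
# Hypothetical sketch of the HasData( ) filter: a record is imported into
# SOIL_DESCRIPTION only when its PointID expression yields a value.
def has_data(test_value, value_if_any):
    """Stand-in for gINT's two-argument HasData: return the second argument
    when the first is non-empty, otherwise an empty string."""
    return value_if_any if test_value.strip() else ""

source_rows = [
    {"PointID": "B-1", "Depth": "2.0", "Description": "Silty CLAY", "Graphic": ""},
    {"PointID": "B-1", "Depth": "4.0", "Description": "", "Graphic": ""},  # no data: skipped
    {"PointID": "B-2", "Depth": "1.5", "Description": "", "Graphic": "CL"},
]

# Records whose Description and Graphic are both empty produce no PointID
# and are therefore omitted from the import.
imported = [
    row for row in source_rows
    if has_data(row["Description"] + row["Graphic"], row["PointID"])
]
print([(r["PointID"], r["Depth"]) for r in imported])
```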

Project Data Import and Export: Correspondence Files (1)

Correspondence Files:
Sometimes when importing data you may want to write a default value into a field. This might be the case for a Units or Type field, for example.

The format in the correspondence file is

...
TargetField1,SourceField1
TargetField2,"""any text to insert in this field in every record"""
TargetField3,SourceField3
...

To explain the triple quotes: gINT treats the data on each side of the comma as a CSV field; if a field is surrounded by quote marks, gINT extracts the contents and changes doubled quotes ("") to single characters ("). This allows the use of field names that contain comma characters.

The result of this is to feed gINT a quoted text string when a field name is expected. Quoted text in this context is presumed to be data, and is written into the target field in every target record (where allowed by the overwrite option).
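This doubled-quote rule is standard CSV escaping, so it is easy to verify with any CSV parser. For example, Python's csv module extracts the same result (an illustration only, not part of gINT):

```python
import csv
import io

# A correspondence-file line using the triple-quote convention: the outer
# quotes delimit the CSV field, and each inner "" collapses to a single ".
line = 'Units,"""Inches"""\n'
target, source = next(csv.reader(io.StringIO(line)))
print(target)  # Units
print(source)  # "Inches"  -- still quoted, so gINT treats it as literal data
```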

gINT also accepts single quotes (apostrophe, the unshifted quote key) in this context, which may be easier to read. Thus the correspondence file could alternatively read:

...
TargetField1,SourceField1
TargetField2,"'text to insert'"
TargetField3,SourceField3
...

It is also possible to supply non-default values in the source data and simultaneously use the correspondence file to provide default values where the source field is empty. This applies where the target field is empty in the target database, as when you are importing new data records.

Suppose the Target table has a field called "Units" and the Source table has a corresponding field called "SourceUnits". In the Source table a few records have "Feet" in the SourceUnits field; "Inches" should be written to all other records.

The "Overwrite option" MUST be "Empty fields".

In the Correspondence file do this:

...
Target,Source
Units,SourceUnits
Units,"""Inches"""
MoreTargets,MoreSources
...

As each record is written, the contents of SourceUnits will be written to Units. Next the string "Inches" will be written to the same field, but only if the field is empty.
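The two-pass behavior can be sketched in Python. This is a hypothetical illustration of the rule, not gINT code; apply_if_empty stands in for the "Empty fields" overwrite option:

```python
# Hypothetical sketch of the two-line correspondence trick: the same target
# field is written twice, and under the "Empty fields" overwrite option the
# literal default only lands where SourceUnits was blank.
def apply_if_empty(record, field, value):
    """Write value into record[field] only when the field is currently empty."""
    if not record.get(field):
        record[field] = value

rows = [{"SourceUnits": "Feet"}, {"SourceUnits": ""}]
imported = []
for src in rows:
    rec = {}
    rec["Units"] = src["SourceUnits"]       # first line:  Units,SourceUnits
    apply_if_empty(rec, "Units", "Inches")  # second line: Units,"""Inches"""
    imported.append(rec)
print([r["Units"] for r in imported])  # ['Feet', 'Inches']
```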

Ctrl+Shift+R

To bring a user system data item expression into the current text box, place the cursor within a <<User System Data.xxx>> and press Ctrl+Shift+R (read user system data item). <<User System Data.xxx>> will be replaced by the expression stored in Data Design:User System Data under the xxx name. This capability saves you the trouble of moving to Data Design:User System Data to view or modify an existing item. Note that this capability is NOT available in the User System Data application.

Data Design: Project Database, Reports--Automatic Field Replace

When field names are changed, on saving the table the Field Replace dialog appears and allows you to replace the old names with the new names in your reports. This also occurs on renaming tables and on merging fields from another table within the same database. This works very well. A potential problem occurs when you need to move some fields, but not all, from one table to another.

Example: let's say you have the field Classification in the SAMPLE table and wish to move it to the LITHOLOGY table. To do so you would make LITHOLOGY the current table, invoke the "Table: Merge from current file..." menu selection, and select the SAMPLE table. All the non-duplicated SAMPLE fields will be merged into the LITHOLOGY table at the end of the field list. You would then delete all the merged fields except Classification and save. Note: all the fields will still exist in the SAMPLE table, so you will need to manually delete the Classification field from the SAMPLE table. The replace dialog would show the following replacement list:

Find                          Replace With
----------------------------  ----------------------------
SAMPLE.Depth                  LITHOLOGY.Depth
SAMPLE.Classification         LITHOLOGY.Classification

You certainly want the second replacement to occur, but you don't want the first to occur except in entities where SAMPLE.Classification is found, since you still have entities where other SAMPLE data are specified and where SAMPLE.Depth is still appropriate. In these cases the program checks that "SAMPLE.Classification" (or any non-key "Find" field) exists in the entity before making the key replacement.

Snaps Shortcut in gIDraw

To quickly access snaps in gIDraw applications, press Shift+Right Mouse Button. This brings up the snap menu at the cursor position.

ListBuildSepTrim

When putting a series of fields, functions, and/or expressions together to form a final text output, if the pieces are separated by the same delimiter, use the ListBuildSepTrim function. See Help for details.

IIf Expressions

You can replace occurrences of expressions like:

<<IIf(<<HasData(???)>>,a,b)>>

with just:

<<HasData(???,a,b)>>

This also applies to the IsNumeric, IsDate, and Like functions; that is, you don't need the surrounding "IIf" when using these functions if there is only one condition.

Preview Data from this Table Only

The Preview data from this table only property:

You will find this in the table properties dialog in Data Design:Project Database and in the page properties in Input. It has an effect only when the Preview Report Type!Name property is set. When you preview from Input, if the check box for the Preview data from this table only property is not marked (the default), all the data that applies to the report selected for preview will be shown. If the check box is marked, only the data from the current table will be shown, which makes for a much faster preview.

Reading Key Field

Using The Reading Key Field As An Arbitrary Counter:

When working with a data table having the PointID, Depth, Reading key set, it is sometimes useful to use the Reading field just to establish the record sequence. The actual data may be in no particular order, but the sequence may be significant, as for a graph. gINT supports this situation by allowing the Reading key field to be specified as a "Hidden counter". If this field property is checked (in the Data Design:Project Database application when focus is on the Reading field), the field will not be seen at Input time. But when the data are saved, gINT will write in the appropriate values to maintain the order you have set up on screen.
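The renumbering that happens on save can be sketched as follows (a hypothetical Python illustration, not gINT's implementation):

```python
# Hypothetical sketch of the "Hidden counter" behavior: on save, the Reading
# key is filled with values that preserve the on-screen record order.
def assign_reading_counter(records):
    """Number the records 1..n in their current (on-screen) order."""
    for n, rec in enumerate(records, start=1):
        rec["Reading"] = n
    return records

rows = [
    {"PointID": "B-1", "Depth": 3.0, "Value": 12},
    {"PointID": "B-1", "Depth": 1.5, "Value": 9},  # depths need not be ordered
]
assign_reading_counter(rows)
print([r["Reading"] for r in rows])  # [1, 2]
```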
Ctrl+Shift+W

To write an expression to a user system data item, highlight the expression within a report and then press Ctrl+Shift+W (write user system data item); a dialog will appear asking for the name of the item. Enter a name and click OK. The item will be written to Data Design:User System Data and the highlighted text will be replaced with <<User System Data.xxx>>, where xxx is the item name. Note that this capability is NOT available in the User System Data application.

Delimit Data

If you have data that you wish to output on a report with a leading label and a trailing unit, but you don't want to print the label or the unit if there is no data, use the DelimitData function instead of writing a complex expression. Search Help for "DelimitData".

Project Data Import and Export: SQL Function Usage

Notes on the Sql function when used within an import correspondence file. These are very advanced notes that cover rarely encountered situations. If you don't deal with data import or correspondence files, or have no background in the Structured Query Language (SQL), don't be intimidated by what follows; you will probably never need to understand it. For those pushing the envelope of what gINT can do, read on: you will find a number of obscure but useful items.

Let's say you are importing data into gINT from a text file that has a table that stores some kind of test reading information. Let's call it TEST_READINGS_OLD. The gINT database into which you are importing the data has a corresponding table called TEST_READINGS_NEW. Let's call the parent table of TEST_READINGS_NEW TEST_PARENT_NEW. The database to be imported also has a parent table for TEST_READINGS_OLD called TEST_PARENT_OLD.

In TEST_READINGS_OLD there is a field called DATA_OLD whose values are always the same for any particular PointID and Depth. So one of the goals of the import is to move these redundant data in DATA_OLD into one field in the TEST_PARENT_NEW record which we will call DATA_NEW.

Finally, we will call the hole identifier key in the data to be imported "HOLE_ID" and the depth key field "TEST_DEPTH".

To accomplish the above you would need to use the Sql function in the portion of the correspondence file dealing with the parent tables:

TEST_PARENT_NEW,TEST_PARENT_OLD
...,...
...,...
DATA_NEW,<<Sql(Select [TEST_READINGS_OLD].[DATA_OLD] From [TEST_READINGS_OLD] Where (Len([TEST_READINGS_OLD].[DATA_OLD]) > 0) And ([TEST_READINGS_OLD].[HOLE_ID] = '<<TEST_PARENT_OLD.HOLE_ID>>') And (CDbl([TEST_READINGS_OLD].[TEST_DEPTH]) = <<TEST_PARENT_OLD.TEST_DEPTH>>))>>

Some notes on this SQL:

a. The ACCESS SQL will return multiple records, one for each TEST_DEPTH in each HOLE_ID. The gINT Sql function will only return the first value, which is all we need.

b. The Len([TEST_READINGS_OLD].[DATA_OLD]) > 0 is a safety measure in case the first fields in the source file are empty.

c. The <<TEST_PARENT_OLD.HOLE_ID>> and <<TEST_PARENT_OLD.TEST_DEPTH>> values will be supplied by gINT before the ACCESS SQL is called.

d. When importing with a correspondence file, gINT builds a temporary ACCESS database where all the field types are text (or memo if the data are too long to fit in a text field or contain a carriage return). Even though the TEST_DEPTH field is numeric, the ACCESS engine sees it as a text field, and without the "CDbl" (convert to double-precision numeric value) function the SQL will fail with a Type Mismatch error.
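Note (d) is easy to reproduce outside gINT. The following Python sketch uses SQLite rather than ACCESS (so CAST plays the role of CDbl), but it shows the same idea: every column is stored as text, the empty first record is skipped by the length test, and only the first matching value is taken:

```python
import sqlite3

# Illustration of notes (a), (b), and (d) using SQLite in place of ACCESS.
# All columns are text, mirroring gINT's temporary import database.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE TEST_READINGS_OLD (HOLE_ID TEXT, TEST_DEPTH TEXT, DATA_OLD TEXT)"
)
con.executemany(
    "INSERT INTO TEST_READINGS_OLD VALUES (?, ?, ?)",
    [("B-1", "5.0", ""),         # empty DATA_OLD: excluded by the length test
     ("B-1", "5.0", "Granite"),
     ("B-1", "10.0", "Basalt")],
)

# Comparing the text value '5.0' with the number 5 fails; CAST (like CDbl
# in Access SQL) converts the text field so the numeric comparison works.
row = con.execute(
    "SELECT DATA_OLD FROM TEST_READINGS_OLD "
    "WHERE LENGTH(DATA_OLD) > 0 AND HOLE_ID = ? AND CAST(TEST_DEPTH AS REAL) = ?",
    ("B-1", 5),
).fetchone()
print(row[0])  # Granite -- the first non-empty value, as gINT's Sql function returns
```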