PlantSuite RDBMS to PI - OSIsoft
PlantSuite
RDBMS to PI Interface
via ODBC Driver
for Windows NT
(Intel, ALPHA)
Version 2.14
How to Contact Us
|Phone |(510) 297-5800 (main number) |
| |(510) 297-5828 (technical support) |
|Fax |(510) 357-8136 |
|Internet |techsupport@ |
|World Wide Web | |
|Bulletin Board |(510) 895-9423 |
| |Telebit WorldBlazer modem (Hayes, MNP, or PEP compatible) |
| |8 data bits, 1 stop bit, no parity, up to 14400 bps download |
| |protocols: Xmodem, Ymodem, Zmodem, Kermit |
|Mail |OSI Software, Inc. | |
| |P.O. Box 727 | |
| |San Leandro, CA 94577-0427 | |
| |USA | |
| | | |
| |OSI Software GmbH |OSI Software, Ltd |
| |Hauptstraße 30 |P. O. Box 8256 |
| |D-63674 Altenstadt 1 |Level One, 6-8 Nugent Street |
| |Deutschland |Auckland 3, New Zealand |
Unpublished -- rights reserved under the copyright laws of the United States.
RESTRICTED RIGHTS LEGEND
Use, duplication, or disclosure by the Government is subject to restrictions as set forth in subparagraph (c)(1)(ii)
of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013
Trademark statement—PI is a registered trademark of OSI Software, Inc. Microsoft Windows, Microsoft Windows for Workgroups, and Microsoft NT are registered trademarks of Microsoft Corporation. Solaris is a registered trademark of Sun Microsystems. HP-UX is a registered trademark of Hewlett Packard Corp. IBM AIX RS/6000 is a registered trademark of the IBM Corporation. DUX, DEC VAX and DEC Alpha are registered trademarks of the Digital Equipment Corporation.
PSrdbms_2.14.doc
© 2000 OSI Software, Inc. All rights reserved
777 Davis Street, Suite 250, San Leandro, CA 94577
Table of Contents
Introduction
Functionality
Input of Data from a Relational Database into PI
Input Query strategies
Output of Data from PI into a Relational Database
SQL Statements
Language Requirements
SQL Placeholders
Mapping of Placeholder Data Types to SQL Data Types
Timestamp Format
Inputs to PI via the SELECT Clause
Data acquisition strategies
SQL SELECT Command for Retrieving Data for Single PI Tag
SQL SELECT Command for Retrieving Data for Tag Groups
SQL SELECT Command for Tag Distribution via Tagname Key
Event based Input
Multistatement SQL Clause
Stored Procedures
Outputs from PI via Update and Insert Clause
Data Output for DIGITAL Points
Data Output for INTEGER Points
Data Output for REAL and String Points
Global Variables
Data Mapping between PI and RDBMS
Mapping of SELECT Data Types to PI Point Types – Data Input
Evaluation of STATUS Field for All PI Data Types - Input from RDBMS
Storage of PI POINT Database Changes
PI Batch Database Output
Database specifics
Oracle 7.0
Oracle RDB
Oracle 8.0
dBase III, dBase IV
MS Access
MS SQL Server 6.5
MS SQL Server 7.0
PI Point Configuration
Performance Point
IO Rate Tags
Interface Files
Installation
Updating the Interface
Startup
Command line switches for the RDBMS to PI interface
Detailed explanation for command line parameters
Startup as console application
Startup as Windows NT Service
PILOGIN.INI
Shutdown
Error and information messages
Hints for PI System Manager
Interface Test Environment
More Examples
Insert or Update
Limitations and future enhancements
Introduction
The Interface allows bi-directional transfer of data between the PI System and any relational database that supports ODBC (Open Database Connectivity) drivers. The interface runs under Windows NT Server or Workstation on the Intel or ALPHA platform. The interface machine is either a PI3 Server Node or a PI-API Node and can connect to any PI Node in the network.
The PI Interface makes internal use of the PI-API-NT in order to keep a standard way of interfacing from a client node to the PI Server Node.
This version of the Interface supports only one ODBC connection per running copy (multiple instances of the interface can be run, however). SQL statements are provided by the end user either as ASCII files or as definitions in the Extended Descriptor of a PI Tag. An SQL statement can serve as a source of data for one or more PI Tags defined in the PI database.
The Interface generates exception reports for all associated tags.
The following Relational Databases were explicitly tested with the interface:
Oracle RDB, Oracle, MS SQL Server, DB2, MS Access, dBase
For RDB version details, refer to the section “Interface Test Environment”.
Databases and ODBC drivers not yet tested with our interface may require additional onsite testing, which will translate to additional charges. Please refer to the section entitled “Interface Test Environment” for a list of databases and ODBC drivers that the interface is known to work with. Even if your database and/or ODBC driver is not shown, the interface may still work. However, if you experience problems, the interface will need to be enhanced to support your environment. Please contact your OSI sales representative.
|Supported Features |
|Order Code |PI-IN-OS-RELDB-NT |
| |PI-IN-OS-RELDB-NA |
|Interface Platforms supported |Windows NT 4 or higher |
| |(Intel, ALPHA) |
|Vendor Software Required |Yes |
|Vendor Software minimum requirements |ODBC 3.x Driver Manager |
| |Level 1 ODBC API |
| |MINIMUM Grammar |
|Sign up for Updates |Yes |
|Exception Reporting |Yes |
|PI API Node Support |Yes |
|UNIINT |No |
|Input |Scan based, Event Trigger |
|Outputs |Event based |
|Text Transfer |Yes |
|Configuration Data |Output |
|Multiple Links |Yes |
|Failover |No |
|History Recovery |Yes |
|Number of Points |Unlimited |
Functionality
The Interface runs on the Windows NT operating system as a Console Application or as a Service. It uses the standard PI-API-NT to connect to the PI Server node. The Interface computer must have the related ODBC driver installed and configured to be able to connect to the specified database. A DSN (Data Source Name) must be configured via the ODBC Administrator; the DSN name is passed in the startup arguments of the Interface. In case of an ODBC connection failure, the Interface will try to reconnect to the RDBMS (see the specific note in the Management section).
SQL queries are provided by the user either as ASCII files, which are read into memory after startup of the Interface, or as a definition in the Extended Descriptor of a tag. SQL statements are executed according to the scan class type (cyclic or event driven). When data is read from the relational database, the Interface tries to convert the result set of the query into a value, status and timestamp appropriate for the related PI Tag. The opposite direction, writing data out of the PI system, works accordingly.
The current version of the Interface supports the following general features:
❑ Support of Timestamp, Value, Status and String fields in RDB Tables
❑ Support of string tags
❑ Query data (read) for single tag, one value per scan, with or without timestamp from RDB
❑ Query data (read) for single tag, time series per scan
❑ Query data (read) for multiple tags (Tag Group), one value per tag per scan, with or without timestamp from RDB
❑ Query data (read) for multiple tags (Tag Group), time series per tag per scan
❑ Query data (read) via Tagname Key (Tag Distribution), time series per tag per scan
❑ Scan or Event based SELECT queries
❑ Event based UPDATE, DELETE and INSERT queries
❑ Support of multiple statements per query
❑ Support of stored procedures
❑ Support of ‘runtime placeholders’ Timestamp (Scan Time, Snapshot Time,...), Value, Status
❑ Support of all classic ‘point attribute’ placeholders (except userint and userreal)
❑ Support of placeholders for Value, Status, Timestamp of foreign Tag (Tag outside interface point source)
❑ Support of ‘batch’ placeholders
❑ Storage of point attribute (type classic) changes in RDB
❑ Recovery options for output points
Input of Data from a Relational Database into PI
The SQL statement can be considered a source of both timestamp and value for the PI Point. Each SQL statement retrieves data from the RDB, and the Interface sends the results to the particular point in the PI database. There can be multiple statements in one query, and different strategies can be used for transforming the result set.
The Interface can also define a group of tags that share one SQL statement; the result set of the query is then distributed to all tags in the group (see chapter “SQL SELECT Command for Retrieving Data for Tag Groups”). This reduces the number of ODBC calls and thereby increases performance.
Input Query strategies
Query for single tag – one value per scan
Some DCS systems keep current values in a relational table. Scan based queries make it possible to emulate the same behavior as a scan based DCS interface. An example is getting data from an ABB IMS station.
The disadvantages of this kind of data retrieval are low performance and accuracy limited to the scan frequency.
Example 1.1 – query single tag:
|SQL Statement |
|(file PI_REAL1.SQL) |
|SELECT PI_TIMESTAMP,PI_VALUE,PI_STATUS FROM PI_REAL1 WHERE PI_KEY_VALUE = ?; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|P1=”Key_1234” |1 |0 |0 |1 |0 |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_REAL1.SQL |Float32 | | | | |
| | | | | | |
|RDB Table Design |
|PI_TIMESTAMP |PI_VALUE |PI_STATUS |PI_KEY_VALUE |
|Datetime |Real |Smallint |Varchar(50) |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Date/Time |Number-Single Precision |Number-Whole Number |Text(50) |
|(MS Access) |(MS Access) |(MS Access) |(MS Access) |
Query for single tag – multiple values per scan
A good strategy for high data throughput is to use low scanning rates (e.g. once per minute) instead of doing one query every second.
This assumes that we are not scanning updated records (values that went into the RDB via UPDATE statements and that overwrite the current value). Instead, we assume that we can query the RDB for a time period (e.g. 1 minute) and expect new data (stored by INSERT statements) to appear there. In other words, getting the same amount of data in one call is faster than getting it in many calls.
A typical high throughput query is given below. In this example we get all data since ‘snapshot’ time.
Example 1.2 – query data array for single tag:
|SQL Statement |
|(file PI_STRING1.SQL) |
|SELECT PI_TIMESTAMP,PI_VALUE,0 FROM PI_STRING1 WHERE PI_TIMESTAMP > ?; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|P1=TS |1 |1 |0 |1 |0 |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_STRING1.SQL |String | | | | |
| | | | | | |
|RDB Table Design |
|PI_TIMESTAMP |PI_VALUE |PI_STATUS |
|Datetime |Varchar(1000) (MS |Smallint |
|(MS SQL Server) |SQL Server) |(MS SQL Server) |
|Date/Time |Text(255) |Number-Whole Number (MS Access)|
|(MS Access) |(MS Access) | |
A typical low throughput query is:
SELECT TIMESTAMP, VALUE, STATUS FROM DATA WHERE NAME= ?;
P1=AT.TAG, Location2=0
Here we get only the current value. The interface then works very similarly to an online DCS interface.
Note: SQL syntax and parameter descriptions are given later in this manual.
Query data in Tag Groups
Another way of improving performance compared to single value reads is grouping tags together. This is possible when the data is related in some way, e.g. when transferring LAB data where all values come from the same sample and share the same timestamp, or when the RDB table is structured so that multiple values are stored in the same record (the table has multiple value fields).
Querying Tag Groups can also be combined with getting a complete time series per scan. The limitation is that a timestamp field must be present and only one timestamp is available per data record.
A group is formed by all tags that use the same Instrumenttag point attribute.
Example 1.3 – three points used are of PI data type Int32:
|SQL Statement |
|(file PI_INT_GROUP1.SQL) |
|SELECT PI_TIMESTAMP, PI_VALUE1, PI_STATUS1 ,PI_VALUE2,PI_STATUS2, PI_VALUE3,0 FROM PI_INT_GROUP1 WHERE PI_TIMESTAMP >|
|?; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
| |All points |All points | |All points |All points |
|P1=TS |1 |1 |Target point1 2 |1 |0 |
| | | |Target point2 4 | | |
| | | |Target point3 6 | | |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_INT_ |Int32 | | | | |
|GROUP1.SQL | | | | | |
| | | | | | |
| RDB Table Design |
|PI_TIMESTAMP |PI_VALUEn |PI_STATUSn |
|Datetime |Smallint |Smallint |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Date/Time |Number-Whole Number (MS Access) |Number-Whole Number (MS |
|(MS Access) | |Access) |
Tag Distribution
Compared to Tag Groups, where grouping happens in the form of columns, Tag Distribution means multiple records per query, where each record can contain data for a different tag. To achieve this, the query must provide a field that contains the tagname or an alias tagname.
This option is very efficient for getting exception-based data from an RDB table where it is unknown how much data will arrive per single tag; only an average number of data records per scan is known.
Only one ‘distributor’ point carries the SQL statement. This point does not receive any actual data; instead, the number of rows successfully delivered to the corresponding PI tags is stored in the distributor tag.
The target points are selected either by Tagname (the value retrieved in the PI_NAME column must match the Tagname of the point) or by an /ALIAS=Some_key definition found in the Extended Descriptor of the particular point.
Example 1.4 – query database for tags, matching a wildcard:
|SQL Statement |
|(file PI_REAL_DISTR1.SQL) |
|SELECT PI_TIMESTAMP, PI_NAME, PI_VALUE, PI_STATUS FROM PI_REAL_DISTR1 WHERE PI_NAME LIKE ‘Key_%’ ; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|‘Distributor’ |All points |All points | |All points |All points |
|P1=TS |1 |0 |‘Distributor’ -1 |1 |0 |
| | | |‘Target points’ 0 | | |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|‘Distributor’ |All points | | | | |
|PI_REAL_ |Float32 | | | | |
|DISTR1.SQL | | | | | |
| | | | | | |
| RDB Table Design |
|PI_TIMESTAMP |PI_VALUE |PI_STATUS |PI_NAME |
|Datetime |Real |Varchar(12) |Varchar(80) (MS|
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |SQL Server) |
|Date/Time |Number-Single Prec.(MS |Text(12) |Text(80) (MS |
|(MS Access) |Access) |(MS Access) |Access) |
Output of Data from PI into a Relational Database
Moving data from PI to a relational database is accomplished similarly to input. The relational database can receive the timestamp, value and status of a PI point, as well as the actual values of all attributes addressable by placeholders (see chapter “SQL Placeholders”).
For copying new data to the relational database, standard event based output points are supported. The source tag provides the actual data, and the output point itself gets a copy of the exported data to verify the output operation.
If the output operation fails (ODBC SQLExecute() fails), the output point gets a status of Bad Output.
Note: Except for Batch Database Output, all output tags are event based. If scan based output is required, an equation tag can provide frequent events. Another alternative is to use a stored procedure together with a scan based input tag.
Example 2.1 – insert 2 different sinusoid values into table:
|SQL Statement |
|(file PI_SIN_VALUES_OUT.SQL) |
|INSERT INTO PI_SIN_VALUES_OUT (PI_NAME1, PI_TIMESTAMP1, PI_VALUE1, PI_STATUS1, PI_NAME2,PI_VALUE2,PI_STATUS2) VALUES |
|(?,?,?,?,?,?,?); |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|/EXD=…path…\ |1 |0 |0 |0 |0 |
|pi_sin_values_out.plh | | | | | |
|Content of the above stated file:| | | | | |
|P1=AT.TAG P2=TS P3=VL P4=SS_I | | | | | |
|P5='SINUSOIDU'/AT.TAG | | | | | |
|P6='SINUSOIDU'/VL | | | | | |
|P7='SINUSOIDU'/SS_I | | | | | |
| | | | | | |
|Instrumenttag |Pointtype |Sourcetag | | | |
| |All points | | | | |
|PI_SIN_VALUES_ |Float16 |SINUSOID | | | |
|OUT.SQL | | | | | |
| | | | | | |
|RDB Table Design |
|PI_TIMESTAMPn |PI_VALUEn |PI_STATUSn |PI_NAMEn |
|Datetime (MS |Real |Smallint (MS |Varchar(80) |
|SQL Server) |(MS SQL Server) |SQL Server) |(MS SQL Server) |
|Date/Time |Number-Single Precision (MS |Number Whole Number (MS |Text(80) (MS |
|(MS Access) |Access) |Access) |Access) |
SQL Statements
SQL statements can be defined in ASCII files placed in the directory specified by the /SQL=path keyword in the start-up file. The names of these files are arbitrary (the recommended form is ‘filename.SQL’). The name of the ASCII file associated with a particular point is stated in the Instrumenttag attribute. When the Instrumenttag field is blank, the Interface looks for the SQL statement definition in the Extended Descriptor, denoted by the keyword /SQL = “valid SQL statement;“. Each file can contain a sequence of SQL commands separated by the ‘;’ separator; these statements are examined consecutively.
Here is an example of such a definition:
/SQL=“SELECT PI_TIMESTAMP,PI_VALUE,0 FROM TABLE1 WHERE PI_TIMESTAMP>?“ P1=ST
Note: The entire statement definition text has to be double-quoted (“ ”).
Note: The Extended Descriptor attribute is limited to 80 characters. Therefore only short statements can be given by this method.
Language Requirements
The conformance level of the ODBC API in use is checked when the Interface starts up.
The Interface requires the ODBC driver to be at least of Level 1 API conformance, and the SQL statements used should be of MINIMUM Grammar conformance. The conformance levels (both API and Grammar) are written into the log-file. If the API conformance of an ODBC driver is less than Level 1, the Interface stops.
The Interface supports the following Data Manipulation Language commands that can be used in SQL queries:
SELECT [ALL,DISTINCT] select-list
FROM table-reference-list
[WHERE search-condition]
[order-by-clause]
INSERT INTO table-name [(column-identifier[,column-identifier]...)] VALUES (insert-value[,insert-value]...)
UPDATE table-name SET column-identifier ={expression} [,column-identifier ={expression}]...[WHERE search-condition]
DELETE FROM table-name [WHERE search-condition]
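The four supported DML commands can be sketched end-to-end; the example below uses Python's built-in sqlite3 module as a stand-in for an ODBC connection (it uses the same positional `?` parameter markers), with a hypothetical table and column names, not the schema of any example in this manual.

```python
import sqlite3

# In-memory database standing in for an ODBC data source (hypothetical schema).
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE DATA (NAME TEXT, TS TEXT, VALUE REAL, STATUS INTEGER)")

# INSERT INTO table-name (column-identifier...) VALUES (insert-value...)
cur.execute("INSERT INTO DATA (NAME, TS, VALUE, STATUS) VALUES (?,?,?,?)",
            ("TAG1", "2000-01-01 12:00:00", 42.5, 0))

# UPDATE table-name SET column-identifier = expression WHERE search-condition
cur.execute("UPDATE DATA SET VALUE = ? WHERE NAME = ?", (43.0, "TAG1"))

# SELECT select-list FROM table-reference-list WHERE search-condition
cur.execute("SELECT TS, VALUE, STATUS FROM DATA WHERE NAME = ?", ("TAG1",))
row = cur.fetchone()  # → ('2000-01-01 12:00:00', 43.0, 0)

# DELETE FROM table-name WHERE search-condition
cur.execute("DELETE FROM DATA WHERE NAME = ?", ("TAG1",))
```

Each statement stays within the MINIMUM Grammar subset listed above, which is all the interface requires of the ODBC driver.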
Verification of SQL statements against the relational database is expected to be done by the user. If the syntax of an SQL statement is not correct, the Interface writes an error message and the tag (and each related tag, e.g. tags which share the same SQL command) is excluded from the interface list.
The Interface allows testing a particular tag against the relational database by using the /TEST=Tagname keyword in the start-up file (see chapter “Detailed explanation for command line parameters”).
Note: It is highly recommended to test a new query with MS Query first. The query produced by MS Query can usually be stored directly in the query file required by the interface. Current versions of MS Query also support placeholders (‘?’), so even complex queries can be produced and tested graphically before being used in the RDBMS interface.
SQL Placeholders
The concept of placeholders allows passing runtime values, timestamps of various formats and other configurable parameters (point attributes defined in the PIPOINT database) into the SQL statement at the places marked by ‘?’. Question marks are allowed only where they are syntactically correct, e.g. in the search condition of an SQL SELECT command, in the argument list of stored procedures, etc. Placeholders are defined in the tag’s Extended Descriptor and must be separated by spaces. The assignment of runtime values (retrieved from the PI system) to the SQL statement is done by position: the first runtime placeholder definition refers to the first question mark found in the SQL statement, and so on. If more space (more than 80 characters) is required for the placeholder definitions, the keyword “/EXD=filename” can be used. This construct extends the Extended Descriptor with the contents of the specified file.
The list and syntax of placeholder definitions is as follows:
|Placeholder Keywords for |Meaning / Substitution in SQL query |Remark |
|Extended Descriptor | | |
|Snapshot placeholders | | |
|Pn=TS |Timestamp taken from the PI Snapshot (detailed | |
| |description see chapter “Timestamp Format ”) | |
|Pn=LST |Timestamp, Last Scan-Time | |
| |(Scan Time = start time of this scan class) | |
|Pn=ST |Input: Timestamp = Start of a new scan for all | |
| |members of this scan class | |
| |Output: Timestamp = Time of the output event | |
|Pn=LET |Timestamp, Last Execution Time, | |
| |Execution Time = exact time when query finished | |
| | | |
| |This time is different to LST depending on how much | |
| |time passed between start of scan for the class and | |
| |execution of this particular tag. | |
| |Since queries can be very time consuming, this time | |
| |difference should not be underestimated. | |
|Pn=VL |Current value | |
|Pn=SS_I |Current status in the form of integer representation| |
|Pn=SS_C |Current status in the form of digital code string |Max. 12 characters |
|Pn=’tagname’/VL |Current value of the tag ‘tagname’ | |
|Pn=’tagname’/SS_I |Current status of the tag ‘tagname’ | |
|Pn=’tagname’/SS_C | |Max. 12 characters |
|Pn=’tagname’/TS |Timestamp taken from the PI Snapshot of the tag | |
| |‘tagname’ | |
|PI point database placeholders | | |
|Pn=AT.TAG |Tag name of the current tag |Max. 80 characters |
|Pn=AT.DESCRIPTOR |Descriptor of the current tag |Max. 26 characters |
|Pn=AT.EXDESC |Extended Descriptor of the current tag |Max. 80 characters |
|Pn=AT.ENGUNITS |Engineering units for the current tag |Max. 12 characters |
|Pn=AT.ZERO |Zero of the current tag | |
|Pn=AT.SPAN |Span of the current tag | |
|Pn=AT.TYPICALVALUE |Typical value of the current tag | |
|Pn=AT.DIGSTARTCODE |Digital start code of the current tag | |
|Pn=AT.DIGNUMBER |Number of digital states of the current tag | |
|Pn=AT.POINTTYPE |Point type of the current tag |Max. 1 character |
|Pn=AT.POINTSOURCE |Point source of the current tag |Max. 1 character |
|Pn=AT.LOCATION1 |Location1 of the current tag | |
|Pn=AT.LOCATION2 |Location2 of the current tag | |
|Pn=AT.LOCATION3 |Location3 of the current tag | |
|Pn=AT.LOCATION4 |Location4 of the current tag | |
|Pn=AT.LOCATION5 |Location5 of the current tag | |
|Pn=AT.SQUAREROOT |Square root of the current tag | |
|Pn=AT.SCAN |Scan flag of the current tag | |
|Pn=AT.EXCDEV |Exception deviation of the current tag | |
|Pn=AT.EXCMIN |Exception minimum time of the current tag | |
|Pn=AT.EXCMAX |Exception maximum time of the current tag | |
|Pn=AT.ARCHIVING |Archiving flag of the current tag | |
|Pn=AT.COMPRESSING |Compression flag of the current tag | |
|Pn=AT.FILTERCODE |Filter code of the current tag | |
|Pn=AT.RES |Resolution code of the current tag | |
|Pn=AT.COMPDEV |Compression deviation of the current tag | |
|Pn=AT.COMPMIN |Compression minimum time of the current tag | |
|Pn=AT.COMPMAX |Compression maximum time of the current tag | |
|Pn=AT.TOTALCODE |Total code of the current tag | |
|Pn=AT.CONVERS |Conversion factor of the current tag | |
|Pn=AT.CREATIONDATE |Creation date of the current tag | |
|Pn=AT.CHANGEDATE |Change date of the current tag | |
|Pn=AT.CREATOR |Creator of the current tag |Max. 12 characters |
|Pn=AT.CHANGER |Changer of the current tag |Max. 12 characters |
|Pn=AT.RECORDTYPE |Record type of the current tag | |
|Pn=AT.POINTNUMBER |Point ID of the current tag | |
|Pn=AT.DISPLAYDIGITS |Display digits after decimal point of the current tag| |
|Pn=AT.SOURCETAG |Source tag of the current tag |Max. 80 characters |
|Pn=AT.INSTRUMENTTAG |Instrument tag of the current tag |Max.32 characters |
|PI point change placeholders | | |
|Pn=AT.ATTRIBUTE |Changed attribute |Max. 32 characters |
|Pn=AT.NEWVALUE |New value |Max. 80 characters |
|Pn=AT.OLDVALUE |Old value |Max. 80 characters |
|PI batch database placeholders | | |
|Pn=BA.UNIT |Batch unit |Max. 80 characters |
|Pn=BA.BAID |Batch identification |Max. 80 characters |
|Pn=BA.PRID |Batch product identification |Max. 80 characters |
|Pn=BA.START |Batch start time | |
|Pn=BA.END |Batch end time | |
|Miscellaneous | | |
|Pn=”any-string” |Double quoted string (without white spaces) |No limit |
Note: Pn denotes Placeholder number (n) and will be used as P1 P2 P3 …
Example for Extended Descriptor referring to an SQL statement using 3 placeholders:
P1=TS P2=SS_I P3=AT.TAG
If the same placeholder definition is used multiple times in a query, it is possible to shorten the definition string, using a back reference.
Example: P1=TS P2=VL P3=P1
Note: Placeholders like SS_I or SS_C can also be used in SELECT statements, e.g. to serve as index. One should know that for tags of type real or integer containing valid data (value is not in error and therefore is not a digital state), SS_C would use the Digital State at position 0 of the System Digital State Table. See chapter “Evaluation of STATUS Field for All PI Data Types - Input from RDBMS”.
The default value for PI3 at position 0 is “??????????”
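The positional mapping of Pn definitions to the question marks can be sketched with Python's built-in sqlite3 module, which uses the same positional `?` markers as the ODBC statements in this manual (the table, columns and data below are hypothetical):

```python
import sqlite3

# Sketch of positional placeholder binding: the n-th Pn definition supplies
# the n-th '?' in the statement, exactly by position, never by name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (TS TEXT, VALUE REAL, STATUS INTEGER)")
con.execute("INSERT INTO T VALUES ('2000-01-01 08:00:00', 1.0, 0)")

# Statement with two placeholders; think "P1=TS P2=SS_I" in the
# Extended Descriptor.
stmt = "SELECT TS, VALUE FROM T WHERE TS > ? AND STATUS = ?"
p1 = "2000-01-01 00:00:00"   # P1=TS   (snapshot timestamp)
p2 = 0                       # P2=SS_I (status as integer)
rows = con.execute(stmt, (p1, p2)).fetchall()
# → [('2000-01-01 08:00:00', 1.0)]
```

A back reference such as P3=P1 simply means the same value is bound a second time at the position of the third question mark.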
Mapping of Placeholder Data Types to SQL Data Types
Placeholders also represent values which get stored into database fields. Those fields must be of certain data types. To assist database administrators in setting up correct tables, the list below shows how the interface maps placeholders to SQL data types.
When testing against different databases and ODBC drivers, we found that it is helpful to automatically support more than one data-type. For example integer fields in dBase appear as data type SQL_DOUBLE while most of the databases use SQL_INTEGER.
|Placeholder and PI Data Type |SQL Data Type |
|Snapshot placeholders | |
|VL for real tags |SQL_REAL |
| |If error → SQL_FLOAT |
|VL for integer tags |SQL_INTEGER |
| |If error → SQL_FLOAT |
|VL for digital tags |SQL_VARCHAR |
|VL for string tags |SQL_VARCHAR |
|SS_I for all PI data types points |SQL_INTEGER |
| |If error → SQL_FLOAT |
|SS_C for all PI data types points |SQL_VARCHAR |
|TS,ST,LET,LST for all PI data types points |SQL_TIMESTAMP |
|PI point database placeholders | |
|AT.TAG, AT.DESCRIPTOR, AT.EXDESC, AT.ENGUNITS, |SQL_VARCHAR |
|AT.POINTTYPE , AT.POINTSOURCE, AT.CREATOR , AT.CHANGER, | |
|AT.SOURCETAG, AT.INSTRUMENTTAG, AT.ATTRIBUTE, AT.NEWVALUE,| |
|AT.OLDVALUE, “any_string” | |
|AT_DIGSTARTCODE, AT_DIGNUMBER, AT_LOCATION1, AT_LOCATION2,|SQL_INTEGER |
|AT_LOCATION3, AT_LOCATION4, AT_LOCATION5, AT_SQUAREROOT, |If error → SQL_FLOAT |
|AT_SCAN, AT_EXCMIN, AT_EXCMAX, AT_ARCHIVING, |If error → SQL_DOUBLE |
|AT_COMPRESSING, AT_FILTERCODE, AT_RES, AT_COMPMIN, | |
|AT_COMPMAX, AT_TOTALCODE, AT_RECORDTYPE, AT_POINTNUMBER, | |
|AT_DISPLAYDIGITS, | |
|AT_TYPICALVALUE, AT_ZERO, AT_SPAN, AT_EXCDEV, AT_COMPDEV, |SQL_REAL |
|AT_CONVERS |If error → SQL_FLOAT |
|PI batch database placeholders | |
|BA.UNIT |SQL_VARCHAR |
|BA.BAID, BA.PRID | |
|BA.START, BA.END |SQL_TIMESTAMP |
If the ODBC driver used supports Level 2 Conformance (actually the ODBC driver must support SQLDescribeParam(), which is a Level 2 function), the interface can query the ODBC driver for the SQL Data Type. In this case a wider range of conversions can be supported.
Timestamp Format
The time format used in various databases is not the same. ODBC drivers perform the corresponding data type transformation, so the only mandatory rule is to use the appropriate timestamp format for the particular relational database.
Note: The interface expects the ‘full timestamp’ (date+time) to be read from the relational database.
The interface offers these time references to populate placeholders:
|Keyword |Time used |
|Input: | |
|TS |Timestamp from PI Snapshot of a particular point. |
| |(Due to the Exception Reporting mechanism in the interface it does not always correspond to |
| |the visible PI Snapshot) |
| | |
| |E.g. SELECT … WHERE PI_TIMESTAMP > ?; P1=TS : this allows scanning the relational database |
| |only for newly arrived values (rows) independent of the scan frequency. |
|LST |Last Scan-Time |
| |Scan Time is the time a Scan Class starts a new scanning period |
| | |
| |This can be used to limit the amount of data. |
| |Events are skipped if older timestamped data arrive in a RDB table. |
|ST |Time immediately before query execution. |
| |A good example is to transfer future data from RDB. Only data before ST can be transferred to |
| |PI. |
|LET |Time when the previous query was finished (Last-Execution-Time) |
| | |
|Output: | |
|TS |Timestamp from current PI Snapshot of source tag |
|ST |At interface startup - ST=Snapshot Time |
| |From that time on - ST=event time |
When interacting with a relational database, some SQL commands (especially SELECT) may take a long time to execute, so data transfer to PI can be delayed. When a scan class is scheduled for a particular time and the real start time is delayed by more than 2 seconds (e.g. because a long SQL operation preceded it), the whole scan is skipped and an error message is written into the log-file.
The Interface offers the ‘scheduled time’ (ST), which is used when a relational database does not have a timestamp available and the program should supply an appropriate time to send to PI.
Last Execution Time (LET) and Last Scan Time (LST) allow only data newer than the last scan to be moved. On Interface start-up, both timestamps are preset with the PI Snapshot time.
Note: For Input Tags, TS is taken from the internal Interface snapshot. This is not the same as the PI Snapshot, since Exception Reporting runs on the interface side. If, for example, the value is stable for a long time, the PI Snapshot will not be updated with scanned data as long as no exception occurs.
Using the Interface’s internal snapshot timestamp rather than the PI Server snapshot timestamp avoids querying for the same data multiple times (as would happen with an unchanged PI Server snapshot timestamp) in queries of the type
“SELECT … WHERE Timestamp > ?” with P1=TS.
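This incremental polling pattern can be sketched with Python's built-in sqlite3 module standing in for the ODBC connection (the table, columns and data are hypothetical, and the scan function is an illustration, not interface code):

```python
import sqlite3

# Sketch of the internal-snapshot idea: remember the newest timestamp already
# fetched and bind it to the '?' in "WHERE TS > ?", so unchanged data is
# never re-read on the next scan.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (TS TEXT, VALUE REAL)")
con.executemany("INSERT INTO T VALUES (?,?)",
                [("2000-01-01 08:00:00", 1.0), ("2000-01-01 08:01:00", 2.0)])

last_ts = "1970-01-01 00:00:00"  # interface-side snapshot, preset at startup

def scan(con, last_ts):
    """One scan: fetch rows newer than last_ts, then advance the snapshot."""
    rows = con.execute(
        "SELECT TS, VALUE FROM T WHERE TS > ? ORDER BY TS",
        (last_ts,)).fetchall()
    if rows:
        last_ts = rows[-1][0]  # newest timestamp seen so far
    return rows, last_ts

rows1, last_ts = scan(con, last_ts)  # first scan returns both rows
rows2, last_ts = scan(con, last_ts)  # nothing new → empty result set
```

Because the snapshot advances only when new rows arrive, a stable source produces empty result sets instead of repeated copies of the same data.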
Note: All PI related timestamps are synchronized to the PI Server time.
Inputs to PI via the SELECT Clause
To pass values into PI, a SELECT statement must be defined. Data obtained by ODBC API calls are distributed to the relevant PI points according to the specified ‘distribution strategy’ (see chapter “Data acquisition strategies”).
The SELECT statement can return NULL values for any column. The interface uses the following rules to handle this:
1. If the timestamp is NULL, the scheduled time (ST) is used instead.
2. If the status is NULL and the returned value is not NULL, the value is valid.
3. If both value and status are NULL, the ‘No Data’ digital state is used to convey that the expected value is absent.
The Interface accepts integer and string data types for digital tags and status values. When an integer is provided, the interpretation is given by the PI-API:
1. If negative, it is an absolute digital state value.
2. If zero or positive, it is an offset into the defined range of digital states.
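The NULL-handling rules can be sketched as a small helper (a hypothetical illustration, not interface code; a non-NULL status is simply passed through here, whereas the interface evaluates it as described in chapter “Evaluation of STATUS Field for All PI Data Types - Input from RDBMS”):

```python
# Digital state used when value and status are both NULL (rule 3).
NO_DATA = "No Data"

def interpret_row(timestamp, value, status, scheduled_time):
    """Map one fetched (timestamp, value, status) row to a PI event."""
    ts = timestamp if timestamp is not None else scheduled_time  # rule 1
    if value is None and status is None:
        return ts, NO_DATA       # rule 3: both NULL → 'No Data' state
    if status is None:
        return ts, value         # rule 2: value alone is valid
    return ts, status            # non-NULL status passed through (simplified)

interpret_row(None, 1.5, None, "2000-01-01 12:00:00")
# → ('2000-01-01 12:00:00', 1.5)
```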
Example 3.1 – query integer data for Digital PI tag:
|SQL Statement |
|(file PI_DIGITAL1.SQL) |
|SELECT PI_TIMESTAMP,PI_VALUE_INT,PI_STATUS_INT FROM PI_DIGITAL1 WHERE PI_KEY_VALUE = ?; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|P1=”Key_1234” |1 |0 |0 |1 |0 |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_DIGITAL1.SQL |Digital | | | | |
| | | | | | |
|RDB Table Design |
|PI_TIMESTAMP |PI_VALUE_INT |PI_STATUS_INT |PI_KEY_VALUE |
|Datetime |Smallint |Smallint |Varchar(50) |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Date/Time |Number-Whole Number (MS |Number-Whole Number (MS |Text(50) |
|(MS Access) |Access) |Access) |(MS Access) |
If the value or status for a digital tag is retrieved in string form, the interface checks the Digital State Table and performs the substitution.
Example 3.2 – query string data for Digital PI tag:
|SQL Statement |
|(file PI_DIGITAL2.SQL) |
|SELECT PI_TIMESTAMP,PI_VALUE_STR,PI_STATUS_STR FROM PI_DIGITAL2 WHERE PI_KEY_VALUE = ?; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|P1=”Key_1234” |1 |0 |0 |1 |1 |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_DIGITAL2.SQL |Digital | | | | |
| | | | | | |
|RDB Table Design |
|PI_TIMESTAMP |PI_VALUE_STR |PI_STATUS_STR |PI_KEY_VALUE |
|Datetime |Varchar (12) |Varchar (12) |Varchar(50) |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Date/Time |Text(12) |Text(12) |Text(50) |
|(MS Access) |(MS Access) |(MS Access) |(MS Access) |
The SELECT statement usually returns a set of data records that fulfil the desired search condition. When the relational database also provides a timestamp, it is possible to set up a query like this:
SELECT TIMESTAMP, VALUE, STATUS FROM TABLE WHERE TIMESTAMP > ?; P1=LST
Normally we put all returned records into PI. Sometimes however, it is desirable to get only the first record from the result set and write it into PI. The distinction is made by the Location2 parameter.
|Location2 |Bulk option |
|0 |Only the first record is valid |
|1 |The interface tries to put all the returned data into PI |
Note: It is important to provide records with timestamps sorted in ascending order. Only then can the PI System apply Exception Reporting and Compression.
Use SELECT … ORDER BY TIMESTAMP; if necessary.
An example for Location2=0 might be requesting the maximum or minimum of a list of values with timestamps between two scan periods. The way to achieve this (in addition to Location2=0) is to sort the SELECTed data in descending or ascending order.
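The effect of Location2 on a fetched result set can be illustrated as follows (a sketch with made-up data; `records_to_store` is an invented helper, not an interface routine):

```python
# Location2 = 0: only the first fetched record is used;
# Location2 = 1: every record is sent to PI (bulk read).
rows = [("08:02", 7.5), ("08:01", 3.2), ("08:00", 1.1)]  # e.g. ORDER BY value DESC

def records_to_store(rows, location2):
    return rows[:1] if location2 == 0 else rows

# With a descending sort, Location2=0 yields the maximum value:
print(records_to_store(rows, 0))
```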
Data acquisition strategies
To interpret records obtained by the SELECT statement in a flexible way, different data acquisition strategies are defined. The strategy is selected by the Location3 parameter of the PI tag.
|Location3 |Data acquisition strategy |
|0 |SQL query populates a Single PI Tag |
|> 0 |Selects Tag Group mode |
| | |
| |Location3 points to the column number of a multiple field query |
| |where the selected column contains data for this tag |
|-1 |Selects Tag Distribution mode |
| |The SQL statement must return a key to denote the particular point |
SQL SELECT Command for Retrieving Data for Single PI Tag
Fixed Positions of Columns Used in SELECT Statement
To get values from a relational database into a single PI point, the following field sequence should be kept:
SELECT [Timestamp,] Value, Status FROM...
If provided, the Interface always expects the Timestamp field to be in the first position, followed by the Value and Status columns. The Interface detects a Timestamp field by checking the field data type against SQL_TIMESTAMP. If a database does not support timestamps (e.g. dBase IV), there are ways to convert other field types in the query. See also chapter Database specifics.
Valid combinations of Timestamp, Value and Status in a SELECT statement are:
1. SELECT Timestamp, Value, Status FROM...
2. SELECT Value, Status FROM...
Note: The Interface expects the Status column to be provided in the form of a constant (zero) in the SELECT list when the database does not have any notion of a status code. (SQL allows such a constant, which is then interpreted as the PI status code.)
E.g. SELECT Value, 0 FROM …
Note: The above stated rule applies to all PI data types.
Arbitrary Positions of Columns Used in SELECT Statement - Aliases
If the ODBC driver in use supports aliases, i.e. if the SELECT statement can be used in the form:
SELECT SAMPLE_TIME AS PI_TIMESTAMP, SAMPLE_VALUE AS PI_VALUE …
the following keywords are defined to ‘re-name’ the columns used in the relational database:
PI_TIMESTAMP, PI_TAGNAME, PI_VALUE, PI_STATUS.
The Interface recognizes and assigns the fields appropriately. In this case there is no need to have fixed positions of column names in the SELECT statement.
Note: In debug mode (debug level 1, /deb=1), the Interface writes the alias support information to the log file (whether or not the ODBC driver supports aliases).
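Conceptually, alias support means the interface locates columns by name instead of by position. A rough Python sketch of that lookup (invented helper; the real detection uses ODBC column descriptions):

```python
# Find the timestamp/tagname/value/status columns from the result-set
# column names, once aliases have renamed them to the known keywords.
KEYWORDS = ("PI_TIMESTAMP", "PI_TAGNAME", "PI_VALUE", "PI_STATUS")

def map_columns(column_names):
    """Return {keyword: column index} for the recognized alias names."""
    return {name.upper(): i
            for i, name in enumerate(column_names)
            if name.upper() in KEYWORDS}

# Columns may appear in any order once aliases are used:
print(map_columns(["PI_STATUS", "PI_TIMESTAMP", "PI_VALUE"]))
```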
Example 3.3 – field name keywords for single tag:
|SQL Statement |
|(file PI_STRING2.SQL) |
|SELECT VALIDITY AS PI_STATUS, SCAN_TIME AS PI_TIMESTAMP, DESCRIPTION AS PI_VALUE FROM PI_STRING2 WHERE KEY_VALUE = ?;|
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|P1=”Key_1234” |1 |0 |0 |1 |0 |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_STRING2.SQL |String | | | | |
| | | | | | |
|RDB Table Design |
|SCAN_TIME |DESCRIPTION |VALIDITY |KEY_VALUE |
|Datetime |Varchar(1000) |Smallint |Varchar(50) |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Date/Time |Text(255) |Number-Whole Number (MS |Text(50) |
|(MS Access) |(MS Access) |Access) |(MS Access) |
SQL SELECT Command for Retrieving Data for Tag Groups
The SELECT command can serve as a source of data for multiple PI Tags (a group of tags).
The filename that is stated in the Instrumenttag attribute is considered to be an unambiguous key that forms the group. This means that each member of the group points to the same SQL query file.
The tag that triggers the execution (the ‘master’ tag) should have in its Extended Descriptor corresponding placeholders defined which relate to the question marks in its SQL statement. It is not required that other group member tags contain the placeholder definitions.
The other criterion for building a group is to have Location3 > 0.
Note: Single PI tags can also share the same SQL file, but they are not members of a group if Location3 = 0.
Fixed Positions of Columns Used in SELECT Statement
All the tags in a group should be numbered according to the sequence of field names used in the SELECT command. These numbers are expected in the Location3 parameter of each tag in the group. Furthermore, the ‘master’ tag must have its Location3 parameter set to either 1 or 2, depending on whether the optional timestamp field is present or not.
When the above-described conditions are fulfilled, the result set returned after each execution is sorted into all group points. If a timestamp comes from a relational database, the ‘time’ field in the SELECT statement is expected to be in the first position.
The simplest example is:
SELECT TIMESTAMP, VALUE1, STATUS1, VALUE2, 0, …
The second status value is forced to zero (or possibly any constant applicable to the particular point). This is the same zero-status rule as described in chapter “SQL SELECT Command for Retrieving Data for Single PI Tag”. The Location3 parameters of the group member tags are as follows:
Master Tag and Group members
|Tag |Instrument |Extended |Location2 |Location3 |Comment |
| |tag |Descriptor | | | |
|Master tag |Filename.SQL |P1=… |0 |1 |Master tag gets |
| | | |first row only |if no timestamp |first value, |
| | | | |field used |status |
| | | | | | |
| | | |1 |2 | |
| | | |Bulk read |if first field is | |
| | | | |timestamp | |
|Group |Filename.SQL | |Not evaluated |Field number of |All tags must |
|member(s) | | | |value field |refer to the same |
| | | | | |SQL statement |
Note: PI points that have their SQL statements defined in the Extended Descriptor (not in a file pointed to by Instrumenttag) are expected to retrieve data only for themselves; they do not have the ‘grouping’ feature.
Note: Since the above statement contains a timestamp field, the Location3 sequence is 2,4,6… otherwise it would be 1,3,5…
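The fixed-position distribution of one result row to a tag group can be sketched as follows (invented helper; Location3 values follow the value/status pairing described above):

```python
# Distributing one result row to a tag group (fixed column positions).
# Location3 holds the 1-based field number of each tag's value column;
# the status column follows immediately after the value column.
def distribute(row, group, has_timestamp):
    ts = row[0] if has_timestamp else None
    events = {}
    for tag, location3 in group.items():
        value = row[location3 - 1]     # Location3 is 1-based
        status = row[location3]        # status directly follows the value
        events[tag] = (ts, value, status)
    return events

# SELECT TIMESTAMP, VALUE1, STATUS1, VALUE2, 0  ->  Location3 sequence 2, 4
row = ("08:00", 10.5, 0, 99, 0)
print(distribute(row, {"master": 2, "member": 4}, has_timestamp=True))
```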
Arbitrary Positions of Columns Used in SELECT Statement - Aliases
A similar construction with ALIASES used in SELECT commands is applicable also in this strategy. The column names used in the RDB table can be re-named to the known keywords PI_TIMESTAMP, PI_TAGNAME, PI_VALUE, PI_STATUS.
Because more than one PI_VALUE and PI_STATUS will be present in the SELECT statement, these must be numbered. The numbering scheme is shown in the following example.
Example 3.4 – field alias names:
|SQL Statement |
|(file PI_STR_GR1.SQL) |
|SELECT PI_TIMESTAMP, PI_VALUE1, PI_VALUE3,PI_VALUE4, PI_STATUS1 ,PI_STATUS3,PI_STATUS4 FROM PI_STR_GROUP1 WHERE |
|PI_TIMESTAMP > ?; |
| |
|or |
|SELECT TST AS PI_TIMESTAMP, V1 AS PI_VALUE1, V2 AS PI_VALUE3, V3 AS PI_VALUE4, S1 AS PI_STATUS1 , S2 AS PI_STATUS3, |
|S3 AS PI_STATUS4 FROM PI_STR_GROUP1 WHERE PI_TIMESTAMP > ?; |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|P1=TS |1 |1 |Point1 1 |1 |0 |
| | | |Point2 3 | | |
| | | |Point3 4 | | |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_STR_GR1.SQL |String | | | | |
| | | | | | |
|RDB Table Design |
|PI_TIMESTAMP |PI_VALUEn |PI_STATUSn |
|Datetime |Varchar(50) |Varchar(50) |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Date/Time |Text(50) |Text(50) |
|(MS Access) |(MS Access) |(MS Access) |
Numbers used in column names (PI_VALUE1, PI_STATUS1, …) correspond to the numbers stated in Location3. The main difference from the numbering scheme used in the ‘fixed position strategy’ is that Value and Status share the same number. The ‘master tag’ (the point that actually gets executed) is still recognized by having Location3 = 1.
SQL SELECT Command for Tag Distribution via Tagname Key
Fixed Positions of Columns Used in SELECT Statement
The second way to get data for multiple PI points out of one result set is to have one field configured as an unambiguous key (e.g. the name of a point). The SELECT command should be of the following form:
SELECT [Time], Tagname, Value, Status FROM Table WHERE Time > ?;
P1=LST
A result set will look like this:
[timestamp1,]tagname1, value1, status1
...
[timestampX,]tagnameX, valueX, statusX
...
Note: The corresponding sequence of columns used in the SELECT clause should be kept according to the example above. Brackets denote an optional column.
The query execution is controlled by one PI point that carries the SQL command – called the distributor point. The distributor point and the target points must refer to the same interface instance (Location1), the same scan class (Location4), and the same PointSource; otherwise the Interface will drop the data.
Distributor Point and Target Point attributes
|Tag |Instrument |Extended |Location2 |Location3 |Location4 |
| |tag |Descriptor | | | |
|Distributor tag |Filename.SQL |P1=… |1 |-1 |n |
|Target tag | | |Not evaluated |Not evaluated |n |
|… | | |Not evaluated |Not evaluated |n |
Note: The difference between a master tag for Tag Groups and a distributor tag for Tag Distribution is that the latter is a management tag only (it does not get any data from the query), while the master tag for Tag Groups is both the management tag and the first member of the group.
Note: The name of the distributor point should not be listed in the result set. It receives only the number of rows retrieved from the relational database; this is for administration purposes.
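The Tag Distribution routing can be pictured like this (a sketch with invented names; the real matching also honours the /ALIAS keyword and the PointSource/scan-class checks described above):

```python
# Tag Distribution: route each row to the point whose key matches the
# tagname column; the distributor point only gets the row count.
def distribute_rows(rows, points):
    """rows: (tagname, value, status) tuples; points: known point names."""
    events = {}
    for tagname, value, status in rows:
        if tagname in points:                        # matching target point found
            events.setdefault(tagname, []).append((value, status))
        # rows with no matching point are dropped by the interface
    return events, len(rows)                         # count goes to the distributor

rows = [("Key_1", 1.0, 0), ("Key_2", 2.0, 0), ("Key_x", 9.9, 0)]
events, count = distribute_rows(rows, {"Key_1", "Key_2"})
print(events, count)
```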
/ALIAS ! changed behavior
Since names of variables in the RDB might not be exactly the same as in PI, the optional keyword /ALIAS=rdbms_tagname or /ALIAS=”rdbms tagname” is supported.
This allows mapping of PI points to rows retrieved from the relational database.
Please note that this switch is now case sensitive.
PI2 Tagname matching rules ! changed behavior
PI2 tagnames are always upper case. The Interface performs a case insensitive comparison in order to reduce failure rates (user error); experience with the previous interface version (1.x) showed that this was a source of hidden errors.
When PI2 short names are used, they are internally evaluated in their delimited form, e.g. XX:YYYYYY.ZZ
Spaces are preserved as well, e.g. 'XX:YYYY .ZZ'
PI3 Tagname matching rules ! changed behavior
PI3 tagnames preserve the case. The default tagname comparison is now case insensitive. If a case sensitive test is required, the /ALIAS option can be used to force a case sensitive test.
Note: See Example 1.4
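The matching rules might be summarized in a small sketch (invented helper; real comparison details depend on the PI server version as described above):

```python
# Tagname matching: the default comparison is case insensitive;
# when /ALIAS is given, the comparison is case sensitive.
def names_match(rdb_name, pi_tag, alias=None):
    if alias is not None:
        return rdb_name == alias                    # /ALIAS: case sensitive
    return rdb_name.upper() == pi_tag.upper()       # default: case insensitive

print(names_match("SINUSOID", "sinusoid"))              # matches despite case
print(names_match("Sinusoid", "x", alias="sinusoid"))   # /ALIAS: case must match
```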
Arbitrary Positions of Columns Used in SELECT Statement - Aliases
Using aliases in a SELECT command containing a tagname field is also possible.
The SELECT … AS PI_TIMESTAMP … construction renames the column names, which allows the interface to recognize the meaning of a column by its name.
Note: Do not confuse field name aliases with the /ALIAS keyword.
Example 3.5 – distributor strategy and alias field names:
|SQL Statement |
|(file PI_ALL_TYPES_DISTR1.SQL) |
|SELECT NAME AS PI_TAGNAME, VALUE AS PI_VALUE , STATUS AS PI_STATUS, DATE_TIME AS PI_TIMESTAMP FROM |
|PI_ALL_TYPES_DISTR1 WHERE NAME LIKE ‘Key_%’; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
| |All points | | |All points |All points |
|Distributor – |1 |1 |-1 |1 |0 |
|P1=”Key_1234” | | | | | |
|Target points - | |Target points – |Target points | | |
|/ALIAS=value retrieved | |Not evaluated |– Not evaluated | | |
|in NAME column | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_ALL_TYPES_DISTR1.SQL |Point1 Int32 | | | | |
| |Point2 Digital | | | | |
| |Point3 Int32 | | | | |
| |Point4 Float16 | | | | |
| |Point5 String | | | | |
| | | | | | |
|RDB Table Design |
|DATE_TIME |NAME |VALUE |STATUS |
|Datetime |Char(80) |Real |Real |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Date/Time |Text(80) |Text(255) |Text(12) |
|(MS Access) |(MS Access) |(MS Access) |(MS Access) |
Event based Input
Input points can be triggered on a time-period basis or event based (any time the PI snapshot of a trigger tag changes, an event is generated). To achieve this, the keyword /EVENT=tagname must be in the Extended Descriptor of the particular input tag. The SQL statement (usually a SELECT) is executed each time the value of the ‘event tag’ changes.
The following example shows reading data from a relational database, triggered by sinusoid events.
Example 3.6 – event based input:
|SQL Statement |
|(file PI_STR_EVENT1.SQL) |
|SELECT PI_TIMESTAMP,PI_VALUE,PI_STATUS FROM PI_STRING_EVENT1; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|/EVENT=sinusoid |1 |0 |0 |Not evaluated |0 |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_STR_ |String | | | | |
|EVENT1.SQL | | | | | |
| | | | | | |
|RDB Table Design |
|PI_TIMESTAMP |PI_VALUE |PI_STATUS |
|Datetime |Varchar(1000) |Smallint |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Date/Time |Text(255) |Byte |
|(MS Access) |(MS Access) |(MS Access) |
Note: If no timestamp field is provided in the query, retrieved data will be stored in PI using the event timestamp rather than the query execution time.
A separate document called PlantSuite Lookup Utility is available that shows how this feature can be used to synchronize timestamps for use with the PlantSuite Rlink product.
Multistatement SQL Clause
The interface can execute more than one SQL statement. Semicolons must be used to separate the individual statements.
In the example below, the most recent value of the sinusoid point is kept in the relational database by inserting the snapshot value and deleting the previous record. Output is event based.
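Splitting such a file into individual statements can be sketched as follows (a deliberately naive illustration: it does not handle semicolons inside string literals):

```python
# Split a multistatement SQL file on semicolons; each resulting
# statement would be prepared and executed in turn.
def split_statements(sql_text):
    return [s.strip() for s in sql_text.split(";") if s.strip()]

sql = """INSERT INTO T (TS, VL, SS) VALUES (?,?,?);
DELETE FROM T WHERE TS < ?;"""
for stmt in split_statements(sql):
    print(stmt)
```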
Example 3.7 – multi statement query:
|SQL Statement |
|(file PI_SIN_PER_SCAN1.SQL) |
|INSERT INTO PI_SIN_PER_SCAN1 (PI_TIMESTAMP,PI_VALUE,PI_STATUS) VALUES (?,?,?); |
|DELETE FROM PI_SIN_PER_SCAN1 WHERE PI_TIMESTAMP … |
[remainder of Example 3.7 lost in document conversion; the surviving fragment below belongs to the integer output mapping table]
|PI Value |VL |SS_I |SS_C |
|Value not in error | |0 |O.K. |
|Digital State | |1 | |
Note: More data type conversions are supported for ODBC drivers with Level 2 Extensions. In such cases it is for example possible to write integer values as ASCII representation into a string field.
Data Output for REAL and String Points
Data mapping for real tags is very similar to that for integer tags. Naturally, the preferred data type for VL is Float.
|PI Value |VL |SS_I |SS_C |
| |Field Type Float (or Integer) |Field Type Integer or Float |Field Type String |
|Value not in error |(value) |0 |O.K. |
|Digital State |0 |1 | |
Note: More data type conversions are supported for ODBC drivers with Level 2 Extensions. In such cases it is for example possible to write float values as ASCII representation into a string field.
Global Variables
The Extended Descriptor has an 80-character length limitation (PI-API). One way to allow string parameters longer than 80 characters is to define global variables. A file containing the definitions of all global variables is referenced as an interface start-up parameter. The syntax for global variables is the same as for the placeholders Pn, but starting with the character ‘G’ (see chapter “SQL Placeholders”). The syntax used in the global variable file is shown in the next example:
Example 3.11 – Global variables
(referenced by the start-up keyword /global=d:\pipc\interfaces\rdbmspi_2.0\data\global.dat)
|SQL Statement |
|(file PI_SIN_VALUES_OUT2.SQL) |
|UPDATE PI_SIN_VALUES_OUT2 SET NAME1_TS=?,NAME2_TS=?,DSC3=?,NAME1=?,NAME1_DSC=?, |
|NAME1_ENG_UNITS=?,DSC1=?,NAME1_VL=?,NAME1_SS_C=?, DSC4=?,DSC5=?,DSC6=?,NAME2=?,NAME2_DSC=?, NAME2_ENG_UNITS=?, |
|DSC2=?,NAME2_VL=?,NAME2_SS_C=?; |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|/EXD=…path…\ |1 |0 |0 |2 |0 |
|pi_sin_values_out2.plh | | | | | |
|Content of the above stated | | | | | |
|file: | | | | | |
|P1=G7 P2=G13 P3=G3 P4=G4 P5=G5 | | | | | |
|P6=G6 P7=G1 P8=G8 P9=G9 P10=G1 | | | | | |
|P11=G2 P12=G3 P13=G10 P14=G11 | | | | | |
|P15=G12 P16=G2 P17=G14 P18=G15 | | | | | |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_SIN_VALUES_ |Int16 | | | | |
|OUT2.SQL | | | | | |
| | | | | | |
|RDB Table Design |
|DSCn |NAMEn_TS |NAMEn_VL |NAMEn_SS_C |
|NAMEn | | | |
|NAMEn_DSC | | | |
|NAMEn_ENG_UNITS | | | |
|Char(50) |Datetime |Real |Char(12) |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Text(50) |Date/Time |Number Single Precision (MS |Text(12) |
|(MS Access) |(MS Access) |Access) |(MS Access) |
| |
|Content of the global variables file |
|G1="Actual-Value" G2="of-the" G3="PI-point:" G4='sinusoid'/AT.TAG G5='sinusoid'/AT.DESCRIPTOR |
|G6='sinusoid'/AT.ENGUNITS G7='sinusoid'/TS G8='sinusoid'/VL G9='sinusoid'/SS_C G10='sinusoidu'/AT.TAG |
|G11='sinusoidu'/AT.DESCRIPTOR G12='sinusoidu'/AT.ENGUNITS G13='sinusoidu'/TS G14='sinusoidu'/VL G15='sinusoidu'/SS_C |
Data Mapping between PI and RDBMS
A single PI tag can ‘historize’ either a value or a status, but never both in one tag. Therefore a method is needed to map a given value/status pair onto one piece of information. PI System interfaces mostly apply the rule:
If the status of a value is “good”, store the value in the PI tag.
If the status of a value is other than “good”, store the status in the PI tag instead.
Note: Any requirement that goes beyond that needs more than one tag.
The previous chapter showed that a value field and a status field must always be selected in the SQL query. The following section explains how these two fields provide data for the PI tag.
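This mapping rule is small enough to state as code (an invented one-liner, purely illustrative):

```python
# The basic PI mapping rule: a tag stores the value when the status is
# good, otherwise it stores the (digital) status instead.
def event_for_tag(value, status_is_good, status):
    return value if status_is_good else status

print(event_for_tag(42.5, True, "O.K."))        # good status: the value is stored
print(event_for_tag(42.5, False, "Bad Input"))  # bad status: the status is stored
```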
Mapping of SELECT Data Types to PI Point Types – Data Input
Four types of input fields after a SELECT keyword are validated by this interface:
Timestamp, Tagname, Value and Status fields.
To evaluate those fields, the interface places some constraints on their data types. The following table shows which combinations of PI point types and field data types work. Tags that do not match these criteria are rejected by the interface. This does not mean that such tags cannot be serviced; it only means that an additional explicit conversion might be required.
|Input field |SQL Data Type |PI Point Type |
|Timestamp |SQL_TIMESTAMP |All PI point types |
|Tagname |SQL_CHAR, SQL_VARCHAR, |All PI point types |
| |SQL_LONGVARCHAR | |
| |Real (R) |Integer(I) |Digital(D) |String(S) |
|Value |Approximate (floating points)|Casted to the |Casted to long |Casted to |Converted from |
| |data types |particular |integer |integer and |floating-point |
| |SQL_NUMERIC, SQL_DECIMAL, |floating-point | |interpreted as |to string. |
| |SQL_REAL , SQL_FLOAT, |type. | |pointer to | |
| |SQL_DOUBLE | | |Digital Set | |
| |Exact (integer) data types |Casted to the |Casted to the |Interpreted as |Converted from |
| |SQL_TINYINT, SQL_SMALLINT, |particular |particular |pointer to |integer to |
| |SQL_INTEGER, SQL_BIGINT, |floating-point |integer type |Digital Set |string. |
| |SQL_BIT |type. | | | |
| |Character data types |Converted from |Converted from |Checked against |Retrieved number|
| |SQL_CHAR, SQL_VARCHAR , |string to |string to long |Digital Set. |of bytes copied.|
| |SQL_LONGVARCHAR |double. The |integer and | | |
| | |double number is|casted to | | |
| | |after that |integer PI data | | |
| | |casted to |type. | | |
| | |particular | | | |
| | |floating-point | | | |
| | |PI type. | | | |
|Status |See chapter: Evaluation of STATUS Field for All PI Data Types - Input from RDBMS |
Note: The full conversion of all possible data types supported in ‘relational world’ to PI data types goes beyond the ability of this Interface. To allow additional conversions, use the convert function described below.
Explicit data type conversion is specified in terms of SQL data type definitions.
The ODBC syntax for the explicit data type conversion function does not restrict conversions. The validity of specific conversions of one data type to another data type will be determined by each driver-specific implementation. The driver will, as it translates the ODBC syntax into the native syntax, reject those conversions that, although legal in the ODBC syntax, are not supported by the data source. In this case, the error message will be forwarded to the interface log file and, of course, the tag will be rejected.
The format of the CONVERT function is:
CONVERT(value_exp, data_type)
The function returns the value specified by value_exp converted to the specified data_type, where data_type is one of the valid SQL data types.
Example:
{ fn CONVERT( { fn CURDATE() }, SQL_CHAR) }
converts the output of the CURDATE scalar function to a character string.
Because ODBC does not mandate a data type for return values from scalar functions as the functions are often data source–specific, applications should use the CONVERT scalar function whenever possible to force data type conversion.
Note: More information about the CONVERT function can be gained from the ODBC.HLP file which comes with Microsoft ODBC Data Manager.
Evaluation of STATUS Field for All PI Data Types - Input from RDBMS
In this version of the Interface, the presence of a status field is mandatory. The status field can be provided either in numeric or in string format. A numeric field is only tested against zero. The evaluation of a string field is more complex: to verify the status, two areas must be defined in the digital state table, one for successful states and one for error or bad-value states. The status areas in the digital state table are referenced via the /succ1, /succ2, /bad1, /bad2 interface start-up parameters.
Note: In the following table the term used in the column SQL Data Type of Status Field denotes these SQL data types:
String SQL_CHAR, SQL_VARCHAR, SQL_LONGVARCHAR
Numeric SQL_NUMERIC, SQL_DECIMAL, SQL_REAL , SQL_FLOAT, SQL_DOUBLE, SQL_TINYINT, SQL_SMALLINT, SQL_INTEGER, SQL_BIGINT, SQL_BIT
The Table shows mapping of the status into a PI point:
|SQL Data Type of |Success |Bad |Not Found |Result for Tag |
|Status Field | | | | |
|String |String is between | | |Go and evaluate |
| |/succ1 and /succ2 | | |Value Field |
| | |String is between | | |
| | |/bad1 and /bad2 | |(the one which was |
| | | | |found) |
| | | |String was not |Bad Input |
| | | |found | |
|Numeric | | 0 | |Bad Input |
| |0 | | |Go and evaluate |
| | | | |Value Field |
|String | | |NULL |Go and evaluate |
| | | |(Status Field |Value Field |
| | | |contains NULL) | |
|Numeric | | |NULL |Go and evaluate |
| | | | |Value Field |
Note: Searches in success and bad areas are case insensitive!
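The status evaluation table can be sketched as follows (the state table and area boundaries here are hypothetical stand-ins for the /succ1, /succ2, /bad1, /bad2 start-up parameters):

```python
# Evaluating a status field against two digital-state-table areas.
STATE_TABLE = ["O.K.", "Good", "Bad Input", "I/O Timeout"]
SUCC1, SUCC2, BAD1, BAD2 = 0, 1, 2, 3   # hypothetical area boundaries

def evaluate_status(status):
    if status is None:
        return "EVALUATE_VALUE"           # NULL status: go evaluate the value field
    if isinstance(status, str):
        matches = [i for i, s in enumerate(STATE_TABLE)
                   if s.upper() == status.upper()]      # case insensitive search
        if not matches:
            return "BAD_INPUT"            # string found in neither area
        if SUCC1 <= matches[0] <= SUCC2:
            return "EVALUATE_VALUE"       # success area: value field is used
        return "STORE_STATE"              # bad area: the found state is stored
    return "EVALUATE_VALUE" if status == 0 else "BAD_INPUT"  # numeric: test vs. zero

print(evaluate_status("good"))
print(evaluate_status(5))
```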
Storage of PI POINT Database Changes
The Interface can keep track of changes made in the PI Point Database. The concept is similar to regular output point handling; the difference is that the managing (output) point is not triggered by a change of PI snapshot data but by a point attribute modification. Two methods of recording are available.
Short Form
The first one offers the following attributes to be stored in the relational database:
TagName, AttributeName, ChangeDateTime, Changer, NewValue, OldValue.
The following placeholders are at hand:
AT.TAG, AT.ATTRIBUTE, AT.CHANGEDATE, AT.CHANGER, AT.NEWVALUE, AT.OLDVALUE to form appropriate INSERT statement.
Long Form
The second method allows any combination of point database placeholders (AT.___, see chapter SQL Placeholders). The difference is that the first method inserts a new record into the RDBMS table for each changed attribute, while the second method uses a complex INSERT query that stores all needed information in one record.
Both methods require a ‘managing’ point that carries the SQL statement (INSERT) and the definition of placeholders.
The following examples show how these ‘managing’ points have to be set.
Example 5.1 – Short Form:
|SQL Statement |
|(file PIPT_CHG_RED.SQL) |
|INSERT INTO PI_PT_CHG_REDUCED_FORM ( PI_CHANGEDATE, PI_TAG, PI_ATTRIBUTE, PI_CHANGER, PI_NEWVALUE,PI_OLDVALUE)VALUES |
|(?,?,?,?,?,?); |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|/EXD=…path…\ pipt_chg_red.plh |1 |0 |0 |-1 |0 |
|Content of the above stated file:| | | | | |
|P1=AT.CHANGEDATE | | | |This is relevant| |
|P2=AT.TAG | | | |and says that it| |
|P3=AT.ATTRIBUTE | | | |is going to be | |
|P4=AT.CHANGER | | | |the managing | |
|P5=AT.NEWVALUE | | | |point for method| |
|P6=AT.OLDVALUE | | | |one recording | |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PIPT_CHG_RED.SQL |Int32 | | | | |
| |
|RDB Table Design |
|PI_TAG |PI_ATTRIBUTE |PI_CHANGEDATE |PI_CHANGER |
|Varchar(80) |Varchar(32) |Datetime |Varchar(12) |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Text(80) |Text(32) |Date/Time |Text(12) |
|(MS Access) |(MS Access) |(MS Access) |(MS Access) |
|PI_NEWVALUE |PI_OLDVALUE |
|Varchar(80) |Varchar(80) |
|(MS SQL Server) |(MS SQL Server) |
|Text(80) (MS Access) |Text(80) (MS Access) |
Example 5.2 – Long Form:
|SQL Statement |
|(file PIPT_CHG_FULL.SQL) |
|INSERT INTO PIPT_FULLCHANGE (CREATIONDATE, CHANGEDATE, TAG, DESCRIPTOR, EXDESC, ENGUNITS, TYPICALVALUE, ZERO, |
|SPAN, DIGSTARTCODE, DIGNUMBER, POINTTYPE, POINTSOURCE, LOCATION1, LOCATION2, LOCATION3, LOCATION4, |
|LOCATION5, SQUAREROOT, SCAN, EXCDEV, EXCMIN, EXCMAX, ARCHIVING, COMPRESSING, FILTERCODE, RES, COMPDEV, |
|COMPMIN, COMPMAX, TOTALCODE, CONVERS, CREATOR, CHANGER, RECORDTYPE, POINTNUMBER, POINTID, DISPLAYDIGITS, |
|SOURCETAG, INSTRUMENTTAG) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?); |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|/EXD=…path…\ pipt_chg_full.plh |1 |0 |0 |-2 |0 |
|Content of the above stated file: | | | | | |
| | | | |This is | |
|P1=AT.CREATIONDATE P2=AT.CHANGEDATE| | | |relevant and | |
|P3=AT.TAG | | | |says that it is| |
|P4=AT.DESCRIPTOR | | | |going to be the| |
|P5=AT.EXDESC P6=AT.ENGUNITS | | | |managing point | |
|P7=AT.TYPICALVALUE | | | |for method two | |
|P8=AT.ZERO P9=AT.SPAN | | | |recording | |
|P10=AT.DIGSTARTCODE | | | | | |
|P11=AT.DIGNUMBER | | | | | |
|P12=AT.POINTTYPE | | | | | |
|P13=AT.POINTSOURCE | | | | | |
|P14=AT.LOCATION1 | | | | | |
|P15=AT.LOCATION2 | | | | | |
|P16=AT.LOCATION3 | | | | | |
|P17=AT.LOCATION4 | | | | | |
|P18=AT.LOCATION5 | | | | | |
|P19=AT.SQUAREROOT P20=AT.SCAN | | | | | |
|P21=AT.EXCDEV P22=AT.EXCMIN | | | | | |
|P23=AT.EXCMAX | | | | | |
|P24=AT.ARCHIVING | | | | | |
|P25=AT.COMPRESSING | | | | | |
|P26=AT.FILTERCODE P27=AT.RES | | | | | |
|P28=AT.COMPDEV P29=AT.COMPMAX| | | | | |
|P30=AT.COMPMIN | | | | | |
|P31=AT.TOTALCODE P32=AT.CONVERS| | | | | |
|P33=AT.CREATOR P34=AT.CHANGER| | | | | |
|P35=AT.RECORDTYPE | | | | | |
|P36=AT.POINTNUMBER | | | | | |
|P37=AT.DISPLAYDIGITS | | | | | |
|P38=AT.SOURCETAG | | | | | |
|P39=AT.INSTRUMENTTAG | | | | | |
|Instrumenttag |Pointtype | | | | |
|PIPT_CHG_FULL.SQL |Int32 | | | | |
| |
|RDB Table Design |
|PI_TAG PI_DESCRIPTOR |PI_CREATIONDATE PI_CHANGEDATE|PI_ZERO PI_SPAN |PI_DIGSTARTCODE PI_DIGNUMBER |
|PI_EXDESC PI_POINTTYPE | |PI_TYPICALVALUE PI_EXCDEV |PI_LOCATION1-5 PI_SQUAREROOT |
|PI_POINTSOURCE PI_SOURCETAG | |PI_COMPDEV |PI_SCAN PI_ARCHIVING |
|PI_INSTRUMENTTAG | | |PI_COMPRESSING PI_FILTERCODE |
|PI_ENGUNITS PI_CREATOR | | |PI_RES PI_COMPMIN PI_COMPMAX |
|PI_CHANGER | | |PI_TOTALCODE PI_CONVERS |
| | | |PI_SCAN PI_SQUAREROOT |
| | | |PI_RECORDTYPE PI_POINTNUMBER |
| | | |PI_DISPLAYDIGITS |
|Varchar(80) |Datetime |Real |Varchar(12) |
|(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |(MS SQL Server) |
|Text(80) |Date/Time (MS Access) |Single Precision |Text(12) |
|(MS Access) | |(MS Access) |(MS Access) |
PI Batch Database Output
The Interface can keep track of new PI Batch database events and store this data in the relational database. The ‘managing’ point that carries the SQL statement (INSERT) is recognized by the presence of any of the ‘PI batch database placeholders’ (see chapter SQL Placeholders). The PI Batch database is scanned on a time basis (the ‘managing’ point should be an input point) and newly arrived data is inserted into the relational database. The ‘managing’ point itself receives the number of new batches exported since the last scan.
Example 6.0 – Batch Export:
|SQL Statement |
|(file PI_BATCH1.SQL) |
|INSERT INTO PI_BATCH1 (PI_BATCH_START,PI_BATCH_END, PI_BATCH_UNIT, PI_BATCH_ID, PI_BATCH_PRODUCT_ID) VALUES |
|(?,?,?,?,?); |
| |
|Relevant PI Point Attributes |
|Extended Descriptor |Location1 |Location2 |Location3 |Location4 |Location5 |
|P1=BA.START P2=BA.END |1 |0 |0 |1 |0 |
|P3=BA.UNIT P4=BA.BAID | | | | | |
|P5=BA.PRID | | | | | |
| | | | | | |
|Instrumenttag |Pointtype | | | | |
|PI_BATCH1.SQL |Float32 | | | | |
| | | | | | |
|RDB Table Design |
|PI_BATCH_UNIT PI_BATCH_ID |PI_BATCH_START PI_BATCH_END |
|PI_BATCH_PRODUCT_ID | |
|Varchar(80) |Datetime |
|(MS SQL Server) |(MS SQL Server) |
|Text(80) |Date/Time |
|(MS Access) |(MS Access) |
Database specifics
Although ODBC is a standard, ODBC drivers differ in their implementations, and the databases behind ODBC differ in functionality, supported data types, limits, SQL syntax and so on.
The following section describes some of the differences that are important for this Interface; the list is far from complete.
However, many of the ODBC driver specifics are handled automatically by the Interface itself.
Oracle 7.0
Statement Limitation
We have found a limitation on the number of statements that can be open at the same time. Although it is possible to increase this limit via the keyword OPEN_CURSORS configured in the file INIT.ORA (located on the server side of the ORACLE database), we could not get more than 100 statements to work.
Since this Interface normally uses one SQL statement per tag, no more than 100 tags per Interface process can be serviced; exceptions are grouping and multiple statements per tag.
The only way we found to support more than 100 tags was to use Tag Groups or Tag Distribution.
ODBC drivers used:
Oracle72 1.13.0500
Visigenic 32-bit Oracle driver 2.00.0000
We did not see this problem in higher versions of ORACLE.
Oracle RDB
TOP 10
If you need to limit the number of returned rows, e.g. in order to reduce CPU load, the SQL query can be formulated accordingly. Unfortunately, the syntax for this is database specific.
RDB allows the following statement which returns up to 10 records:
SELECT * FROM test LIMIT TO 10 ROWS;
In the form
SELECT * FROM test LIMIT TO 10 ROWS WHERE timestamp > ?;
and with P1=TS, this query allows smooth history recovery after the Interface has been down for a longer time.
Note: Number 10 is of course only an example value.
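On databases with standard LIMIT syntax, the same ‘bounded batch plus timestamp placeholder’ pattern applies. A minimal sketch using Python's sqlite3 (chosen only because its ? parameter markers match ODBC's; the table and column names are made up for illustration):

```python
import sqlite3

# Hypothetical event table standing in for the RDB side.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (ts INTEGER, value REAL)")
con.executemany("INSERT INTO test VALUES (?, ?)",
                [(t, float(t)) for t in range(100)])

def fetch_since(last_ts, batch=10):
    # Emulates: SELECT * FROM test LIMIT TO 10 ROWS WHERE timestamp > ?
    # The ? is what the Interface fills from the P1=TS placeholder
    # (the snapshot timestamp of the tag).
    cur = con.execute(
        "SELECT ts, value FROM test WHERE ts > ? ORDER BY ts LIMIT ?",
        (last_ts, batch))
    return cur.fetchall()

# History recovery after downtime: call repeatedly, advancing last_ts,
# until fewer than 'batch' rows come back.
rows = fetch_since(42)
```

Each scan then consumes at most ten rows, keeping CPU load bounded, while the timestamp condition guarantees that no rows are read twice.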
Oracle 8.0
TOP 10
Similar to the example for RDB (see above), the statement to select a maximum of 10 records looks as follows:
SELECT * FROM test WHERE ROWNUM &lt;= 10;
dBase III, dBase IV
Timestamps
dBase does not support the data type TIMESTAMP. There is a workaround that at least allows outputting data with timestamps: the target field for the timestamps must be of type TEXT(20). The Interface and the ODBC driver will then automatically convert the placeholder from type SQL_TIMESTAMP to SQL_VARCHAR.
The reverse direction is not that simple. It is not possible to read a timestamp from a TEXT field directly, because the required ODBC function CONVERT does not support converting SQL_VARCHAR to SQL_TIMESTAMP.
However a workaround is possible:
Use the dBase database as a linked table from within MS Access. The MS Access ODBC driver unfortunately also does not support CONVERT from SQL_VARCHAR to SQL_TIMESTAMP, but it provides a function called CDATE.
The input query then looks like the one below and works for timestamps stored in dBase TEXT fields in the format “DD-MMM-YY hh:mm:ss”:
SELECT cdate(CTIMESTAMP), RVALUE, ISTATUS
FROM RECEIVE
WHERE cdate(CTIMESTAMP) > ?;
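The conversion that CDATE performs on those TEXT fields can be mimicked outside the database to check the stored strings. A small sketch in Python (strptime/strftime; the format string is an assumption based on the “DD-MMM-YY hh:mm:ss” layout named above):

```python
from datetime import datetime

TEXT_FORMAT = "%d-%b-%y %H:%M:%S"   # "DD-MMM-YY hh:mm:ss"

def text_to_timestamp(s):
    # What CDATE does on the Access side: TEXT -> real timestamp.
    return datetime.strptime(s, TEXT_FORMAT)

def timestamp_to_text(dt):
    # The output direction: the string written into the TEXT(20) field.
    return dt.strftime(TEXT_FORMAT).upper()

dt = text_to_timestamp("01-JAN-99 12:30:00")
```

A value must survive the round trip text -> timestamp -> text unchanged, otherwise the WHERE cdate(CTIMESTAMP) > ? comparison will misbehave.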
ODBC drivers used:
Microsoft dBase Driver 3.50.360200
Microsoft Access Driver 3.50.360200
Login
dBase works without a username and password. To get access from the Interface, a dummy username and password must be used on the startup line.
/user_odbc=dummy /pass_odbc=dummy
Multi-User Access
The Microsoft dBase ODBC driver seems to lock the dBase table, so no other application can access the table at the same time.
We did not look for workarounds other than the MS Access linked table.
MS Access
Login
MS Access can also be configured (and is by default) to work without a username and password. To get access from the Interface, a dummy username and password must be used on the startup line.
/user_odbc=dummy /pass_odbc=dummy
TOP 10
Similar to the example for RDB (see above), the statement to select a maximum of 10 records looks as follows:
SELECT TOP 10 * FROM test;
MS SQL Server 6.5
TIMESTAMP
SQL Server provides two data types for timestamps: TIMESTAMP and DATETIME. For this Interface, only DATETIME works in combination with placeholders of type SQL_TIMESTAMP.
MS SQL Server 7.0
TOP 10
Similar to the example for RDB (see above), the statement to select a maximum of 10 records looks as follows:
SELECT TOP 10 * FROM test;
CA Ingres II
Software Development Kit
The ODBC driver that comes with the Ingres II Software Development Kit does not work with this Interface. It expects statements to be re-prepared before being executed, yet reports SQL_CB_CLOSE when queried for SQL_CURSOR_COMMIT_BEHAVIOR; the driver is therefore inconsistent with the ODBC specification (a bug?).
Other ODBC drivers for Ingres II may still work.
PI Point Configuration
A PI point corresponds to a single parameter in the interfaced system. For example, a counter, a set point, a process variable, and the high and low control limits would each be a separate point in the PI System database. Each of these points must be defined and configured individually using the PIDIFF or PICONFIG utilities.
For more information regarding point configuration, see the Data Archive (DA) section of the PI System Manuals.
The following point attributes are relevant when configuring tags for the RDBMS to PI Interface.
Point Name
The point name is free, according to the normal PI point naming conventions.
Extended Descriptor
The Extended Descriptor is mainly used to define placeholders (see chapter “SQL Placeholders”).
Possible keywords used in the Extended Descriptor are described in the following table:
|Keyword |Example |Remark |
|/ALIAS |/ALIAS=Level321_in |Used when DISTRIBUTOR strategy takes place.|
| |or |This allows having different point names in|
| |/ALIAS=”Tag123 Alias” (support white spaces) |RDB and in PI. |
|/EXD |/EXD=D:\PIPC\DATA\PLACEHOLDER1.DEF |Allows getting over the 80-character limit |
| | |of the Extended Descriptor. (Suitable for |
| | |tags with more placeholders.) |
|/SQL |/SQL=”SELECT PI_VALUE,PI_STATUS FROM PI_TABLE WHERE |Suitable for shorter SQL statements. Allows|
| |PI_TIMESTAMP >?;” P1=TS |the on-line statement changes |
| | |(sign-up-for-updates). The actual statement|
| | |should be double-quoted and the ending |
| | |semicolon is mandatory. |
|/TRIG |/EVENT=sinusoid |Used for event driven input points. Each |
| |or |time the particular (in our case sinusoid) |
|/EVENT |/EVENT=”ProcessorTime 1” |point changes, the actual point is |
| | |processed. |
Note: Each keyword has to be in uppercase.
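For illustration, the keyword/placeholder syntax shown above can be tokenized with a few lines of code. This is only a sketch of the format as documented (the function name and the regular expression are our own; the Interface's actual parser is internal); double-quoted values keep their embedded spaces:

```python
import re

def parse_exdesc(exdesc):
    # Tokens are KEY=VALUE pairs; KEY is an uppercase /keyword or a
    # placeholder name like P1, and VALUE may be double-quoted.
    pairs = re.findall(r'(/[A-Z_]+|P\d+)=("[^"]*"|\S+)', exdesc)
    return {key: val.strip('"') for key, val in pairs}

parsed = parse_exdesc(
    '/SQL="SELECT PI_VALUE,PI_STATUS FROM PI_TABLE '
    'WHERE PI_TIMESTAMP >?;" P1=TS')
```

Applied to the /SQL example from the table, this yields the bare statement (without quotes, ending in the mandatory semicolon) plus the placeholder definition P1=TS.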
Point Source
All points defined in the PI Point Database for use with this Interface must share a common point source; the Interface writes data retrieved from a relational database only to tags of this point source.
The point source is a single character, for example R (Relational Database). On PI 2.x systems, the point source must be defined in the point source library before point configuration.
Note: See in addition Location1 parameter!
Point Type
The Interface supports the following PI point types:
|Point Type |How It Is Used |
|Digital |Used for points whose value can only be one of several discrete states. These states are |
| |predefined in a particular state set (PI 3.x). |
|Int16 |15-bit unsigned integers (0-32767) |
|Int32 |32-bit signed integers (-2147450880 – 2147483647) |
|Float16 |Scaled floating-point values. The accuracy is one part in 32767 |
|Float32 |Single-precision floating-point values. |
|Float64 |Double-precision floating point values. |
|String |Stores string data of up to 1000 characters. |
|Blob |Binary large object – stores any type of binary data up to 1000 bytes. |
Scan flag
This is usually ON for all points of this Interface. If you set this attribute to OFF, the tag goes OFFLINE (no values are exchanged for it). If the Interface is running when the edit is made, the tag will automatically receive a Shutdown event.
Instrument Tag
This is the filename containing the SQL statement(s).
The file location is defined in a startup parameter using /SQL= directory path.
The SQL file is only evaluated on startup and on tag change events. This avoids picking up a new SQL statement while outdated placeholders are still left in the tag configuration. If a SQL statement needs to be changed during Interface operation, we recommend providing a new SQL file and editing the Instrument Tag field (and, if required, the Extended Descriptor) to point to the new file. The point edit causes the Interface to re-evaluate the tag, including the new SQL statement.
SourceTag
This attribute is used only for output points; the Interface treats a tag as an output tag when this attribute is not empty. The source tag is the PI tag whose values the output point sends to the RDBMS. The Interface tag receives a copy of all data sent to the relational database; in case of an ODBC call failure, it receives the status BAD OUTPUT.
Location 1
This is the number of the Interface process that collects data for this tag. The Interface can run multiple times, distributing CPU load across groups of tags; for example, a set of tags that must be scanned at high frequency can be served by a separate process.
The value entered here must match the /in=… parameter in the startup file.
Note: It is even possible to start multiple Interface processes on different PI Nodes. But then a separate Software License for the Interface is required per PI Node.
Location 2
The second location parameter specifies whether all rows of a result set returned by the ODBC API calls are written to the PI database, or only the first row.
For Tag Groups, the master tag defines this option for all related tags. It is not possible to read only the first record for one tag but all records for another.
Note: With the Tagname Distribution strategy, the Interface tries to write all retrieved rows to the PI database regardless of the Location2 value.
|Location2 |Data acquisition strategy |
|0 |Only the first record is valid (except the Tagname Distribution Strategy) |
|1 |The interface tries to put all the returned data into PI |
Note: It is recommended to provide a timestamp field when trying to send all retrieved rows from RDB to PI.
Location 3
The third location parameter specifies the ‘Distribution Strategy’, i.e. how data retrieved from the relational database is interpreted and written to PI:
|Location3 |Data acquisition strategy |
|0 |SQL query populates a single tag |
|> 0 |Location3 represents the column number of a multiple field query |
|< 0 |SQL statement returns the tagname |
Location 4
Specifies the scan class used.
The startup command procedure has parameters like
/f=00:00:30 or /f=00:00:30,00:00:10
These parameters specify the cycle times for the scan classes; the first /f= relates to scan class 1, and so on.
Set Location4 to zero if event-controlled reading is configured (see Extended Descriptor, keywords /EVENT or /TRIG).
The Interface can also keep track of PI Point Database changes. There are two ways of recording PIPOINT changes in the RDB.
The first (short form) requires a table in the RDB with fields to store PI tag attribute changes. The Interface then records the following information:
TAG_NAME, ATTRIBUTE_NAME, CHANGE_DATE, CHANGER, OLD_VALUE and NEW_VALUE
The ‘managing’ tag that holds the corresponding SQL (INSERT) query must have Location4=-1.
The second approach (long form) of storing PI Point Database changes records the complete tag configuration after each modification of a PI tag. This method can make use of all PI Point Database related placeholders (see chapter SQL Placeholders). The corresponding ‘managing’ tag must have Location4=-2.
|Location4 |Kind of evaluation |
|Positive number |Index to the position of /f= startup parameter keyword (scan class number) |
|0 |Event based output and event based input |
|-1 |Specifies the ‘managing’ tag for recording PIPOINT changes in the ‘short’ |
| |form |
|-2 |Specifies the ‘managing’ tag for recording PIPOINT changes in the ‘full’ |
| |form. |
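As a sketch of the ‘short form’, the target table and the kind of INSERT the managing tag (Location4=-1) would carry might look as follows. The column names follow the six fields listed above, but the table name PIPOINT_CHANGES and the literal values are ours: in the real statement file, only the INSERT with six ? placeholders appears, and the Interface fills them from its tag-change placeholders. Python's sqlite3 is used here only because its ? markers match ODBC's:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE PIPOINT_CHANGES (
    TAG_NAME       VARCHAR(80),
    ATTRIBUTE_NAME VARCHAR(80),
    CHANGE_DATE    TIMESTAMP,
    CHANGER        VARCHAR(80),
    OLD_VALUE      VARCHAR(80),
    NEW_VALUE      VARCHAR(80))""")

# One row per changed attribute, with illustrative values in place of
# the Interface's placeholders.
con.execute("INSERT INTO PIPOINT_CHANGES VALUES (?,?,?,?,?,?)",
            ("sinusoid", "span", "1999-12-01 08:00:00",
             "piadmin", "100", "200"))
row = con.execute(
    "SELECT TAG_NAME, NEW_VALUE FROM PIPOINT_CHANGES").fetchone()
```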
Location 5
Controls the Exception Reporting feature.
|Location5 |Behavior |
|0 |The Interface does the Exception Reporting in the standard way. For points of|
| |type int16, int32, float16, float32, float64 and digital, out of order data |
| |are supported but existing archive values cannot be replaced. |
| | |
| |For PI points of type String and Blob data transfer as follows: |
| |Exception Deviation=0 means sending each value to PI. |
| |Exception Deviation&lt;&gt;0 means sending only changes to PI. |
|1 |For points of type int16, int32, float16, float32, float64 and digital, the |
| |Interface gives up the Exception Reporting. Each retrieved value is sent to |
| |PI (pisn_putsnapshotx()). Existing archive values are replaced (overwritten).|
| |Typical usage for Lab Data. |
| | |
| |For PI points of type String and Blob data transfer as follows: |
| |Sending only changes to PI, independent of Exception Deviation setting. |
Note: PI Snapshot values (current value of a tag) cannot be replaced.
Note: If a string or blob tag has Location5=0, each query result is forwarded to PI. The archive storage then depends on the compression settings.
Exception Reporting
Standard PI usage (if Location5=0), see PI System Manual I.
Not used if Location5=1.
Zero, Span
Standard PI usage, see PI System Manual I.
The Interface does not use the following tag attributes:
1. Square root code
2. Totalization code
3. Conversion factor
Performance Point
You can configure a performance point to monitor the data transfer rate.
The performance point measurement is a number specifying the time (in seconds) required to update the scan list. The scan list is defined by the Interface startup script and the point attributes Location4 and Location1.
To configure a performance point, create a tag with the point source specified for the interface. The point type should be float or integer. The extended descriptor should say
PERFORMANCE_POINT
all in uppercase. Location4 specifies the scan class of interest. Location1 determines the instance of the interface.
IO Rate Tags
An IO Rate Tag can be configured to receive 10 minute averages of the total number of post-exception events that are sent to PI every minute by the interface (events/minute). The IO Rate will appear to be zero for the first 10 minutes that the interface is running. The “post-exception” restriction means that a new value that is received by the interface does not count toward increasing the IO Rate unless the value passes exception (i.e. ExcDev or ExcMax is exceeded).
To implement an IO Rate Tag, perform the following steps:
1. Create an IO Rate Tag. To create an IO Rate tag called “sy:mod001”, for example, create a file called iorates.dif similar to:
|iorates.dif |
|@table pipoint |
|@ptclas classic |
|@stype keyword |
|@mode create,t |
|tag, sy:mod001 |
|Descriptor, "TI505 Rate Tag #1" |
|zero, 0 |
|span, 100 |
|TypicalValue, 50 |
|PointType, float32 |
|PointSource, L |
|Compressing, 0 |
|@ends |
Adjust the Zero, Span, and TypicalValue as appropriate and then type the command:
C:\pi\adm\piconfig < iorates.dif
to create the IO Rate Tag.
2. Create a file called iorates.dat in the PIHOME\dat directory. The PIHOME directory is defined either by the PIPCSHARE entry or the PHOME entry in the pipc.ini file, which is located in the \Winnt directory. If both are specified, the PIPCSHARE entry takes precedence.
Since the PIHOME directory is typically C:\PROGRAM FILES\PIPC, the full name of the iorates.dat file will typically be C:\Program Files\PIPC\dat\iorates.dat.
Add a line in the iorates.dat file of the form:
sy:mod001, x
where x corresponds to the event counter specified by the /ec=x flag in the startup command file of the interface. x can be any number between 1 and 34 or between 51 and 200, inclusive. To specify additional rate counters for additional copies of the interface, create additional tags and additional entries in the iorates.dat file as appropriate. The event counter, /ec=x, should be unique for each copy of the interface.
3. Set the /ec=x flag on the startup command file of the interface to match the event counter in the iorates.dat file.
4. The interface must be stopped and restarted in order for the IO Rate tag to take effect. IORates will not be written to the tag until 10 minutes after the interface is started.
The 10-minute rate averages (in events/minute) can be monitored, for example, on a ProcessBook trend.
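The range rule for the /ec=x event counter stated above can be captured in one line. A trivial helper (our own, not part of the Interface) for checking startup files:

```python
def valid_event_counter(x):
    # /ec=x must lie in 1..34 or 51..200, inclusive, and must be
    # unique for each copy of the interface.
    return 1 <= x <= 34 or 51 <= x <= 200
```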
Interface Files
The startup files for the interface reside in the PI/interfaces/RDBMSPI directory. Listed below are the files required for starting up the interface.
Directory PI\interfaces\RDBMSPI
|RDBMSPI.BAT |Start interface as console application |
|RDBMSPI.EXE |Interface |
|GLOBAL.DAT |Example file for global variables |
| | |
Directory PI\interfaces\RDBMSPI\EXAMPLES\…
|Filename.SQL |Example SQL Query files |
|Filename.dif |Example piconfig file |
|Filename.xls |Example PI-SMT file |
|Filename.mdb |Example Database |
|Filename.qrt |SQL query to create example table |
|Readme_dbname.doc |Example Description |
| | |
| | |
Example of a start-up script:
rdbmspi.exe /in=2 /ps=x /ec=21 /f=00:01:10,00:00:10 /f=00:10:20,00:01:00 /SQL=d:\pi\interfaces\rdbmspi\SQL /global=d:\pi\interfaces\rdbmspi\data\global.dat /host=localhost:5450 /output=d:\pi\interfaces\rdbmspi\logs\rdbmspi.out /dsn=MS_ACCESS /succ1=200 /succ2=211 /bad1=212 /bad2=220 /deb=1 /USER_PI=pi_user /USER_ODBC=odbc_user /tf=x_tf1 /tf=x_tf2 /stopstat=shutdown
Time Zone and Daylight Savings
Until version 2.08, the Interface did not explicitly support the RDBMS, the PI-API node and the PI Server being in different time zones and/or having different DST settings. Only those parts of the Interface that used Extended PI-API functions (which handle this automatically) converted timestamps correctly; other parts did no timestamp conversion at all.
The Interface has now been updated to use Extended PI-API functions wherever available and to perform the conversion itself where no equivalent Extended PI-API function exists. As a side effect, the Interface no longer runs with PI-API versions before 1.3.x!
ODBC, however, has no standard way of reporting the time zone and/or DST setting of the connected RDBMS, so no timestamp conversion can be applied there. The only safe method is to use the same settings on the Interface node as on the RDBMS system.
For various reasons, many RDBMS systems run with DST switched off. A valid scenario is then to switch DST off on the Interface node as well, while the PI Server and other PI clients keep their own independent settings. The PI-API node running the Interface takes care of the timestamp conversion, which means that the PI Archive finally receives UTC timestamps.
Note: Parameters and fields affected by Interface-specific timestamp transformation include AT.CHANGEDATE, AT.CREATIONDATE, BA.START and BA.END.
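The recommended setup can be pictured with timestamps carrying an explicit offset: the RDBMS and the Interface node share one fixed offset (DST off), and the conversion to UTC happens before the value reaches the archive. A Python sketch with an assumed UTC-8 node zone (the zone and the function are illustrative, not part of the Interface):

```python
from datetime import datetime, timedelta, timezone

# Assumed zone of both the RDBMS and the interface node (DST off).
NODE_ZONE = timezone(timedelta(hours=-8))

def to_archive_utc(local_naive):
    # ODBC delivers naive local timestamps; attach the node's zone and
    # convert to UTC, as the PI-API node does before archiving.
    return local_naive.replace(tzinfo=NODE_ZONE).astimezone(timezone.utc)

utc_stamp = to_archive_utc(datetime(2000, 5, 1, 9, 30))
```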
Installation
Before installing the interface, PI-API 1.3.x.x must be installed (if not already present due to other PI Client Software).
To install the interface, you must define a point source code and modify the PI startup and shutdown command files on that node. You can optionally specify one performance monitoring tag as shown in section "Performance Point".
In the PI2 System Point Source Editor, set up the following values:
Point Source Code: R
Point Source Descriptor: RDBMS via ODBC
|Location Minimum |1 |0 |-100 |0 |0 |
|Location Maximum |10 |1 |100 |10 |0 |
For PI3 Systems no Location Parameter Limits need to be specified.
1. The standard PI API installation also creates an environment for PI Client login management. Please verify that a file PILOGIN.INI exists and modify it according to your needs. See section PILOGIN.INI in chapter “Startup” for more details.
2. Insert the installation disk
3. For Intel based Windows NT: Run the file PSrdbms_2.14i.exe.
4. For ALPHA based Windows NT: Run the file PSrdbms_2.14a.exe.
5. The Interface Setup files will be extracted to a temporary directory, e.g. c:\temp\disk1
6. Run the setup.exe file from the temporary directory.
Example:
c:> a:PSrdbms_2.14i.exe
c:\temp\disk1\setup.exe
7. Configure the Startup file RDBMSPI.BAT to your needs and follow instructions in the following sections.
Updating the Interface from a previous version
Before updating the RDBMS to PI Interface, make a backup of all Interface files in the PIPC\interfaces\RDBMSPI directory.
For example:
c:&gt; md \PIPC\interfaces\RDBMSPI\RDBMSPI_old
c:&gt; copy \PIPC\interfaces\RDBMSPI\*.* \PIPC\interfaces\RDBMSPI\RDBMSPI_old\*.*
If the Interface was installed as a service, also remove the service using rdbmspi -remove.
8. If not already installed, update your PI-API with version 1.3.x.
The new interface version does not run with previous PI-API versions.
Now insert the RDBMS to PI Interface diskette in the floppy disk drive:
9. For Intel based Windows NT: Run the file PSrdbms_2.14i.exe.
10. For ALPHA based Windows NT: Run the file PSrdbms_2.14a.exe.
11. The Interface Setup files will be extracted to a temporary directory, e.g. c:\temp\disk1
12. Run the setup.exe file from the temporary directory.
Example:
c:> a:PSrdbms_2.14i.exe
c:\temp\disk1\setup.exe
13. Perform all configuration steps (see following sections) and use your existing configuration files from the backup.
Startup
Command line switches for the RDBMS to PI interface
|Parameter |Meaning |
|/ps=R |Point Source |
|/in=1 |Interface number (Location1) |
|/ec=21 |Event counter. |
|/f=00:00:01 |Scan Class frequency, optional use of multiple /f=… |
|or | |
|/f=00:01:00,00:00:05 | |
|/tf=tagname |I/O rate tag per scan |
|/stopstat |If the /stopstat flag is present on the startup command line, then |
|or |the digital state I/O Timeout will be written to each PI Point when |
|/stopstat=digstate |the interface is stopped. |
|Optional |If /stopstat=digstate is present on the command line, then the |
|default: see right |digital state digstate will be written to each PI Point when the |
| |interface is stopped. |
|/host= localhost:5450 |PI Home Node |
|/global=c:\…\global.dat |Name and location of the global variable file |
|/sql=c:\…\dat |Location of the SQL statement files |
|/output=c:\…\rdbmspi.log |Interface specific log file name and location |
|/dsn=dsn_name |Data Source Name |
|/user_pi=username(PI) |Account for PI access (case sensitive evaluation) |
|/pass_pi=password(PI) |Password for PI Server access (case sensitive evaluation) |
|/user_odbc=username(odbc) |Username for RDBMS access (case sensitive evaluation) |
|/pass_odbc=password(odbc) |Password for RDBMS access (case sensitive evaluation) |
|/test=tagname |Test mode for tag=tagname |
|/NO_INPUT_ERROR |Suppress writing “I/O Timeout” and “Bad Input” to all tags. |
|/deb=0 |Debug level |
|/succ1=100 |Begin of range in (system) digital state table which contains |
| |“successful” status strings |
|/succ2=120 |End of range in (system) digital state table which contains |
| |“successful” status strings |
|/bad1=121 |Begin of range in (system) digital state table which contains “Bad |
| |Input” status strings |
|/bad2=130 |End of range in (system) digital state table which contains “Bad |
| |Input” status strings |
|/recovery=shutdown |Recovery flag. Possibilities are SHUTDOWN and TS |
|/recovery_time=”*-8 hours” |In conjunction with the recovery flag determines the maximum amount |
| |of time for going back into the archive. The ‘time’ syntax is in PI |
| |Time Format. |
| |(See the Data Archive Manual for more information on the time string|
| |format.) |
|/sr=20 |Sign-Up-For-Updates rate in seconds. |
|/skip_time=10 |Maximum delay time in seconds for scan class. Default value=2 |
Detailed explanation for command line parameters
/ps = R
Specifies the point source of the tags the Interface will operate on.
/in = 1
The Interface number is specified here. It corresponds to the Location 1 of a tag.
/ec = 21
For additional configuration information regarding IORates, see the section entitled “IO Rate Tags.”
/f = HH:MM:SS,hh:mm:ss
The scan frequency for the different scan classes. There is no set limit on the number of scan classes.
Example: You can specify 3 scan frequencies by inserting:
. . . /f=00:00:03 /f=00:00:05 /f=00:00:30
which defines 3 scan classes: 1, 2 and 3. A tag with the value 1 in Location4 belongs to scan class 1 and is scanned every 3 seconds.
Each instance of the /f flag on the command line defines a scan class for the interface. Each scan class is, in turn, associated with a scanning frequency.
The time between scans for a class is given in terms of hours (HH), minutes (MM), and seconds (SS). The scans can be scheduled to occur at a particular offset with respect to the hour in terms of hours (hh), minutes (mm), and seconds (ss). The first occurrence of the /f flag on the command line defines the first scan class of the interface, the second occurrence defines the second scan class, and so on. All PI Points that have location4 set to 1 will receive values at the frequency defined by the first scan class. Similarly, all points that have location4 set to 2 will receive values at the frequency specified by the second scan class, and so on.
Two scan classes are defined in the following example:
/f=00:01:00,00:00:05 /f=00:00:07
The first scan class has a scanning frequency of 1 minute with an offset of 5 seconds with respect to the minute. This means that if the interface is started at 12:03:06, the first scan will be at 12:04:05, the second scan at 12:05:05, and so on. The second scan class has a scanning frequency of 7 seconds. Since no offset is given, the absolute times at which these scans occur are not defined.
The definition of a scan class does not guarantee that the associated points will be scanned at the given frequency. If the interface is under a large load, some scans may occur late or be skipped entirely.
If a scan is skipped, the interface will print out an error message.
Note: To specify an offset of 0, use the frequency as the offset instead.
E.g. /f=01:00:00,01:00:00 defines a scan at every full hour, while
/f=01:00:00,00:00:00 is the same as /f=01:00:00 and starts scanning without an offset.
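The alignment rule behind these examples is that scheduled scan times satisfy time mod period = offset. A small sketch of the scheduling (seconds since midnight for simplicity; the helper is our own, not the Interface's scheduler):

```python
def next_scan(now, period, offset=None):
    # /f=period         -> free-running from startup (no alignment)
    # /f=period,offset  -> scans aligned so that t % period == offset
    if offset is None:
        return now + period
    o = offset % period              # offset == period means offset 0
    rem = (now - o) % period
    return now if rem == 0 else now + (period - rem)

start = 12 * 3600 + 3 * 60 + 6       # interface started at 12:03:06
t1 = next_scan(start, 60, 5)         # /f=00:01:00,00:00:05
t2 = next_scan(start, 3600, 3600)    # /f=01:00:00,01:00:00 -> full hours
```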
/deb=0
The Interface can print additional information into the Interface-specific logfile, depending on the debug level used. The amount of log information increases with the debug level as follows:
|Debug level |Output |
|0 |Normal interface operation, Print out error messages causing data loss. |
| |Examples: connection loss, tag rejected due to wrong configuration,… |
|1 |Additional information about interface operation |
| |Examples: tag information, startup parameter defaults, ODBC related info |
|2 |Information about errors which will be handled by the interface and will not cause data |
| |loss but points out problems |
| |examples: protocol errors which were recovered by retry function |
|3 |Displays original data and data types (raw values received by ODBC fetch calls) per tag per|
| |scan; this helps to trace data type conversion |
|4 |Prints the actual values before sending to PI (values used in pisn_putsnapshotx and |
| |pisn_sendexceptions functions) per tag per scan |
|5 |Prints each subroutine the program runs through |
| |(only for onsite test purposes) |
/dsn=DSNname
Data source name, created via the ODBC Administrator utility found in the Windows NT Control Panel. Only Machine data sources are supported, preferably System data sources. When the Interface is installed as a service, only System data sources will work!
For more information on how to setup a DSN, please see your 32bit ODBC Administrator Help file (Control Panel) or the documentation of your ODBC driver.
/global = c:\pi\sql\globals.dat
Points to the filename that contains the definition of global variables. (See chapter “Global Variables”)
/host = computer_name
This is the hostname of the computer running the PI Server.
/NO_INPUT_ERROR
A valid and efficient scenario uses the PI timestamp to limit the amount of retrieved data and to avoid querying for data already read.
Example: SELECT time,value,0 FROM test WHERE time&gt;?; P1=TS
The ? is updated at run time with the latest timestamp already read. If the interface now runs into a communication problem, it writes "I/O Timeout" to all tags, and the latest snapshot timestamp becomes that of the "I/O Timeout" event. The next query will then miss all values between the last real data timestamp and the "I/O Timeout" timestamp.
Solution:
We introduced a new command-line switch, /NO_INPUT_ERROR, which suppresses writing I/O Timeout and Bad Input to all input tags.
/output = c:\pi\sql\rdbmspi.log
The Interface generates output messages into the given log-file.
To avoid overwriting a previous log file after an Interface restart, the Interface renames the previous log file to log-file.log;n, where n is a running number.
The system manager is responsible for purging old log files.
/user_pi
User name for the PI connection. PI interfaces usually log in as piadmin; this switch allows logging in as a different PI user.
/pass_pi
The password for piadmin account (default) or for the account set by /user_pi parameter.
Alternatively, when the Interface runs in console mode, you can wait for the logon prompt and enter the password there. This avoids storing the password in a startup BAT file, readable by every user on the machine. The password has to be entered only once; on all future startups the Interface reads it from an encrypted file. This encrypted file has the name of the output file (defined by the /output= startup parameter) with the extension PI_PWD, and is stored in the same directory.
Example: …/in=2… /output=d:\pi\interfaces\rdbmspi\data\rdbmspi.log …
The encrypted password is stored in: d:\pi\interfaces\rdbmspi\data\rdbmspi.PI_PWD
When running the Interface as a service, it must first be run at least once in interactive mode, so the password can be entered at the prompt and the Interface can create the encrypted file. The file can be deleted at any time; the Interface will then prompt for a new password at the next interactive startup.
Note: The Interface will fail when started as a service without a valid password file (or without /pass_pi=password).
Note: To establish a connection with the PI Server, the file PILOGIN.INI must contain a reference to that PI Server. The easiest way to set this up is via the ProcessBook connection dialog. If ProcessBook is not installed, see section PILOGIN.INI at the end of this chapter.
/user_odbc
User name for the ODBC connection. This parameter is required. Databases like MS Access or dBase may not have usernames set up; in this case a dummy username must be used, e.g. /user_odbc=dummy.
/pass_odbc
Password for the ODBC connection. If this parameter is omitted, the standard ODBC connect dialog prompts for the user name and password. This avoids storing the password in a startup BAT file (readable by every user on the machine). The password has to be entered only once; on all future startups the Interface reads it from an encrypted file. This encrypted file has the name of the output file (defined by the /output= startup parameter) with the extension ODBC_PWD, and is stored in the same directory.
Example: …/in=2… /output=d:\pi\interfaces\rdbmspi\data\rdbmspi.log …
Encrypted password is stored in: d:\pi\interfaces\rdbmspi\data\rdbmspi.ODBC_PWD
When running the Interface as a service, it must first be run at least once in interactive mode, so the password can be entered at the prompt and the Interface can create the encrypted file. The file can be deleted at any time; the Interface will then prompt for a new password at the next interactive startup.
Note: The Interface will fail when started as a service without a valid password file (or without /pass_odbc=password).
Databases like MS Access or dBase may not have security set up; in this case a dummy username and password must be used, e.g. /pass_odbc=dummy.
/recovery
This startup flag determines how output points are handled after Shutdown or I/O Timeout digital states. After startup, the Interface checks for these two digital states and goes back into the archive for events of the source tag. The SQL statement is then executed for each event retrieved from the PI archive.
/recovery_time
Used in conjunction with the /recovery flag and sets the maximum time to go back into the archive.
The following table describes the behavior:
|/recovery= |Behavior |
|SHUTDOWN |If a Shutdown or I/O Timeout digital state is encountered, the Interface goes back into the PI archive, starting either at /recovery_time (when the Shutdown or I/O Timeout timestamp is older than /recovery_time) or at the time of the last event (Shutdown or I/O Timeout). Note: If no Shutdown or I/O Timeout event is encountered, no recovery takes place. |
|TS |Starts the recovery from /recovery_time, or from the last snapshot of the output point if this is later. |
|NO_REC |Default value. No recovery takes place. The /recovery_time keyword is ignored. |
Note: Remember, the output point contains a copy of all data successfully downloaded from the source point. The current snapshot of the output point therefore marks the last downloaded value. See also the section “Limitations and future enhancements”.
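The recovery behavior above can be sketched as a start-time selection (a minimal illustration only; the function and parameter names are invented, not interface internals):

```python
def recovery_start(mode, recovery_time, event_time, snapshot_time):
    """Pick the archive time at which output-point recovery begins.

    mode          -- the /recovery= value: "SHUTDOWN", "TS" or "NO_REC"
    recovery_time -- earliest time to go back to (/recovery_time=)
    event_time    -- timestamp of the Shutdown / I/O Timeout event, or None
    snapshot_time -- current snapshot time of the output point
    Returns the recovery start time, or None if no recovery takes place.
    """
    if mode == "SHUTDOWN":
        if event_time is None:            # no Shutdown/I/O Timeout found
            return None                   # -> no recovery
        return max(recovery_time, event_time)
    if mode == "TS":
        # /recovery_time, or the last snapshot if that is later
        return max(recovery_time, snapshot_time)
    return None                           # NO_REC (default): no recovery
```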
/sql = c:\pi\sql
Points to the destination where ASCII files with SQL statements reside.
/sr
Sets the Sign-up-for-Updates scan period in seconds. At this interval the Interface checks whether changes were made in the PI point database. The default value is 35 seconds.
/stopstat
If the /stopstat flag is present on the startup command line, then the digital state I/O Timeout will be written to each PI Point when the interface is stopped.
If /stopstat=digstate is present on the command line, then the digital state digstate will be written to each PI Point when the interface is stopped. digstate can be any digital state that is defined in the PI 2 digital state table or the PI 3 system digital state table, depending upon whether the home node is a PI 2 or PI 3 system.
If neither /stopstat nor /stopstat=digstate is specified on the command line, then the default is not to write digital states to each input point when the interface is stopped.
Note: digstate can contain spaces but then the string must be embedded in double quotes.
Example:
/stopstat="I/O Timeout"
The /stopstat=shutdown option is the recommended flag.
/skip_time
Timeout in seconds for a scan class. If the actual execution of a scan class runs more than skip_time seconds past its scheduled time, that scan is skipped. The default value is two seconds.
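The skip decision can be sketched as follows (a minimal illustration with invented names; times are in seconds):

```python
def skip_scan(scheduled_time, actual_start, skip_time=2.0):
    """Return True when a scan starts more than skip_time seconds past
    its scheduled time and should therefore be skipped (default: 2 s)."""
    return (actual_start - scheduled_time) > skip_time
```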
/succ1
Defines the start of the success area, a range of strings representing a positive status.
/succ2
Defines the end of the success area.
/bad1
Defines the start of the bad-value area, a range of strings representing a negative status.
/bad2
Defines the end of the bad-value area.
/test = "tagname"
The /test mode allows testing only one query at a time. For example, /test=tag1 means: connect, run the configuration for tag1, print the results and then exit.
No data is stored in either the RDB or PI.
/tf=tagname
Each scan class can get its own I/O rate tag; the order of the /tf keywords in the startup line assigns each tag name to the corresponding scan class.
After each scan of a given scan class finishes, the number of successfully executed queries is stored in the related I/O rate tag.
Example: You can specify 2 scan frequencies by inserting:
. . . /f=00:00:03 /f=00:00:05 /tf=tagname1 /tf=tagname2
Scan class 1 will have I/O rate tag tagname1 and scan class 2 will have tagname2.
Note: The current version only supports tag names without spaces on the startup line.
Startup as console application
To start the interface, run the rdbmspi.bat batch file from the Interface directory.
To stop the interface type Ctrl^C or Ctrl^Break in the Interface’s standard output window.
To automate startup, you may add rdbmspi.bat to c:\pi\adm\pisitestart.bat.
Startup as Windows NT Service
NT Service Installation
The RDBMSPI Interface is installed as an NT service by running the command rdbmspi -install from the Interface directory. To remove the service from the Windows NT Registry, type
rdbmspi -remove.
Note: The service is created to start in manual mode without any dependency!
NT Service Start and Stop
The Interface’s directory should contain two files with the same name but different file extensions: rdbmspi.exe and rdbmspi.bat. (The batch file contains all the startup parameters used when the executable starts.) Once the service is created (via the rdbmspi -install command) it can be started, stopped, paused and resumed from the Services dialog box in the Control Panel, or directly by typing rdbmspi -start or rdbmspi -stop.
To determine if the Interface is running as a service, look at the Control Panel - Services dialog box. The status says “Started” if the Interface is running; the status is blank if it is not running. Alternatively you can list all of the NT services that are currently running by typing net start at the command prompt and see if rdbmspi is listed there.
Note: Services are automatically stopped on system shutdown but are not stopped when a user logs off from an NT console. To manually stop all running PI services, run the pisrvstop.bat command file from the pi\adm directory. This command file calls pisrvsitestop.bat, which stops the interfaces and other site-specific programs that have been configured to run as NT services.
Any site-specific services that are expected to be associated with PI should have startup and shutdown commands added to the pisrvsitestart.bat and the pisrvsitestop.bat command files.
By default the PI services are set in manual mode. This means that the services must be told to start and stop using the pisrvstart.bat and pisrvstop.bat commands. To allow the PI System to start automatically on reboot, services can be set to automatic from the Services dialog box in the Control Panel.
Optional Switches
rdbmspi –install –depend “tcpip pinetmgr” -auto
-depend
Starts the interface after activation of specified services. Possibilities are:
-depend “tcpip pinetmgr”
-depend “bufserv tcpip”
-depend tcpip
-auto
Automatically starts the service after a reboot of the operating system.
PILOGIN.INI
The PILOGIN.INI file contains configuration and preference settings for the PI Server connections with PI-Client Software (e.g. ProcessBook or Interface). The file generally resides in the PIPC\DAT directory. ProcessBook SETUP.EXE creates this file with default settings. As you use PI-ProcessBook and the Connections feature, this file is modified.
Without ProcessBook installed, the file must be modified manually using any ASCII text editor.
The settings used in the examples are samples and not necessarily the default values.
The Services section of the PILOGIN.INI identifies the server type:
PI1=PI
The Defaults section specifies the default server and user ID:
PIServer=tomato
PI1USER=DLeod
The PINodeIdentifiers section of PILogin.ini maps the PI Server names to codes which are stored in the ProcessBook files. ProcessBook uses these codes instead of the actual node names in order to save space and facilitate changing server names. You usually make changes to this section through the Connections command under the PI-ProcessBook File menu. Here is an example of this section:
[PINodeIdentifiers]
;PI#=Servername, NodeID, Port#
PI1=casaba,11111,545
PI2=orange,85776,545
PI3=localhost,62085,5450
PI4=olive,2153,5450
PI5=206.79.198.232,41369,5450
The first parameter after the equal sign is the PI Server name. This is usually a TCP/IP node name, but it can also be a TCP/IP address as shown on the last line.
The second parameter is the node identifier, which is stored with each tag name used in a ProcessBook file. This parameter is relevant for ProcessBook but not for the RDBMSPI Interface.
The third parameter is the TCP port number. Port 545 is used for PI Servers on OpenVMS. Port 5450 is used for PI Servers on Windows NT and UNIX.
Example (minimum) PILOGIN.INI File:
[Services]
PI1=PI
[PINODEIDENTIFIERS]
PI1=alpha22,48872,5450
[DEFAULTS]
PIServer=alpha1
PI1USER=piadmin
Shutdown
You can manually stop the interface by pressing Ctrl^C or Ctrl^Break when it runs in interactive mode. When the interface runs as a service, you can stop it via the Control Panel or by entering the command: rdbmspi -stop
On a Windows NT PI3 Home Node, include the interface stop procedure in pisrvsitestop.bat. It should look like this:
echo Stopping Site Specific PI System Services...
..\interfaces\rmp_sk\rmp_sk –stop
..\interfaces\random\random –stop
..\interfaces\enraf\rdbmspi –stop
:theend
Error and information messages
You can find messages from the interface in two places:
1. The standard PI message log PIPC.LOG (located in the \PIPC\DAT directory) receives general information and error messages, e.g. the point source used, the number of points handled, etc.
2. The second file is interface-specific (/output=filename) and contains all important information printed by the interface. The amount of information depends on the debug level used (/deb=1-5).
Note: Errors related to tag values are also reported by giving the tag a BAD INPUT state. This happens if the status of a value read from the RDB is BAD. Points can also get the status I/O Timeout if the Interface detects connection problems.
Hints for PI System Manager
8. When using the option to query a complete time series for a tag, the query must ensure that the value/timestamp pairs arrive ordered by timestamp.
Otherwise the interface cannot perform exception reporting and the PI Server cannot do compression.
• Reconnect attempts have been made more general. Experience showed that only a few ODBC drivers report detailed error codes for networking problems; RDBMSPI Version 1.28 required such codes to reconnect (08xxx for network problems and xxTxx for timeouts). As a result the interface often reported an error (typically S1000) but did not reconnect, because S1000 is a general error.
Now, on any serious error, the interface tests the connection to the RDB and reconnects if necessary. This new behavior was tested with Oracle and SQL Server.
A common problem was that the RDBMS was shut down periodically for backups. Since the interface then reports a connection problem (“I/O Timeout” is written to all interface tags), queries that reference previous timestamps only queried back to the shutdown event, and data were missing as a result. In such a situation the startup flag /NO_INPUT_ERROR can help.
9. If the field size is smaller than required for the current value, the interface prints an error message into the log file but continues with the next event, using the value valid at that time.
E.g. if the length of a character field is 2 and the interface tries to store the values “ON” and “OFF”, “ON” will work but “OFF” will generate an error.
10. If the query contains a string constant in the select list, the ODBC driver transforms this string to capital letters.
E.g. for SELECT timestamp,0,’No Sample’ WHERE …
the string arrives as “NO SAMPLE” in the PI part of the interface. Searches in the bad and good areas are therefore case insensitive to address this problem.
11. Error messages in the log file are only displayed on first occurrence. To avoid log files filled with many identical messages, the error is reported again only when it is resolved.
12. The minimum field size for digital state output is 12 characters. Some ODBC drivers also require one additional character for the string termination byte (NULL). In this case we need a minimum field size of 13 characters.
13. SELECT statements using LST or LET may not return any data if the clocks of the PI System computer and the RDBMS system are not synchronized. That is because LST and LET are filled in by the Interface but compared against RDBMS timestamps.
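The ordering requirement in hint 8 is usually met with an ORDER BY clause in the query itself; where that is not possible, a client-side sort illustrates the idea (a minimal sketch with hypothetical row data):

```python
# (timestamp, value) pairs as they might arrive, unordered, from the RDB
rows = [
    ("2000-05-16 10:00:05", 3.1),
    ("2000-05-16 10:00:01", 2.7),
    ("2000-05-16 10:00:03", 2.9),
]
# Exception reporting and compression need events in ascending time order;
# ISO-style timestamp strings sort correctly as plain text.
rows.sort(key=lambda r: r[0])
```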
Interface Test Environment
The Interface Version 1.28 was tested using the following software versions:
|Operating System |Windows NT 4.0 Workstation and Server, SP1 and SP3 |
|C-Compiler |MS Visual C/C++ 5.0 |
|PI |PI 3.1 on NT (Intel), Build 2.71 and 2.81 |
| |PI-ODBC-PC 1.1.0.0 (12/27/96) |
| |UNIINT 2.23, 2.25, 2.31 |
|RDBMS |ODBC driver |
|Oracle Rdb Database 6.1 (Alpha) |Oracle ODBC Driver for Rdb 2.10.1100 |
|MS SQL Server 6.5 |MS SQL Server ODBC Driver 2.65.0240 |
|Oracle 7 |MS Oracle ODBC Driver 2.00.00.6325 |
| |Oracle72 1.13.05.00 |
| |Visigenic 32-bit Oracle driver 2.00.00.00 |
|dBase III, dBase IV |Microsoft dBase Driver 3.50.360200 |
|MS Access 95, MS Access 97 |Microsoft Access Driver 3.50.360200 |
The Interface Version 2.08 was tested using the following software versions:
|Intel platform only |
|Operating System |Windows NT 4.0 Workstation SP4 |
|C-Compiler |MS Visual C/C++ 6.0 SP2 |
|PI |3.2 - SR1 Build 357.8 |
| |PI-API 1.2.3.4 and PI-API 1.3.0.0 |
|RDBMS |ODBC driver |
|MS SQL 6.50.201 |3.60.03.19 |
|(ROBUSTNESS tests only) | |
|MS SQL 7.00.623 |3.70.06.23 |
|ORACLE 7.1 |1.13.05.00 (Oracle72) |
| |2.00.00.00 |
| |(Visigenic 32-bit Oracle driver) |
| |2.00.00.6325 |
| |(Microsoft ODBC Driver for Oracle) |
|ORACLE 8.0 8.0.5.0.0 |8.00.06.00 |
The Interface Version 2.14 was tested using the following software versions:
|Intel platform only |
|Operating System |Windows NT 4.0 Workstation SP5 |
|C-Compiler |MS Visual C/C++ 6.0 SP2 |
|PI |3.2 - SR1 Build 357.8 |
| |PI-API 1.3.1.3 |
|RDBMS |ODBC driver |
|MS SQL 7.0 (07.00.0623) |3.70.06.90 |
|ORACLE 8 (8.0.5.0.0) |8.00.05.00 (Oracle) |
|DB2 (05.02.0000) |05.02.0000 |
More Examples
Insert or Update
A common request is to keep a copy of current snapshot values in a relational table. The problem is that the query must decide between INSERT and UPDATE: if the PI tag already has a record in the table, UPDATE is required; if there is no record for this tag yet, INSERT has to be used. Unfortunately the SQL language does not provide an if…then mechanism. The usual solution is to write a stored procedure to get this flexibility, and the interface does support stored procedures.
But there is also a SQL solution to this problem:
Use a second table with the same fields as the target table. This “dummy” table has to have exactly one record containing dummy data. To do an INSERT or UPDATE, use a RIGHT JOIN between the two tables as in the example below:
Tbl1 has 2 fields, V1 and P1
Tbl2 has 2 fields, V1 and P1
Tbl2 has exactly one record with data.
Query INSERT_UPDATE:
UPDATE Tbl1 RIGHT JOIN Tbl2 ON Tbl1.P1 = Tbl2.P1 SET Tbl1.P1 = Tbl2.P1, Tbl1.V1 = Tbl2.V1;
How does it work?
The trick lies in the RIGHT JOIN itself. RIGHT JOIN means: include all records (here exactly one) from the table on the right, and the fields from the left-hand table where the join fields are equal.
A plain SELECT on this join would show either one record with data in all fields (the UPDATE situation) or only the data from the dummy table with empty fields from the target table (the INSERT situation).
If we now do an UPDATE on this RIGHT JOIN, in the case of empty fields a new record is written back.
In the other case the RIGHT JOIN simply generates an UPDATE of the existing record.
For use with our RDBMS Interface, we need to split the problem into more than one query:
UPDATE of dummy table;
UPDATE .... RIGHT JOIN...;
This is no problem because the RDBMS Interface supports multiple statements in one SQL file.
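Where the RIGHT JOIN trick is not available, the same insert-or-update decision can be made procedurally. The sketch below illustrates the logic with Python’s built-in sqlite3 module (table and field names follow the example above; this is not interface code):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Tbl1 (P1 TEXT PRIMARY KEY, V1 REAL)")

def upsert(tag, value):
    """Try to UPDATE the row for this tag; INSERT if no row exists yet."""
    cur = con.execute("UPDATE Tbl1 SET V1 = ? WHERE P1 = ?", (value, tag))
    if cur.rowcount == 0:        # no record for this tag yet -> INSERT
        con.execute("INSERT INTO Tbl1 (P1, V1) VALUES (?, ?)", (tag, value))

upsert("sinusoid", 42.0)   # first call inserts a new record
upsert("sinusoid", 43.5)   # second call updates the existing record
```

A stored procedure achieves the same effect inside the RDBMS, which is the approach the interface supports directly.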
Limitations and future enhancements
Due to the wide range of RDBs and the complexity of this Interface, the functionality is not fixed. ODBC itself is also still being improved by Microsoft, as can be seen from the frequent updates of the ODBC Driver Manager, ODBC API and ODBC library. During development we found a number of limitations which could not be solved in this version:
❑ The Interface requires ODBC Driver Manager 3.x, since we had to use more advanced functions for error detection and robustness (RDB reconnect). The ODBC Driver Manager update comes free with a number of Microsoft products or can be downloaded from the Microsoft Web page.
Version 1.28 of the RDBMSPI Interface (available on request from TechSupport) can still be used for ODBC 2.x Driver Manager.
❑ PI point changes are only recorded while the interface is running. Information about a PI point modified during interface down-time will be lost.
Enhancements planned for future versions:
❑ Automate login configuration via connection dialog
❑ Overcome 80 char limit of Extended Descriptor
❑ Support all point class attributes for the placeholder AT.ATTRIBUTE
❑ Tag configuration and ODBC testtool
❑ Scan based output
❑ Output of aggregate data (piar_calculation instead of sourcetag)
Revision History
|Date |Author |Comments |
|24-Jan-97 |BB, MF |50 % draft |
|20-Mar-97 |BB, MF |Preliminary Manual |
|10-Dec-97 |BB |Release Manual Version 1.21 |
|18-Sep-98 |BB |More details added |
| | |related to RDBMS Interface Version 1.27 |
|06-Nov-98 |BB |Release Manual Version 1.28 |
|29-Nov-98 |MF |50 % draft of Version 2 |
|01-Feb-99 |BB, MF | |
|25-Feb-99 |MH,MF |Examples tested and corrected |
|04-Jun-99 |BB |Release Version 2.08 |
|24-Mar-00 |MF |Testplan 2.14 (SQL Server 7.0,Oracle8, DB2 Ver.5) |
|16-May-00 |BB |Manual Update for Release 2.14 |
| | | |
| | | |
| | | |