To run and use Scheer PAS Process Mining, you need a database to store the collected statistical and tracing data. Scheer PAS Process Mining can be connected to a MySQL, an Oracle, or an SQL Server database. To set up the analytical database, you need a valid installation of one of these three systems.
Note: When preparing the database installation, please consider the following: the Process Mining database contains analytic data for statistical analysis and can reach a considerable size. In contrast to databases that store application data, however, it does not need to be highly available or to fulfill strict recovery requirements. As a consequence, a point-in-time recovery may not be possible, but this does not cause problems, because lost data can simply be reloaded from the BRIDGE logs.
The analytical database is composed of two parts:
- The first part contains stored procedures and working tables used during the collection of data.
- The second part contains the front-end tables, which are queried by the Process Mining services to present the data in a user interface.
Setting up a MySQL Database
To use a MySQL database, you need to
- create an empty schema.
- grant the Process Mining user SELECT privileges on table mysql.proc.
```sql
GRANT SELECT ON `mysql`.`proc` TO '<user>'@'<mysql server>';
```
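Taken together, the MySQL preparation might look like the following sketch. The schema name pana, the user name pana_user, and the password are placeholders chosen for illustration, not names mandated by the product; substitute the values your installation uses.

```sql
-- Create an empty schema for the Process Mining data
-- (schema name "pana" is a placeholder).
CREATE DATABASE pana;

-- Create the user the TrxLogsETL service will connect with
-- (user name, host, and password are placeholders).
CREATE USER 'pana_user'@'<mysql server>' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON pana.* TO 'pana_user'@'<mysql server>';

-- Required so that the service can read stored procedure metadata:
GRANT SELECT ON `mysql`.`proc` TO 'pana_user'@'<mysql server>';
```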
All tables and procedures will be created by the TrxLogsETL service at startup.
Note: Make sure that you use MyISAM as your storage engine with MySQL to avoid limitations of the MySQL key length (see Troubleshooting the Process Mining Installation).
Note: With MySQL you cannot use Bulk Upload.
Note: If you want to install the Process Mining database on a MySQL database using Amazon Web Services RDS, you may get the following error:
To solve this problem, enable
Setting up an SQL Server Database
To use an SQL Server database, you only need to create an empty schema. All tables and procedures will be created by the TrxLogsETL service at startup.
If you want to use Bulk Upload with SQL Server, you need to grant the database user associated with the TrxLogsETL service permission to administer bulk operations, for example:
```sql
GRANT ADMINISTER BULK OPERATIONS TO <db_user>;
```
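As a sketch, creating the empty database and granting the bulk-load permission could be combined as follows. The database name pana, the login pana_user, and the password are placeholder names for illustration only.

```sql
-- Create an empty database for the Process Mining data
-- ("pana" is a placeholder name).
CREATE DATABASE pana;
GO

-- Create a login and map it to a user in the new database
-- (login name and password are placeholders).
CREATE LOGIN pana_user WITH PASSWORD = '<password>';
GO
USE pana;
GO
CREATE USER pana_user FOR LOGIN pana_user;
GO

-- Needed only for Bulk Upload; server-scope permissions are
-- granted from the master database.
USE master;
GO
GRANT ADMINISTER BULK OPERATIONS TO pana_user;
GO
```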
Setting up an Oracle Database
To use an Oracle database with ETL by inserts, you need to create an empty schema. All tables and procedures will be created by the TrxLogsETL service at startup.
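In Oracle, creating an "empty schema" amounts to creating a database user with enough privileges and quota to own the tables. A minimal sketch follows; the user name pana_user and the password are placeholders, and the exact privilege set may need adjusting to what the TrxLogsETL service creates in your installation.

```sql
-- Create the schema owner for the Process Mining data
-- (user name and password are placeholders).
CREATE USER pana_user IDENTIFIED BY <password>;

-- Allow the user to log in and to create the objects the
-- TrxLogsETL service builds at startup.
GRANT CREATE SESSION, CREATE TABLE, CREATE PROCEDURE TO pana_user;

-- Allow the tables to consume space in the default tablespace.
ALTER USER pana_user QUOTA UNLIMITED ON USERS;
```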
Setup for ETL by Bulk Upload
1. On your Oracle server, create a work directory that will contain the logs to be loaded into the database. The collector services TrxLogsCollector and TrxLogsETL will use this directory.
2. Open an SQL*Plus command shell and connect to the Oracle database with an administration account.
3. Make the directory created in step 1 known to Oracle. Create a reference using:

```sql
CREATE DIRECTORY oracleWork AS '<path to folder>';
GRANT READ ON DIRECTORY oracleWork TO <database user the dashboard services will use>;
```

Note: The name of the work directory must be oracleWork.
All tables and procedures will be created by the TrxLogsETL service at startup.
Access to sys.dbms_crypto
For both setup scenarios, the database administrator needs to grant the Process Mining database user execute access to package sys.dbms_crypto:

```sql
GRANT EXECUTE ON SYS.DBMS_CRYPTO TO <dashboard_user>;
```