To run and use Scheer PAS Process Mining you need a database to store the collected statistical and tracing data. Scheer PAS Process Mining can be connected to a MySQL, an Oracle, or an SQLServer database. To set up the analytic database, you need a valid installation of one of these three database systems.
For the restrictions that apply to MySQL, refer to the excerpt mysql_restriction in the Process Mining Installation Guide.
Infonote |
---|
When preparing the database installation, please consider the following: The Process Mining database contains analytic data for statistical analysis and can reach a considerable size. In contrast to databases that store application data, however, it does not need to be highly available or fulfill strict recovery requirements. To prevent the Process Mining database from growing to an unmanageable size, you should
- use a separate database to store the Process Mining data.
- run this separate database in a less strict recovery mode (e.g. NOARCHIVELOG in Oracle).
As a consequence, a point-in-time recovery may not be possible, but this does not cause problems: lost data can simply be reloaded from the BRIDGE logs. |
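Switching an Oracle database to the less strict NOARCHIVELOG mode mentioned above can be sketched as follows. This is a generic Oracle procedure, not specific to Process Mining: the database must be shut down and mounted (not open), and you need administrator privileges. Check your backup policy first, since only offline backups are possible in this mode.

```sql
-- Connect as an administrator, e.g. in SQL*Plus: CONNECT / AS SYSDBA
SHUTDOWN IMMEDIATE;           -- close the database cleanly
STARTUP MOUNT;                -- mount the database without opening it
ALTER DATABASE NOARCHIVELOG;  -- disable archive logging
ALTER DATABASE OPEN;          -- reopen the database for normal use
```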
The analytical database is composed of two parts:
- The first part contains stored procedures and working tables used during the collection of data.
- The second part contains the front-end tables which are queried by the Process Mining services to present the data in a user interface.
Setting up a MySQL Database
To use a MySQL database, you need to
- create an empty schema.
- grant the Process Mining user SELECT privileges on table mysql.proc.
Code Block |
---|
GRANT SELECT ON `mysql`.`proc` TO '<user>'@'<mysql server>'; |
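Creating the empty schema and the Process Mining user can be sketched as follows. The schema name processmining and the password are illustrative placeholders; the source does not specify which privileges are needed on the dedicated schema, so granting all privileges on it is a simple, common choice for a schema used by a single service.

```sql
-- Create the empty schema for the Process Mining data
-- (processmining is a placeholder name).
CREATE DATABASE processmining CHARACTER SET utf8;

-- Create the user the Process Mining services will connect with.
CREATE USER '<user>'@'<mysql server>' IDENTIFIED BY '<password>';

-- Allow the service to create its tables and procedures at startup
-- (assumption: full privileges on the dedicated schema).
GRANT ALL PRIVILEGES ON processmining.* TO '<user>'@'<mysql server>';
FLUSH PRIVILEGES;
```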
All tables and procedures will be created by the TrxLogsETL service at startup.
Info |
---|
title | Also consider the following hints |
---|
- To use a MySQL 5.5 or 5.6 database, you need to adjust the database settings and set innodb-large-prefix to true. Refer to the documentation of innodb-large-prefix for more information.
- Note: With MySQL you cannot use Bulk Upload. |
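The innodb-large-prefix hint above is a server setting. A sketch of the corresponding entries in the MySQL configuration file (my.cnf / my.ini): on MySQL 5.5 and 5.6 the option only takes effect together with the Barracuda file format and file-per-table tablespaces, so those two settings are included as well.

```ini
[mysqld]
innodb-large-prefix   = ON
innodb-file-format    = Barracuda
innodb-file-per-table = 1
```

Restart the MySQL server after changing these settings.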
Setting up an SQLServer Database
To use an SQLServer database, you only need to create an empty schema. All tables and procedures will be created by the TrxLogsETL service at startup. If you want to use Bulk Upload with SQLServer, you need to provide the database user that is associated with the TrxLogsETL service with permissions to administer bulk operations, e.g.:
Code Block |
---|
GRANT ADMINISTER BULK OPERATIONS TO <db_user>; |
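Creating the empty database and a login for the service can be sketched as follows. The database name processmining is a placeholder, and making the user a member of db_owner is an assumption that keeps things simple on a dedicated database; note that ADMINISTER BULK OPERATIONS is a server-level permission and is granted in master.

```sql
-- Create the empty database for the Process Mining data.
CREATE DATABASE processmining;
GO

-- Create a login and a matching database user for the services.
CREATE LOGIN <db_user> WITH PASSWORD = '<password>';
GO
USE processmining;
CREATE USER <db_user> FOR LOGIN <db_user>;
ALTER ROLE db_owner ADD MEMBER <db_user>;  -- assumption: full rights on the dedicated DB
GO

-- Required only if you want to use Bulk Upload:
USE master;
GRANT ADMINISTER BULK OPERATIONS TO <db_user>;
GO
```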
Setting up an Oracle Database
To use an Oracle database with ETL by inserts, you only need to create an empty schema. All tables and procedures will be created by the TrxLogsETL service at startup.
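In Oracle, creating the empty schema means creating a dedicated database user. A sketch (tablespace users and the quota are illustrative assumptions; the CONNECT and RESOURCE roles match the minimum privileges listed below for the Bulk Upload setup):

```sql
-- Create the Process Mining database user (= schema).
CREATE USER <db_user> IDENTIFIED BY <password>
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;

-- Allow the TrxLogsETL service to connect and to create
-- its tables and procedures at startup.
GRANT CONNECT, RESOURCE TO <db_user>;
```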
Setup for ETL by Bulk Upload
On your Oracle server, create a work directory that will contain the logs to be loaded into the database. The collector services TrxLogsCollector and TrxLogsETL will use this directory. Open an SQLPlus command shell and connect to the Oracle database with an administration account.
The Oracle database user needs to be granted the following minimum privileges for Process Mining to work:
Code Block |
---|
GRANT CONNECT, RESOURCE TO <database user the Process Mining services will use>; |
Make the directory created in step 1 known to Oracle. Create a reference using:
Code Block |
---|
|
CREATE DIRECTORY oracleWork AS '<path to folder>';
GRANT READ ON DIRECTORY oracleWork TO <database user the Process Mining services will use>; |
Note |
---|
The name of the work directory must be oracleWork. |
All tables and procedures will be created by the TrxLogsETL service at startup.
Access to sys.dbms_crypto
For both setup scenarios, the database administrator needs to grant access to the package sys.dbms_crypto to the Process Mining database user:
Code Block |
---|
|
GRANT EXECUTE ON SYS.DBMS_CRYPTO TO <database user the Process Mining services will use>; |