Hello ARIS Community,
As someone new to ARIS process modeling, I'm reaching out for insights on best practices regarding severity assignment in semantic checks. In our organization, a significant majority (90%) of semantic checks are classified as "errors," with a small number labeled as "warnings." We currently do not utilize the "note" category. This approach seems overly stringent, and I'm contemplating initiating a re-evaluation of these severity classifications.
I'm eager to learn from the experiences of other companies. Could you please share your best practices or guidelines for assigning severity in semantic checks? Your input would be greatly appreciated.
Thank you
Hi,
If you use the standard semantic checks, don't worry too much :) These standard scripts implement only very basic, generic rules. But every project/enterprise develops its own conventions and rules, so it is usually necessary to develop your own checks that cover the logic of your modeling convention.
IMHO, most of the out-of-the-box semantic checks are good as examples but not for use in production (maybe one exception is the BPMN model check).
Hi Alexander,
Thanks for your insight.
You are right, over time we did develop some customized semantic checks, and I think the severities just seem too strict. For instance, if a "description" is missing, the semantic check returns an "error". In my opinion this could be recategorized as a "warning" or even a "note", because we do not formally require a description on every BPMN task/activity; it is optional, but the semantic check in ARIS is stricter. So what I am looking for is best practices for semantic check rules.
Can you elaborate on your last point? What do you mean by "most of the out-of-the-box semantic checks are good as examples but not for use in production (maybe one exception is the BPMN model check)"?
Thanks.
What I mean is that all those semantic check scripts have the same syntax as standard reports (I'm sure you are aware of this).
This code could be reused in standard reports (you can copy it), but instead of starting the checks model by model, you can select, e.g., a folder and check all diagrams from all of its subfolders at once. And yes, in this case we start a report script, not the semantic check script.
The standard output of running the standard semantic checks on many models at once tends to become ugly. Technically, you could also take the semantic check code and develop nicer output for the checks you need, and still run it as a semantic check with highlighting in the models.
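To illustrate the folder-based approach, here is a minimal sketch in plain JavaScript. The folder and model shapes are hypothetical stand-ins, not the real ARIS Script API; in ARIS you would work with the group and model objects of the reporting API instead. The idea is the same: recursively collect every model below a selected folder, then run a check (here, a trivial missing-description check) over all of them in one pass.

```javascript
// Illustrative sketch, NOT the ARIS API: recursively collect all models
// from a folder tree, then run one check on each collected model.

function collectModels(folder) {
  // Gather this folder's own models plus those of all subfolders.
  let models = folder.models.slice();
  for (const sub of folder.subfolders) {
    models = models.concat(collectModels(sub));
  }
  return models;
}

function checkModel(model) {
  // A trivial check: flag every task whose description is empty.
  return model.tasks
    .filter(t => !t.description)
    .map(t => ({ model: model.name, task: t.name, finding: "missing description" }));
}

// Hypothetical folder tree standing in for a database group hierarchy.
const root = {
  name: "Processes",
  models: [
    { name: "Order handling", tasks: [{ name: "Assess damage", description: "" }] }
  ],
  subfolders: [
    {
      name: "Sales",
      models: [
        { name: "Quotation", tasks: [{ name: "Create quote", description: "Draft and send the quote." }] }
      ],
      subfolders: []
    }
  ]
};

// One pass over the whole tree: every finding, from every subfolder.
const findings = collectModels(root).flatMap(checkModel);
```

With a structure like this, one report run produces a single consolidated result list instead of one output per model.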
The decision on the severity is entirely up to you. Many people argue that no task should be without a description. The name of the task (usually "imperative + object name", e.g. "assess damage") is simply too short to be unambiguous. Only in the description does the process owner properly describe what is supposed to happen. Since descriptions are an important part of the process owner's responsibility, they are often considered mandatory before a process is approved.
Another idea is this: the development of your processes could pass through several "quality gates". In that case you might provide a different semantic check profile for each quality gate, with more or fewer checks, or different severities.
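The quality-gate idea can be sketched as a simple severity lookup. All names here (gate names, check names, the default severity) are illustrative assumptions, not ARIS configuration: the point is only that the same check can be a "note" in an early gate and an "error" at approval.

```javascript
// Sketch of per-quality-gate severity profiles (names are illustrative):
// each gate maps check identifiers to the severity enforced at that gate.
const profiles = {
  draft:    { "missing-description": "note",    "naming-convention": "warning" },
  review:   { "missing-description": "warning", "naming-convention": "error" },
  approval: { "missing-description": "error",   "naming-convention": "error" }
};

function severityFor(check, gate) {
  // Look up the severity of a check at a given quality gate;
  // fall back to "note" if the gate or check is not listed.
  const profile = profiles[gate] || {};
  return profile[check] || "note";
}
```

A missing description would then merely be noted while a process is in "draft", but would block it at the "approval" gate, which matches the idea of tightening checks as a process matures.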