Hello ARIS Community,

As someone new to ARIS process modeling, I'm reaching out for insights on best practices regarding severity assignment in semantic checks. In our organization, a significant majority (90%) of semantic checks are classified as "errors," with a small number labeled as "warnings." We currently do not utilize the "note" category. This approach seems overly stringent, and I'm contemplating initiating a re-evaluation of these severity classifications.

I'm eager to learn from the experiences of other companies. Could you please share your best practices or guidelines for assigning severity in semantic checks? Your input would be greatly appreciated.

Thank you

 

by Alexander Cherednichenko
Posted on Tue, 08/13/2024 - 12:56

Hi,

If you use the standard semantic checks, then don't worry too much :) These standard scripts are predefined for very basic/specific rules, but every project/enterprise develops its own conventions and rules. It is therefore usually necessary to develop your own checks that cover the logic of your modeling convention.

IMHO, most of the 'out-of-box' semantic checks are good as examples but not for use in 'production' (maybe one exception is the BPMN model check).  

by Martina Zacher (Author)
Posted on Thu, 08/15/2024 - 15:53

In reply to by BPS

Hi Alexander, 

thanks for your insight.

You are right, over time we did develop some customized semantic checks, and I think the severities just seem too strict. For instance, if a "description" is missing, the semantic check returns "error". In my opinion this could be recategorized as "warning" or even "note", since we do not formally require a description on each BPMN task/activity; it is optional, but the semantic check in ARIS is stricter. So what I am looking for are best practices on semantic check rules.
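For what it's worth, one way to make such decisions explicit is a small rule-to-severity table, so the convention itself documents which findings block a model. This is only a sketch in plain JavaScript; the rule names are hypothetical, not actual ARIS check identifiers:

```javascript
// Hypothetical severity table for custom semantic checks.
// Rule names are illustrative; map them to your own check IDs.
const SEVERITY = {
  "dangling-connection": "error",   // structurally broken, must be fixed
  "missing-owner": "warning",       // should be fixed before approval
  "missing-description": "note",    // optional per our convention
};

// Group findings by severity so reviewers see blockers first.
function classify(findings) {
  const result = { error: [], warning: [], note: [] };
  for (const f of findings) {
    // Unknown rules default to "error" on the safe side.
    result[SEVERITY[f.rule] ?? "error"].push(f);
  }
  return result;
}

const grouped = classify([
  { rule: "missing-description", object: "Assess damage" },
  { rule: "dangling-connection", object: "Send invoice" },
]);
console.log(grouped.note.length); // the missing description is now only a "note"
```

Changing a rule's severity then becomes a one-line edit to the table rather than a change to the check logic itself.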

Can you elaborate on your last point? What do you mean by "most of the out-of-the-box semantic checks are good as examples but not for use in production (maybe one exception is the BPMN model check)"?

Thanks.

by Alexander Cherednichenko
Posted on Sun, 08/18/2024 - 20:14

In reply to by martina.zacher

What I mean is that all those semantic check scripts have the same syntax as standard reports (I'm sure you are aware of this).

This code can be reused in standard reports (you can copy it), but instead of running the checks model by model, you can select, e.g., a folder and pick up all the diagrams you want to check from all subfolders at once. And yes, in this case we run a report script, not a semantic check script.
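The batch idea can be sketched outside ARIS as a plain recursive walk over a folder tree. In an actual report script the traversal would go through the ArisData group/model API instead of the mock tree below; everything here is illustrative, not ARIS API code:

```javascript
// Mock folder tree standing in for an ARIS group hierarchy.
// A real report script would query groups and models via ArisData.
const root = {
  name: "Processes",
  models: ["Order to Cash"],
  subfolders: [
    { name: "HR", models: ["Onboarding", "Offboarding"], subfolders: [] },
    { name: "Finance", models: ["Invoicing"], subfolders: [] },
  ],
};

// Collect every model from a folder and all of its subfolders.
function collectModels(folder) {
  const models = [...folder.models];
  for (const sub of folder.subfolders) {
    models.push(...collectModels(sub));
  }
  return models;
}

// Run one (dummy) check against every collected model in a single pass.
const report = collectModels(root).map((m) => ({ model: m, ok: m.length > 0 }));
console.log(report.length); // 4 models checked in one run
```

The point is simply that the check logic runs once over the whole subtree instead of being started manually per model.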

by M. Zschuckelt
Posted on Mon, 08/26/2024 - 13:04

In reply to by BPS

The standard output of running (standard) semantic checks on many models at once tends to become ugly. Technically, you could also take the semantic check code and develop nicer output for the checks you need, and still run it as a semantic check with highlighting in the models.

The decision on severity is entirely up to you. Many people argue that no task should be without a description. The name of a task (usually "imperative + object name", e.g. "assess damage") is simply too short to be unambiguous. Only in the description will the process owner properly describe what is supposed to happen. Since descriptions are an important part of the process owner's responsibility, they are often considered mandatory before a process is approved.

Another idea is this: the development of your processes could pass through several "quality gates". In that case you might provide a different semantic check profile for each quality gate, with more or fewer checks and stricter or more lenient severities.

