The risk evaluation process is integrated with procedures for handling vague and numerically imprecise probabilities and utilities. A body of empirical evidence has shown that many managers would welcome new ways of highlighting catastrophic consequences, as well as means of evaluating decision situations involving high risks. When events occur frequently and their consequences are not severe, it is relatively simple to calculate the risk exposure of an organisation, as well as a reasonable premium when an insurance transaction is made, relying on variations of the principle of maximising the expected utility. When, on the other hand, the frequency of damages is low, the situation is considerably more difficult, especially if catastrophic events may occur. When the quality of estimates is poor, e.g., when evaluating low-probability/high-consequence risks, the customary use of quantitative rules together with unrealistically precise data can be misleading as well as harmful.

We point out some problematic features of evaluations performed using utility theory and criticise the demand for precise data in situations where none is available. As an alternative to traditional models, we suggest a method that allows for interval statements and comparisons and thus does not require numerically precise statements of probability, cost, or utility. To attain a reasonable level of security, and because managers have been shown to focus on large losses, we argue that a risk constraint should be imposed on the analysis. Strategies are then evaluated relative to a set of such constraints, which capture how risky each strategy is. The shortcomings of utility theory can thus in part be compensated for by the introduction of risk constraints.
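To make the interval-based evaluation concrete, the following is a minimal sketch, assuming each outcome carries interval-valued probabilities and utilities and that the risk constraint bounds the worst-case probability of a catastrophic outcome. All names (Outcome, lower_expected_utility, max_catastrophe_prob, the threshold values) are illustrative assumptions, not the paper's actual procedure; the greedy bound computation is a standard shortcut for the underlying linear programme over interval probabilities.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Outcome:
    p_low: float   # lower bound on the outcome's probability
    p_high: float  # upper bound on the outcome's probability
    u_low: float   # lower bound on the outcome's utility
    u_high: float  # upper bound on the outcome's utility


def lower_expected_utility(outcomes: List[Outcome]) -> float:
    """Worst-case expected utility over all probability assignments that
    respect the intervals and sum to one: start each outcome at its lower
    probability bound, then push the remaining mass onto the outcomes
    with the lowest utilities (the greedy solution of the underlying LP)."""
    remaining = 1.0 - sum(o.p_low for o in outcomes)
    if remaining < 0:
        raise ValueError("lower probability bounds sum to more than 1")
    probs = [o.p_low for o in outcomes]
    for i in sorted(range(len(outcomes)), key=lambda i: outcomes[i].u_low):
        extra = min(remaining, outcomes[i].p_high - outcomes[i].p_low)
        probs[i] += extra
        remaining -= extra
    if remaining > 1e-9:
        raise ValueError("upper probability bounds sum to less than 1")
    return sum(p * o.u_low for p, o in zip(probs, outcomes))


def max_catastrophe_prob(outcomes: List[Outcome], threshold: float) -> float:
    """Worst-case probability that the realised utility falls below the
    catastrophe threshold. An outcome counts as potentially catastrophic
    whenever its utility interval reaches below the threshold."""
    cat_high = sum(o.p_high for o in outcomes if o.u_low < threshold)
    rest_low = sum(o.p_low for o in outcomes if o.u_low >= threshold)
    return min(cat_high, 1.0 - rest_low)


def admissible(strategies: List[Tuple[str, List[Outcome]]],
               threshold: float = -100.0,
               risk_limit: float = 0.05) -> List[Tuple[str, float]]:
    """Discard strategies that violate the risk constraint, then rank the
    survivors by their worst-case expected utility."""
    survivors = [(name, lower_expected_utility(outs))
                 for name, outs in strategies
                 if max_catastrophe_prob(outs, threshold) <= risk_limit]
    return sorted(survivors, key=lambda s: s[1], reverse=True)


if __name__ == "__main__":
    safe = [Outcome(0.6, 0.8, 10, 20), Outcome(0.2, 0.4, -5, 0)]
    gamble = [Outcome(0.85, 0.95, 40, 60), Outcome(0.05, 0.15, -500, -200)]
    # The gamble has the higher expected utility, but a catastrophic loss
    # cannot be ruled out confidently enough, so the constraint drops it.
    print(admissible([("safe", safe), ("gamble", gamble)]))
```

In this sketch the risk constraint acts as a filter prior to ranking: a strategy with an attractive expected-utility interval is still discarded if the worst-case probability of a catastrophic outcome exceeds the prescribed limit, which mirrors the role the constraints play in the analysis.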