Decision Making with a Random Walk in a Discrete Time Markov Chain

Authors

  • R. Chelvier
  • G. Horton
  • C. Krull
  • B. Rauch-Gebbensleben
Abstract

The paper describes a Markov model for multi-criteria and multi-person decision making. The motivation results from a demand observed in the early stages of an innovation process. Here, many alternatives need to be evaluated by several decision makers with respect to several criteria. The model derivation and description can be split into the evaluation process and the decision process. The pairwise comparisons can be combined by weighting them according to the importance of the criteria and decision makers, resulting in a discrete-time Markov chain. A random walk on this DTMC models the decision process, where a longer state sojourn time implies a better alternative. We believe that this model meets the demands of the early stages of an innovation process.

1 Description of the problem

We consider the problem of evaluating alternatives in the early stages of an innovation process. In this application area, alternatives need to be evaluated by several decision makers with respect to different criteria. There are possibly many alternatives that need to be considered; therefore it is necessary to make the evaluation process fast and simple. The problem parameters are the following:

• A possibly large number of alternatives may be involved
• Several decision makers may be involved
• Several evaluation criteria (both quantifiable and soft) may be involved
• The evaluation criteria may be weighted according to relevance
• The opinions of the decision makers may be weighted according to expertise
• Little or no information is available about the alternatives; the decision makers base their evaluations on intuition or guesswork
• The decision makers have to decide quickly due to the possibly large number of alternatives

The following example describes the intended application. During an innovation workshop a large number of ideas are produced. Each idea is described only by a title and a short characterisation.
An innovation team must identify the top ten ideas to bring them forward to the first stage of a stage-gate process [4]. Little or no quantifiable information is available about the ideas; therefore it is not possible to rank the ideas based on objective criteria. Instead, only subjective impressions are available at this stage, enabling decisions of the form “A is better than B” with respect to a given criterion. Each member of the team might have a different area of expertise or competence, and each of them can be assigned a different weight for different criteria. For each pairwise comparison, the decision maker and the criterion are noted. The following questions need to be addressed: How can an evaluation process and a decision process with the specified parameters be modelled? How can inconsistent or non-transitive evaluations, which can occur due to the subjective nature of the evaluations, be handled? How can the top alternatives be determined?

2 State of the art

In the field of multi-criteria decision making (MCDM), many methods have been developed for specialised applications. Detailed information about MCDM can be found, for example, in [10], [8] and [1]. Thirty available methods are discussed in [7]. Two more general methods which can be used in the early stages of an innovation process are AHP (the Analytic Hierarchy Process) [2], [11] and cost-benefit analysis (CBA) [3]. However, AHP and CBA are not directly applicable to the intended application. AHP does not support multiple decision makers unless additional aggregation strategies are applied to merge the individual evaluation results [9], [13], [6]. Furthermore, AHP requires consistent, transitive evaluations in order to compute a valid result. CBA needs measurable and quantifiable criteria to compute a valid result. However, inconsistent evaluations and soft criteria are often present in the early stages of an innovation process.
Accordingly, AHP and CBA cannot be the preferred methods to evaluate alternatives under these circumstances.

3 A DTMC-based model for the evaluation and decision process

This section describes how to model both the evaluation process and the decision process. The evaluation process is similar to that in AHP, but with only one level of “is better than”. In the given application it is not feasible to use more than one level of difference, because no stricter differentiation is possible, for two reasons: firstly, little or no information about the alternatives may be available, and secondly, soft criteria are used. The model of the decision process is described using an analogous situation.

3.1 Evaluation process

Based on the assumption that little or no information about the alternatives is available, the evaluation process is implemented as pairwise comparisons between all alternatives, with respect to all criteria. These comparisons only ask for a decision of the form “is better than”. This solution allows comparisons according to non-measurable evaluation criteria, such as taste or preference. We assign weights to the decision makers according to their expertise and to the evaluation criteria according to their relevance, and scale these weights to sum to one. We build a weighted directed graph, where each comparison adds an edge from the less preferred alternative to the better one. The edge weight results from the weights of the criterion and the decision maker. After adding the edges corresponding to the comparisons to the graph, we scale the edge weights such that the sum of all outgoing edges of a node is one. The resulting graph is a discrete-time Markov chain, where the edges lead from the less preferred to the better alternatives. A detailed description of the mathematics behind the evaluation process can be found in [4].
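The graph construction described above can be sketched as follows. This is a minimal illustration under our own assumptions, not code from the paper: the helper name build_dtmc, the tuple format for comparisons, and the treatment of alternatives with no outgoing edges (a self-loop) are ours.

```python
import numpy as np

def build_dtmc(n_alts, comparisons):
    """Build a DTMC transition matrix from weighted pairwise comparisons.

    comparisons: list of (loser, winner, weight) tuples. Each comparison
    adds an edge from the less preferred alternative to the better one;
    the weight combines the criterion weight and the decision maker weight.
    """
    P = np.zeros((n_alts, n_alts))
    for loser, winner, w in comparisons:
        P[loser, winner] += w
    # Scale each row so the outgoing edge weights of a node sum to one.
    for i in range(n_alts):
        row_sum = P[i].sum()
        if row_sum > 0:
            P[i] /= row_sum
        else:
            # Assumption: an alternative that was never judged worse than
            # any other keeps the walk in place (self-loop).
            P[i, i] = 1.0
    return P

# Example: alternative 2 beats both others, alternative 1 beats 0.
P = build_dtmc(3, [(0, 1, 0.5), (0, 2, 0.5), (1, 2, 1.0)])
```

Each row of the resulting matrix is a probability distribution, so the graph can be read directly as a discrete-time Markov chain.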
In the evaluation process we have participants p_k with k = 1, ..., K, criteria c_l with l = 1, ..., L, and alternatives a_m with m = 1, ..., M. Each participant p_k makes pairwise comparisons between two alternatives a_m1 and a_m2 with respect to criterion c_l. We denote this as follows: p_k(c_l): a_m1 > a_m2. Next we need the coefficients α_kl to assign a weight to each evaluation made by the participants. Each coefficient α_kl contains information about the relevance of participant p_k with respect to criterion c_l and describes the importance of criterion c_l, where larger values imply greater importance. We store the coefficients in a matrix A of dimension K×L; they satisfy 0 ≤ α_kl ≤ 1 and ∑_{k=1}^{K} ∑_{l=1}^{L} α_kl = 1.
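The abstract states that a random walk on the DTMC models the decision process, with longer sojourn times indicating better alternatives. A minimal sketch of how such sojourn fractions could be estimated by simulation (our illustration, assuming a simple step-counting walk; the function name and parameters are not from the paper):

```python
import numpy as np

def sojourn_fractions(P, steps=20_000, start=0, seed=0):
    """Estimate the fraction of time a random walk on the DTMC spends in
    each state; higher fractions suggest better alternatives."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    visits = np.zeros(n)
    state = start
    for _ in range(steps):
        visits[state] += 1
        # Move to the next state according to the current row of P.
        state = rng.choice(n, p=P[state])
    return visits / steps

# Example: state 1 is preferred more often, so the walk lingers there.
P = np.array([[0.1, 0.9],
              [0.5, 0.5]])
fractions = sojourn_fractions(P)
```

When the chain is ergodic, these fractions converge to the stationary distribution, so the ranking could equivalently be read off the Perron vector of the transition matrix.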


Publication date: 2009