Algorithmica (2016) 74:1224–1266
DOI 10.1007/s00453-015-9998-5

Maximum Likelihood Analysis of the Ford–Fulkerson Method on Special Graphs

Ulrich Laube¹ · Markus E. Nebel¹

Received: 17 February 2014 / Accepted: 1 April 2015 / Published online: 10 April 2015
© Springer Science+Business Media New York 2015

Abstract We present original average-case results on the performance of the Ford–Fulkerson maxflow algorithm on grid graphs (sparse) and random geometric graphs (dense). The analysis technique combines experiments with probability generating functions, stochastic context-free grammars and an application of the maximum likelihood principle, enabling us to make statements about the performance where a purely theoretical approach has little chance of success. The method lends itself to automation, allowing us to study more variations of the Ford–Fulkerson maxflow algorithm with different graph search strategies and several elementary operations. A simple depth-first search enhanced with random iterators provides the best performance on grid graphs. For random geometric graphs a simple priority-first search with a maximum-capacity heuristic provides the best performance. Notable is the observation that randomization improves the performance even when the inputs are created from a random process.

Keywords Analysis of algorithms · Average case · Generating functions · Ford–Fulkerson maxflow · Maximum likelihood analysis · Grid graphs · Random geometric graphs · Stochastic context-free grammars

This work is supported by DFG Grants NE 1379/2-1 and NE 1379/3-1.

✉ Ulrich Laube, laube@informatik.uni-kl.de · Markus E. Nebel, nebel@informatik.uni-kl.de

¹ Fachbereich Informatik, Technische Universität Kaiserslautern, Gottlieb-Daimler-Straße, 67663 Kaiserslautern, Germany

1 Introduction

Predicting the performance and making decisions about implementation details based on worst-case bounds is a tricky business. Often there is a large gap between the worst-case bound and the actual performance of the algorithm. Thus the worst-case bounds are of limited use for a practitioner.

The network-flow problem is of interest not only because numerous efficient algorithms for solving this problem have been devised, but also because many reductions to it allow a whole range of distribution, matching and cut problems to be solved as well. In the case of the Ford–Fulkerson method the known worst-case results are products of conservative upper bounds on the number of augmenting paths and the work per path. For example, on a grid graph with 400 vertices, 760 edges and an edge capacity limit of 100, the bound on the number of augmenting paths is as high as 40,000, while we actually observe only around 35 augmenting paths with breadth-first search (bfs) in experiments. For priority-first search (pfs) the bound is 7000, but we only observe around 5 paths on average. Predicting the performance and making decisions about implementation details based on these bounds is futile. An alternative would be an average-case analysis, but even for an algorithm of moderate complexity such as the augmenting-path method this is a difficult task.

Despite the maximum-flow problem being one of the better understood combinatorial optimization problems, work on its algorithms has mainly concentrated on giving increasingly better worst-case bounds, as summarized in [13]. Experimental work on algorithms for computing the maximum flow has an equally long history; see [14] for blocking flows or [2] for the push-relabel method.

We will use a hybrid approach, first presented by the authors in [17], to obtain original average-case results. The maximum likelihood analysis involves a formal model of the algorithm's behavior, derived from the specification of the algorithm, that is subsequently trained using simulation runs on sample inputs. One advantage is that the method lends itself to automation, allowing us to study more variations of the algorithm and different sample input sets. Another advantage is that we can use independent results to cross-check the derived model and gain confidence that it describes the algorithm's behavior adequately. Furthermore, the method does not just count the overall number of times an elementary operation is executed; it provides the cost contributions of the different parts of an algorithm, not just confirming that one algorithm invokes fewer elementary operations than another but also hinting at why. Last but not least, aiming at the average-case performance of an algorithm allows the consideration of randomized variants of an algorithm in a homogeneous way. This method has been implemented in our tool MaLiJAn,¹ which has already been used successfully to investigate the new Java 7 dual-pivot quicksort, see [25].

In the next section a brief outline of the main ideas of the maximum likelihood analysis is given. For additional details please refer to [17]. The results presented in Sect. 9 are not only useful for practitioners, as they may also guide a theoretician in a common average-case analysis by perhaps ruling out certain cases or giving hints which part of the algorithm influences a certain performance parameter.

¹ The tool can be downloaded at http://wwwagak.cs.uni-kl.de/malijan.html.

We selected two graph types: grid graphs and random geometric graphs. The reason for looking into grid graphs is that they have

a rich enough structure that allows us to prove at least some results about the properties of the algorithms working on them. They are easily scalable, similar graphs may be found in practice (e.g. a pixel grid in computer graphics), and they are yet sufficiently challenging to be interesting. The order of the neighbors in an adjacency-list representation is a degree of freedom that might be exploited to speed up an algorithm. Due to their regular structure, grid graphs provide a nice setup to study this effect. The reason for looking into random geometric graphs is that the triangle property is often more realistic (e.g. brain cells connect to nearby ones, as do communication facilities that are distributed across an area) than the independence of the edges as in the Erdős–Rényi model. Note that the algorithms we are investigating work on all graphs; they do not use any special property of grid graphs or random geometric graphs to compute the result faster on those graphs.

The main questions addressed here are a comparison of the well-known graph search strategies depth-first search (dfs), breadth-first search (bfs) and priority-first search (pfs) in the Ford–Fulkerson method for determining a maxflow. We are also looking into the typically strong dependence of a graph algorithm's performance on the order of the vertices' adjacency lists by studying the effect that the use of random iterators in the graph search strategies has. They basically turn a graph algorithm into a randomized algorithm (denoted dfsrnd and bfsrnd in the remainder of this paper).

2 Maximum Likelihood Analysis

This section contains a brief introduction to the maximum likelihood analysis method introduced in [17]. There are three main ideas to this analysis method: First, we know from probability theory

that the partial derivative of a probability generating function (pgf) evaluated at 1 gives the expectation of a discrete random variable. To find the average of a performance parameter of an algorithm, we regard it as a random variable that depends on the distribution of the inputs. Our goal is then to derive the pgf from the algorithm to be analyzed and a sufficient sample of its inputs, using methods from statistical inference.

Second, in analytic combinatorics [10] an idea from [4] is regularly used to solve counting problems by encoding the structure of combinatorial objects into words of formal languages and translating a grammar of the language into ordinary generating functions. An example of how the structure of tries can be encoded with Motzkin words and used in an average-case analysis can be found in [17]. Modifying this approach to incorporate stochastic context-free languages, to allow non-uniform settings, enables us to translate the problem of finding the pgf into constructing an appropriate stochastic context-free grammar, that is, a context-free grammar together with probabilities associated with its rules.

And third, for the purpose of analyzing algorithms a regular language, which can be described by a right-linear grammar derived automatically from the source code of the algorithm, suffices. The grammar describes the traces of the algorithm's execution and is augmented with probabilities for each rule that are determined

from a typical sample set of inputs. Here Fisher's maximum likelihood principle [1] is used to tune the probabilities of this stochastic grammar to capture the non-uniform probability distribution induced by the set of inputs. We call this the maximum likelihood training of the grammar. In the following subsections we explain how the grammar is obtained and trained and how it is transformed into a probability generating function.

2.1 A General Regular Grammar for Analyzing Algorithms

For the purpose of analyzing algorithms, a regular language that can be described by a right-linear grammar derived automatically from the source code of the algorithm is used to describe the traces of the algorithm's execution (a trace is a sequence of line numbers of the executed instructions).

Definition 1 (trace, trace language) Given an algorithm A specified in a low-level language (like Knuth's MMIX assembly code or Java bytecode), we let $L_{\mathcal{A}}$ be the set of line numbers appearing in the specification of algorithm A. The set of line numbers $L_{\mathcal{A}}$ will serve as alphabet. A word from $L_{\mathcal{A}}^{*}$ (a sequence of line numbers) that corresponds to the flow of control that can be observed when algorithm A runs on the input v is called the trace of A on input v, denoted by $T_{\mathcal{A}}(v)$. The set of all words $T_{\mathcal{A}}(v)$ is the trace language of A (the set of all control flows), denoted by $\mathcal{L}_{\mathcal{A}}$.

The reasons for choosing a low-level representation (here Java bytecode) of the algorithm are: First, the low-level representation serves as a kind of normal form with respect to the control flow, as we only have to distinguish between unconditional and conditional jumps and all other instructions. Second, for parameters that do not depend on the particular value of the input or where it is stored in memory, the trace language provides all information for the analysis of algorithm A. While we can analyze parameters like the running time in clock cycles, we usually use coarser units, like the number of comparisons, exchanges, visited nodes, arithmetic operations, etc., which are the typical elementary operations considered in the analysis of algorithms. We only have to assign the various costs to the operations in the specification of algorithm A. Formally this is expressed through a parameter.

Definition 2 (parameter) A parameter is given by a homomorphism $\varphi : L_{\mathcal{A}}^{*} \to \mathbb{N}_0$ with $\varphi(\varepsilon) = 0$ and $\varphi(u \cdot w) = \varphi(u) + \varphi(w)$, where $\cdot$ denotes concatenation and $\varepsilon$ the empty word.

If we let p(v) be the probability of input v and $I_n$ the set of all inputs of size n, then the expected cost of $\varphi$ for all inputs of size n is simply

$$E[\varphi] := \sum_{v \in I_n} p(v)\,\varphi(T_{\mathcal{A}}(v)). \quad (1)$$

Unlike the situation in analytic combinatorics, where a new formal language (a new encoding) has to be found for every new class of combinatorial objects to be analyzed, we define a general grammar that describes the traces of an algorithm's execution. Each rule is augmented with a probability; this stochastic grammar is then a model for the algorithm's behavior. The rules' probabilities will be determined from a typical sample set of inputs.

Definition 3 (trace grammar) A trace grammar $G_{\mathcal{A}} = (N, T, R, P, S)$ is a stochastic context-free grammar (scfg) for algorithm A with line numbers $L_{\mathcal{A}} = \{1, \ldots, m\}$, where $N = \{L_1 = S, L_2, \ldots, L_{m+1}\}$ is a set of variables or

non-terminal symbols. $T = \{1, \ldots, m\}$ is a set of terminal symbols disjoint from N. The set of rules $R = \{r_1, \ldots, r_{m+1}\}$ is a subset of $N \times (T N \cup \{\varepsilon\})$. Its elements $r_j = (L_i, \gamma_j)$ are also called productions. Additionally we denote by $R_i = \{r_j \mid r_j = (L_i, \gamma_j) \in R\}$ the set of rules with the same left-hand side $L_i$. The mapping P from R into (0, 1] assigns each rule $r_j$ its probability $p_j$. We also write $L_i \to p_j : \gamma_j$ for a rule $r_j = (L_i, \gamma_j) \in R$ with $P(r_j) = p_j$. We require that $\sum_{j : r_j \in R_i} p_j = 1$ holds for $i = 1, 2, \ldots, m+1$. This ensures that we have a probability distribution on the rules with the same left-hand side. The symbol $S = L_1 \in N$ is a distinguished variable called the axiom or start symbol. All sets in this definition are finite.

Let $\mathcal{L}(G_{\mathcal{A}})$ be the language generated by the trace grammar $G_{\mathcal{A}}$, that is, the set of all terminal strings or words which can be generated by successively substituting

all non-terminals according to the productions, starting with the axiom S. Such a successive substitution is called a derivation and is written as $\alpha \Rightarrow^{*} \beta$ if there is a possibly empty sequence of rules $r_{i_1}, \ldots, r_{i_k}$ that leads from $\alpha$ to $\beta$. The probabilities of the rules now induce probabilities on the derivations by multiplying the involved rules' probabilities, $p = p_{i_1} \cdots p_{i_k}$. This can be extended to the whole set of terminal strings, that is, the whole language, by summing up the probabilities of the different left-most derivations of each terminal string.

For the trace grammar $G_{\mathcal{A}}$ to actually generate the trace language $\mathcal{L}_{\mathcal{A}} \subseteq \mathcal{L}(G_{\mathcal{A}})$ we have to describe how the low-level specification of the algorithm A is translated into the set of grammar rules. Each line of the low-level code is translated according to the following three simple rules:

1. An unconditional jump from line i to line j is expressed by the rule $L_i \to 1 : i\,L_j$.
2. A conditional jump in line i that may jump to line j is expressed by the rule $L_i \to p_i : i\,L_j \mid 1-p_i : i\,L_{i+1}$.
3. All other instructions yield the rule $L_i \to 1 : i\,L_{i+1}$.

A last production $L_{m+1} \to 1 : \varepsilon$ is added to allow the grammar to generate terminal strings, which are obviously sequences of line numbers of the executed instructions. The parameter homomorphism $\varphi$ will reduce the amount of information further and focus on the chosen elementary operations, but we are free to use all the details from these terminal strings to count, for example, the number of memory accesses if required. By design we ignore the contents of registers or memory locations. Consequently we have the following lemma.

Lemma 1 A trace grammar $G_{\mathcal{A}}$ built according to the three rules presented above generates the trace language of a given algorithm A, that is $\mathcal{L}_{\mathcal{A}} \subseteq \mathcal{L}(G_{\mathcal{A}})$. Furthermore the grammar can be ef
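The three translation rules above lend themselves to a direct implementation. The following is a minimal sketch under assumptions of our own (a hypothetical toy instruction format and helper name, not the authors' MaLiJAn tool): each program line yields rules over the non-terminals $L_1, \ldots, L_{m+1}$, with branch probabilities left as parameters that maximum likelihood training would later estimate from sample runs.

```python
from collections import defaultdict

# Sketch of Definition 3's translation scheme (toy model, not MaLiJAn).
# Instructions are 1-indexed:
#   ('jump', j)   unconditional jump to line j   -> translation rule 1
#   ('branch', j) conditional jump to line j     -> translation rule 2
#   ('op',)       any other instruction          -> translation rule 3
def trace_grammar(program, branch_probs):
    """Return rules as (lhs, prob, terminal, rhs) over L_1 .. L_{m+1}."""
    m = len(program)
    rules = []
    for i, instr in enumerate(program, start=1):
        if instr[0] == 'jump':              # L_i -> 1 : i L_j
            rules.append((i, 1.0, i, instr[1]))
        elif instr[0] == 'branch':          # L_i -> p_i : i L_j | (1 - p_i) : i L_{i+1}
            p = branch_probs[i]
            rules.append((i, p, i, instr[1]))
            rules.append((i, 1.0 - p, i, i + 1))
        else:                               # L_i -> 1 : i L_{i+1}
            rules.append((i, 1.0, i, i + 1))
    rules.append((m + 1, 1.0, None, None))  # L_{m+1} -> 1 : epsilon
    return rules

# Toy program: an operation, a loop branch back to line 1, a final operation.
# The branch probability 0.75 stands in for a maximum-likelihood estimate.
program = [('op',), ('branch', 1), ('op',)]
rules = trace_grammar(program, branch_probs={2: 0.75})

# Definition 3 requires the rule probabilities per left-hand side to sum to 1.
totals = defaultdict(float)
for lhs, p, _, _ in rules:
    totals[lhs] += p
assert all(abs(t - 1.0) < 1e-9 for t in totals.values())
```

Translating each such rule into an equation over generating functions then yields the pgf whose derivative at 1 gives the expected cost, as in Eq. (1).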
