Zhu, Xiaoyun (2000) Hard vs. soft bounds in probabilistic robustness analysis and generalized source coding and optimal web layout design. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-05042006-131410
Part I: The relationship between hard vs. soft bounds and probabilistic vs. worst-case problem formulations for robustness analysis has been a source of some apparent confusion in the control community, and this thesis attempts to clarify some of these issues. Essentially, worst-case analysis involves computing the maximum of a function that measures performance over some set of uncertainty, whereas probabilistic analysis assumes some distribution on the uncertainty and computes the resulting probability measure on performance. Exact computation is intractable in general in both cases. In the past, most research focused on computing hard bounds on worst-case performance. This thesis explores the use of both hard and soft bounds in probabilistic robustness analysis, and investigates the computational complexity of the problems through extensive numerical experimentation. We focus on the simplest possible problem formulations that we believe reveal the difficulties associated with more general probabilistic analysis. By extending the standard structured singular value (mu) framework to allow for probabilistic descriptions of uncertainty, a probabilistic mu is defined, which characterizes the probability distribution of some performance function. The computation of probabilistic mu involves approximating the level surface of the function in the parameter space, which is even more complex than the worst-case mu computation, a well-known NP-hard problem. In particular, providing sufficiently tight bounds in the tail of the distribution is extremely difficult. This thesis proposes three different methods for computing a hard upper bound on probabilistic mu whose tightness can be tested by comparison with the soft bound provided by Monte-Carlo simulations.
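The soft vs. hard distinction can be illustrated with a toy scalar example (the performance function, uncertainty box, and threshold below are invented for this sketch and are not from the thesis): Monte-Carlo sampling gives a soft estimate of the exceedance probability P(f(delta) > gamma) that holds only with statistical confidence, while a branch-and-bound recursion that prunes sub-boxes using a guaranteed interval upper bound of f yields a hard upper bound.

```python
import random

def soft_bound(perf, sample, gamma, n=10000, seed=0):
    """Monte-Carlo (soft) estimate of P(perf(delta) > gamma)."""
    rng = random.Random(seed)
    hits = sum(perf(sample(rng)) > gamma for _ in range(n))
    return hits / n

def hard_bound(ub, box, gamma, depth=8):
    """Hard upper bound on P(perf(delta) > gamma), delta uniform on box.

    ub(box) must return a guaranteed upper bound of perf over the box;
    a sub-box whose upper bound is <= gamma provably cannot exceed gamma
    and is discarded, while undecided volume is counted in full, so the
    result always over-approximates the true exceedance probability.
    """
    if ub(box) <= gamma:
        return 0.0                      # exceedance impossible on this box
    if depth == 0:
        return 1.0                      # undecided: count the whole sub-box
    # split the widest dimension in half and recurse on both halves
    i = max(range(len(box)), key=lambda k: box[k][1] - box[k][0])
    lo, hi = box[i]
    mid = (lo + hi) / 2
    halves = (box[:i] + [(lo, mid)] + box[i + 1:],
              box[:i] + [(mid, hi)] + box[i + 1:])
    return 0.5 * sum(hard_bound(ub, h, gamma, depth - 1) for h in halves)

# Toy performance function f(d) = d0^2 + d1^2 on the box [-1, 1]^2.
perf = lambda d: d[0] ** 2 + d[1] ** 2
sample = lambda rng: (rng.uniform(-1, 1), rng.uniform(-1, 1))
ub = lambda box: sum(max(lo * lo, hi * hi) for lo, hi in box)

p_soft = soft_bound(perf, sample, gamma=1.0)   # near the true 1 - pi/4
p_hard = hard_bound(ub, [(-1.0, 1.0), (-1.0, 1.0)], gamma=1.0)
assert p_soft <= p_hard    # the soft bound tests the hard bound's tightness
```

Increasing the recursion depth tightens the hard bound toward the true probability; the remaining gap between p_hard and p_soft indicates how conservative the hard bound still is.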
At the same time, the efficiency of the soft bounds can be significantly improved with the information from the hard bound computation. Among the three algorithms proposed, the LC-BNB algorithm is shown by numerical experiments to provide the best average performance on random examples. One particular example is presented at the end to demonstrate the effectiveness of the method.

Part II: The design of robust and reliable networks and network services has become an increasingly challenging task in today's Internet world. To achieve this goal, understanding the characteristics of Internet traffic plays an increasingly critical role. Empirical studies of measured traffic traces have led to the wide recognition of self-similarity in network traffic. Moreover, a direct link has been established between the self-similar nature of measured aggregate network traffic and the underlying heavy-tailed distributions of Web traffic at the source level. This thesis provides a natural and plausible explanation for the origin of heavy tails in Web traffic by introducing a series of simplified models for optimal Web layout design with varying levels of realism and analytic tractability. The basic approach is to view the minimization of the average file download time as a generalization of standard source coding for data compression, but with the design variable being the Web layout rather than the codewords. The results, however, are quite different from those of standard source coding: all of the models produce power-law distributions for a wide variety of user behavior models. In addition, a simulation model of more complex Web site layouts is proposed, with more detailed hyperlinks and user behavior. The throughput of a Web site can be maximized by taking advantage of information on user access patterns and rearranging (splitting or merging) files on the Web site accordingly, subject to a constraint on available resources.
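The source-coding analogy can be made concrete with a toy calculation (the access probabilities and file sizes below are invented for illustration, not taken from the thesis): in source coding, the expected codeword length sum_i p_i * l_i is minimized at the entropy by l_i = -log2(p_i), while in Web layout design the analogous objective is the average size downloaded per request, which rewards serving popular content from small files.

```python
import math

def average_cost(p, lengths):
    """Expected cost sum_i p_i * l_i: expected codeword length in source
    coding, or average bytes downloaded per request in Web layout."""
    return sum(pi * li for pi, li in zip(p, lengths))

# Hypothetical access probabilities for four content items.
p = [0.5, 0.25, 0.125, 0.125]

# Source-coding benchmark: optimal lengths l_i = -log2(p_i) achieve
# an expected length equal to the entropy H(p).
entropy = -sum(pi * math.log2(pi) for pi in p)
code_lengths = [-math.log2(pi) for pi in p]
assert abs(average_cost(p, code_lengths) - entropy) < 1e-12

# Web-layout version: one monolithic file forces every user to download
# everything, while splitting keeps the popular items cheap to fetch.
total = 8                                    # total content size (say, KB)
monolithic = average_cost(p, [total] * 4)    # everyone downloads all 8 KB
split = average_cost(p, [1, 2, 2, 3])        # popular item in smallest file
```

The split layout's average download cost (1.625) is far below the monolithic one (8), mirroring how variable-length codes beat fixed-length ones; the thesis shows that carrying this tradeoff through realistic layout constraints yields power-law file-size distributions rather than the exponential-type distributions of classical coding.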
A heuristic optimization on random graphs is formulated, with user navigation modeled as Markov chains. Simulations on different classes of graphs, as well as more realistic models with simple geometries in individual Web pages, all produce power-law tails in the resulting size distributions of the files transferred from the Web sites. This again supports our conjecture that heavy-tailed distributions result naturally from the tradeoff between the design objective and limited resources, and suggests a methodology for aiding in the design of high-throughput Web sites.
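The Markov-chain user model can be sketched as follows (the three-page site graph, transition probabilities, and file sizes are all hypothetical, and this is not the thesis's actual simulation model): a simulated user walks the site graph according to a transition matrix, and the sizes of the transferred files are recorded for later analysis of their distribution.

```python
import random

def simulate_downloads(P, sizes, start=0, steps=1000, seed=0):
    """Simulate one Markov-chain user session on a site graph.

    P[i] maps successor page -> transition probability (each row sums
    to 1); sizes[i] is the size of file i.  Returns the sizes of all
    files transferred during the session.
    """
    rng = random.Random(seed)
    page = start
    transferred = [sizes[page]]
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, prob in P[page].items():   # inverse-CDF step on the row
            acc += prob
            if r < acc:
                break
        page = nxt                          # falls back to last key on
        transferred.append(sizes[page])     # float rounding shortfall
    return transferred

# Hypothetical 3-page site: index links to two leaves, leaves link back.
P = {0: {1: 0.7, 2: 0.3}, 1: {0: 1.0}, 2: {0: 1.0}}
sizes = [2, 10, 50]                         # file sizes, say in KB
traffic = simulate_downloads(P, sizes, steps=2000)
mean_kb = sum(traffic) / len(traffic)
```

In the thesis's setting, the site layout itself (which content is split or merged into which files) is what gets optimized against such simulated access patterns; the empirical size distribution collected in `traffic` is the quantity whose tail behavior is examined.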
|Item Type:||Thesis (Dissertation (Ph.D.))|
|Degree Grantor:||California Institute of Technology|
|Division:||Engineering and Applied Science|
|Major Option:||Electrical Engineering|
|Thesis Availability:||Restricted to Caltech community only|
|Defense Date:||16 May 2000|
|Default Usage Policy:||No commercial reproduction, distribution, display or performance rights in this work are provided.|
|Deposited By:||Imported from ETD-db|
|Deposited On:||19 May 2006|
|Last Modified:||01 Aug 2014 17:17|