Coffey, John Timothy (1989) On complexity and efficiency in encoding and decoding error-correcting codes. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-06102005-114154
A central paradox of coding theory has been noted for many years, and concerns the existence and construction of the best codes. Virtually every linear code is "good" in the sense that it meets the Gilbert-Varshamov bound on distance versus redundancy. Despite the sophisticated constructions for codes derived over the years, however, no one has succeeded in demonstrating a constructive procedure which yields such codes over arbitrary symbol fields. A quarter of a century ago, Wozencraft and Reiffen, in discussing this problem, stated that "we are tempted to infer that any code of which we cannot think is good."

Using the theory of Kolmogorov complexity, we show the remarkable fact that this statement holds true in a rigorous mathematical sense: any linear code which is truly random, in the sense that there is no concise way of specifying the code, is good. Furthermore, random selection of a code which does contain some constructive pattern results, with probability bounded away from zero, in a code which does not meet the Gilbert-Varshamov bound regardless of the block length of the code.

In contrast to the situation for linear codes, we show that there are effectively random non-linear codes which have no guarantee on distance, and that over all rates, the average non-linear code has much lower distance than the average linear code. These techniques are used to derive original results on the performance of various classes of codes, including shortened cyclic, generalized Reed-Solomon, and general non-linear codes, under a variety of decoding strategies involving mixed burst- and random-error correction.

The second part of the thesis deals with the problem of finding decoding algorithms for general linear codes. These algorithms are capable of full hard decision decoding or bounded soft decision decoding, and do not rely on any rare structure for their effectiveness.
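The Gilbert-Varshamov bound referred to above can be evaluated numerically. As a minimal sketch (the function name is ours, not from the thesis): for a q-ary linear code of length n and minimum distance d, the bound guarantees a code exists whenever the volume of a Hamming ball of radius d-2 in n-1 coordinates is less than q^r, so the smallest such redundancy r can be found by direct search.

```python
from math import comb

def gv_redundancy(n: int, d: int, q: int = 2) -> int:
    """Smallest redundancy r for which the Gilbert-Varshamov bound
    guarantees an [n, n-r] q-ary linear code of minimum distance >= d,
    i.e. the least r with  sum_{i=0}^{d-2} C(n-1, i) (q-1)^i < q^r.
    """
    vol = sum(comb(n - 1, i) * (q - 1) ** i for i in range(d - 1))
    r = 0
    while q ** r <= vol:
        r += 1
    return r

# e.g. gv_redundancy(7, 3) == 3: a binary [7, 4] code of distance 3
# (the Hamming code) is guaranteed to exist.
```

The bound is an existence statement, which is precisely the paradox: random linear codes meet it, yet no known constructive procedure achieves it over arbitrary symbol fields.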
After a brief discussion of some aspects of the theory of NP-completeness as it relates to coding theory, we propose a simple model of a general decoding algorithm which is sufficiently powerful to be able to describe most of the known approaches to the problem. We provide asymptotic analysis of the complexity of various approaches to the problem under various decoding strategies (full hard decision decoding and bounded hard- and soft-decision decoding) and show that a generalization of information set decoding gives more efficient algorithms than any other approach known. Finally, we propose a new type of algorithm that synthesizes some of the advantages of information set decoding and other algorithms that exploit the weight structure of the code, such as the zero neighbours algorithm, and discuss its effectiveness.
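Basic information set decoding, the starting point of the generalization above, admits a compact illustration. The following sketch (binary field, plain random information sets; function names are ours and this is not the thesis's algorithm itself) repeatedly picks k coordinates, inverts the generator matrix restricted to those columns over GF(2), finds the message that agrees with the received word there, re-encodes, and keeps the closest codeword found.

```python
import random

def gf2_inv(M):
    """Invert a k x k binary matrix over GF(2); return None if singular."""
    k = len(M)
    A = [list(M[i]) + [int(i == j) for j in range(k)] for i in range(k)]
    for col in range(k):
        piv = next((r for r in range(col, k) if A[r][col]), None)
        if piv is None:
            return None
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
    return [row[k:] for row in A]

def isd_decode(G, y, trials=200, rng=random):
    """Decode y to the closest codeword found over `trials` random
    information sets of the k x n binary generator matrix G."""
    k, n = len(G), len(G[0])
    best, best_dist = None, n + 1
    for _ in range(trials):
        idx = rng.sample(range(n), k)        # candidate information set
        Ainv = gf2_inv([[G[r][c] for c in idx] for r in range(k)])
        if Ainv is None:                     # columns not independent; retry
            continue
        # message that agrees with y on the information set
        m = [sum(y[idx[i]] * Ainv[i][j] for i in range(k)) % 2
             for j in range(k)]
        cand = [sum(m[i] * G[i][c] for i in range(k)) % 2 for c in range(n)]
        dist = sum(a != b for a, b in zip(cand, y))
        if dist < best_dist:
            best, best_dist = cand, dist
    return best
```

With the generator matrix of the [7, 4, 3] Hamming code, for example, any single bit error is corrected, since an information set avoiding the error position reproduces the transmitted codeword exactly. The trade-off the thesis analyzes asymptotically is between the number of trials needed and the cost per trial.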
|Item Type:||Thesis (Dissertation (Ph.D.))|
|Degree Grantor:||California Institute of Technology|
|Division:||Engineering and Applied Science|
|Major Option:||Electrical Engineering|
|Thesis Availability:||Restricted to Caltech community only|
|Defense Date:||18 May 1989|
|Default Usage Policy:||No commercial reproduction, distribution, display or performance rights in this work are provided.|
|Deposited By:||Imported from ETD-db|
|Deposited On:||13 Jun 2005|
|Last Modified:||26 Dec 2012 02:52|